Hello everybody, I'm trying to build a little CLI tool for the members of our research institute to work with our local S3 storage (a custom storage provider). So far, most things went well, but I discovered a suspicious error when trying to upload larger files via multipart upload. I'm using the `aws-sdk-s3` crate, and the important code fragment is the following (unnecessary parts cut off), based on the multipart upload example from this repository:

```rust
use std::ffi::OsStr;
use std::path::PathBuf;

use anyhow::anyhow;
use aws_sdk_s3::primitives::ByteStream;
use aws_sdk_s3::types::{
    ChecksumAlgorithm, ChecksumType, CompletedMultipartUpload, CompletedPart,
};
use aws_smithy_types::byte_stream::Length;

// CopyMoveOptions, S3Config, S3String, CHUNK_SIZE, build_object_key,
// calc_crc64_checksum, operation_kind, and the color constants/helpers
// are defined elsewhere in the tool (cut off here).
async fn put_local_to_s3(
    args: CopyMoveOptions,
    cfg: S3Config,
    from: PathBuf,
    to: S3String,
    file_size: u64,
    is_move: bool,
    first_input_arg: &OsStr,
) -> anyhow::Result<()> {
    let file = from.to_string_lossy().into_owned();
    let bucket = to.s3bucket().unwrap().to_owned();
    let object_key = build_object_key(args.recursive, to.s3object(), &from, first_input_arg);
    if file_size > CHUNK_SIZE {
        let checksum = calc_crc64_checksum(&from);
        // One partial chunk on top of the full-sized ones, e.g. a 25 MiB file
        // with CHUNK_SIZE = 10 MiB gives chunk_count = 3, size_of_last_chunk = 5 MiB.
        let mut chunk_count = (file_size / CHUNK_SIZE) + 1;
        let mut size_of_last_chunk = file_size % CHUNK_SIZE;
        // If the size is an exact multiple of CHUNK_SIZE, there is no partial
        // chunk; the last chunk is a full one.
        if size_of_last_chunk == 0 {
            size_of_last_chunk = CHUNK_SIZE;
            chunk_count -= 1;
        }
        // Create the upload, announcing a full-object CRC64-NVME checksum.
        let mp_upload = cfg
            .client()
            .create_multipart_upload()
            .set_bucket(to.s3bucket().map(|str| str.to_owned()))
            .key(&object_key)
            .checksum_algorithm(ChecksumAlgorithm::Crc64Nvme)
            .checksum_type(ChecksumType::FullObject)
            .send()
            .await?;
        let upload_id = mp_upload
            .upload_id()
            .ok_or(anyhow!("Couldn't get upload ID"))?;
        let mut upload_parts: Vec<aws_sdk_s3::types::CompletedPart> = Vec::new();
        for chunk_index in 0..chunk_count {
            let this_chunk = if chunk_count - 1 == chunk_index {
                size_of_last_chunk
            } else {
                CHUNK_SIZE
            };
            // Stream only this chunk's byte range out of the local file.
            let stream = ByteStream::read_from()
                .path(&from)
                .offset(chunk_index * CHUNK_SIZE)
                .length(Length::Exact(this_chunk))
                .build()
                .await?;
            // Chunk index needs to start at 0, but part numbers start at 1.
            let part_number = (chunk_index as i32) + 1;
            let upload_part_res = cfg
                .client()
                .upload_part()
                .key(&object_key)
                .set_bucket(to.s3bucket().map(|str| str.to_owned()))
                .body(stream)
                .part_number(part_number)
                .checksum_algorithm(ChecksumAlgorithm::Crc64Nvme)
                .checksum_crc64_nvme(&checksum)
                .upload_id(upload_id);
            let resp = upload_part_res.send().await?;
            upload_parts.push(
                CompletedPart::builder()
                    .e_tag(resp.e_tag.unwrap_or_default())
                    .part_number(part_number)
                    .build(),
            );
        }
        let completed_multipart_upload: CompletedMultipartUpload =
            CompletedMultipartUpload::builder()
                .set_parts(Some(upload_parts))
                .build();
        let complete_multipart_upload_res = cfg
            .client()
            .complete_multipart_upload()
            .set_bucket(to.s3bucket().map(|str| str.to_owned()))
            .key(&object_key)
            .multipart_upload(completed_multipart_upload)
            .upload_id(upload_id)
            .checksum_crc64_nvme(&checksum)
            .send()
            .await;
        match complete_multipart_upload_res {
            Ok(resp) => {
                println!(
                    "Successfully {} {} to bucket {} with multipart upload",
                    operation_kind.color(SUCCESS_COLOR),
                    from.file_name()
                        .unwrap_or(OsStr::new("unknown"))
                        .display()
                        .to_string()
                        .color(LOCAL_COLOR)
                        .bold(),
                    to.s3bucket().unwrap().color(BUCKET_COLOR).bold()
                );
                if let Some(id) = resp.version_id() {
                    println!("Version ID: {}", id.to_string().bright_yellow().bold());
                }
                if is_move {
                    tokio::fs::remove_file(&from).await?;
                }
                Ok(())
            }
            Err(e) => Err(e.into_service_error().into()),
        }
    } else {
        let stream = ByteStream::read_from().path(&from).build().await?;
        let checksum = calc_crc64_checksum(&from);
        let resp = cfg
            .client()
            .put_object()
            .set_bucket(to.s3bucket().map(|str| str.to_owned()))
            .key(&object_key)
            .checksum_algorithm(ChecksumAlgorithm::Crc64Nvme)
            .checksum_crc64_nvme(checksum)
            .set_expires(args.expires)
            .body(stream)
            .send()
            .await;
        match resp {
            Ok(resp) => {
                println!(
                    "Successfully {} {} to bucket {}",
                    operation_kind.color(SUCCESS_COLOR),
                    from.file_name()
                        .unwrap_or(OsStr::new("unknown"))
                        .display()
                        .to_string()
                        .color(LOCAL_COLOR)
                        .bold(),
                    to.s3bucket().unwrap().color(BUCKET_COLOR).bold()
                );
                if let Some(id) = resp.version_id() {
                    println!("Version ID: {}", id.to_string().yellow());
                }
                if is_move {
                    tokio::fs::remove_file(&from).await?;
                }
                Ok(())
            }
            Err(e) => Err(e.into_service_error().into()),
        }
    }
}
```

I tried some different approaches, but altogether I can't figure out what the concrete problem is, especially since the error message somehow mentions Sha256, which does not appear anywhere in this code chunk... Since the SDK is rather complex, I'm happy for any hints. Thanks
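Edit: one thing I'm aware of but left out above: when an `UploadPart` call fails, I currently just bail out, which leaves the multipart upload open on the server side. The cleanup I have in mind looks roughly like this (a sketch reusing `cfg`, `to`, `object_key`, and `upload_id` from the function above):

```rust
// Sketch: abort the open multipart upload so the provider can discard
// the already-uploaded parts. Uses the same values as in the code above.
cfg.client()
    .abort_multipart_upload()
    .set_bucket(to.s3bucket().map(|str| str.to_owned()))
    .key(&object_key)
    .upload_id(upload_id)
    .send()
    .await?;
```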
---
Does this code work against the actual AWS S3 service? Which custom provider is this BTW?
---
Hi, thanks for the response. I know the SDK is only guaranteed to work with AWS itself; I was just curious whether this error message is a known one.
I was able to test the tool against a Dell ECS system, where the multipart upload works as expected (although the ECS needs to use path-style addressing, while the Huawei OceanStor uses virtual-hosted style). Therefore, it seems to be a Huawei-related problem (the incomplete sentence in the error message could be another hint)...
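In case it helps anyone else talking to non-AWS providers, these are the two client knobs I'm experimenting with per endpoint (a sketch, assuming a recent `aws-sdk-s3` that exposes `RequestChecksumCalculation`; `base_cfg` and the endpoint URL are placeholders):

```rust
use aws_sdk_s3::config::RequestChecksumCalculation;

// Sketch: per-endpoint tweaks for third-party S3 stores. `base_cfg` stands
// for the already-loaded SdkConfig; the endpoint URL is a placeholder.
let s3_cfg = aws_sdk_s3::config::Builder::from(&base_cfg)
    .endpoint_url("https://s3.example.internal") // placeholder endpoint
    .force_path_style(true) // e.g. the Dell ECS wants path-style addressing
    // Only compute request checksums when an operation requires them; the
    // SDK's default integrity protections can confuse some non-AWS providers.
    .request_checksum_calculation(RequestChecksumCalculation::WhenRequired)
    .build();
let client = aws_sdk_s3::Client::from_conf(s3_cfg);
```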