Closed
Labels
enhancement (New feature or request)
Description
Background
Hi, thank you for making this wonderful package.
Currently, Cobrix provides the maxLength (String), minElements (Array), and maxElements (Array) metadata in the DataFrame schema.
Feature
For ASCII files, adding the information below would help PySpark users with debugging. We see that Scala/Java users could obtain this information directly from the COBOL converter/parser, but we only know Python/PySpark.
- length, decimal precision, and decimal scale for all primitive column types (decimal, integer, long, float, etc.)
- start position and end position
- redefines, assumed scale, occurs_depends_on, etc.
- any other metadata already available in the COBOL converter
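To make the request concrete, here is a minimal sketch of what the requested per-column metadata could look like to a PySpark user. The key names (`precision`, `scale`, `offset`, `byte_size`) are illustrative assumptions, not Cobrix's actual metadata keys; only `maxLength` exists today. The metadata is modeled as plain dicts, which is also how PySpark exposes it via `StructField.metadata`.

```python
# Hypothetical sketch: richer per-column metadata in the DataFrame schema,
# modeled as plain dicts. Key names below are illustrative, not Cobrix's
# actual metadata keys (only maxLength exists today).
schema_metadata = {
    "ACCOUNT_BALANCE": {
        "maxLength": 9,   # already exposed by Cobrix today
        "precision": 9,   # requested: decimal precision
        "scale": 2,       # requested: decimal scale
        "offset": 10,     # requested: start position in the record
        "byte_size": 9,   # requested: field length in bytes
    },
}

def describe(name, meta):
    """Render one column's metadata the way a PySpark user might inspect it."""
    return f"{name}: " + ", ".join(f"{k}={v}" for k, v in sorted(meta.items()))

for col, meta in schema_metadata.items():
    print(describe(col, meta))
```

In a real session, a PySpark user would reach this the same way the existing `maxLength` metadata is reached today, e.g. `df.schema["ACCOUNT_BALANCE"].metadata`, which returns such a dict.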
Example [Optional]
A simple example if applicable.
Proposed Solution [Optional]
Solution Ideas
- add the existing converter/parser metadata to the DataFrame schema metadata