4+ years of experience, predominantly in data architecture and data warehousing / data lakes on AWS, Python, Spark, and Redshift
Proven hands-on experience with Spark and Python is a must.
2+ years of experience developing applications using Python, data warehousing / data lakes,
consumer APIs, and real-time data pipelines/streaming
Cloud-native development experience (AWS) is mandatory
Experience working with integration platforms, data warehouses, data lakes, and ETL/ELT loads.
Must be strong in coding in Java, Scala, or Python, with experience integrating different source and target systems such as Salesforce, BB CRM, and RDBMSs (Oracle, Postgres, MySQL, SQL Server)
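As an illustration of the source-to-target integration work described above, here is a minimal ETL sketch in Python. It uses in-memory sqlite3 databases as stand-ins for the real source and target RDBMSs, and the table and column names are hypothetical; a production job would instead connect to the actual systems via their drivers or APIs.

```python
import sqlite3

def extract_transform_load(source_conn, target_conn):
    # Extract: read rows from a hypothetical source table
    rows = source_conn.execute("SELECT id, email FROM customers").fetchall()

    # Transform: normalize email addresses to lowercase
    transformed = [(cid, email.lower()) for cid, email in rows]

    # Load: upsert into the hypothetical target table
    target_conn.executemany(
        "INSERT OR REPLACE INTO customers_clean (id, email) VALUES (?, ?)",
        transformed,
    )
    target_conn.commit()
    return len(transformed)

# Demo: in-memory databases standing in for real source/target systems
source = sqlite3.connect(":memory:")
source.execute("CREATE TABLE customers (id INTEGER, email TEXT)")
source.executemany(
    "INSERT INTO customers VALUES (?, ?)",
    [(1, "Alice@Example.com"), (2, "BOB@example.com")],
)

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE customers_clean (id INTEGER PRIMARY KEY, email TEXT)")

loaded = extract_transform_load(source, target)
print(loaded)  # number of rows loaded
```

The same extract/transform/load shape applies whether the source is a CRM export or a relational table; only the connection and query layers change.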
Extract data from source systems via APIs, web services, or binlog files using AWS Glue with PySpark or Spark with Scala.
Must have experience retrieving data from REST and SOAP APIs
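A minimal sketch of parsing responses from the two API styles mentioned above, using only the Python standard library. The JSON payload and SOAP envelope shown are made-up examples; in practice the bodies would arrive over HTTP via a client such as requests (REST) or zeep (SOAP).

```python
import json
import xml.etree.ElementTree as ET

# Hypothetical REST response body (JSON)
rest_body = '{"orders": [{"id": 101, "total": 25.5}, {"id": 102, "total": 40.0}]}'
orders = json.loads(rest_body)["orders"]
rest_total = sum(o["total"] for o in orders)

# Hypothetical SOAP response body (XML envelope)
soap_body = """<?xml version="1.0"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <GetOrderResponse>
      <OrderId>101</OrderId>
      <Total>25.5</Total>
    </GetOrderResponse>
  </soap:Body>
</soap:Envelope>"""

# SOAP payloads are namespaced XML, so resolve the envelope namespace
root = ET.fromstring(soap_body)
ns = {"soap": "http://schemas.xmlsoap.org/soap/envelope/"}
response = root.find("soap:Body", ns).find("GetOrderResponse")
soap_order_id = response.find("OrderId").text
soap_total = float(response.find("Total").text)

print(rest_total, soap_order_id, soap_total)
```

The contrast is the point: REST responses typically deserialize directly into native structures, while SOAP responses require navigating a namespaced XML envelope to reach the payload.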