Job Description:
We are seeking an experienced and driven AWS Data Engineer to join our dynamic team. In this role, you will design and build data pipelines that curate sourced data into our Redshift data warehouse, maintain data platforms using AWS tools, and help improve data quality and reliability. You will also develop analytical tools to support Business Intelligence (BI) and Advanced Data Analytics, and ensure adherence to data engineering best practices and standards.
Key Responsibilities:
Design, develop, and maintain robust data pipelines to curate and integrate sourced data into the Redshift data warehouse.
Build and manage data platforms using tools such as Apache Airflow, AWS Glue, and other AWS services.
Continuously explore methods to improve data quality, consistency, and reliability.
Develop tools and solutions to support BI and Advanced Data Analytics activities.
Champion data engineering best practices and promote the adoption of industry standards within the team.
Requirements:
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
1-3 years of hands-on experience in data engineering with AWS services (S3, EMR, RDS, DynamoDB, Redshift, etc.).
Strong SQL skills and experience in data manipulation and querying.
Proficiency in at least one programming language, such as Python or Java.
Experience in designing and maintaining ETL/ELT frameworks, data modeling, and data warehousing.
Familiarity with BI tools and concepts.
Hands-on experience with creating and managing data warehouses and databases (e.g., Snowflake, PostgreSQL, MySQL, MS SQL, Redshift).
Understanding of web protocols (HTTP, FTP) and web services (REST APIs).
Strong critical-thinking, problem-solving, and analytical skills.
Excellent communication skills and a collaborative, team-oriented mindset.