Data Engineer
Experience:
Salary:
Preferred Notice Period:
Opportunity Type:
Placement Type:
(*Note: This is a requirement for one of Uplers' Clients)
Must-have skills required:
Living Things (one of Uplers' clients) is looking for:
Role Overview Description
About Us:
About the Role:
We are seeking a highly skilled and motivated Data Engineer to join our growing data team. You will play a critical role in designing, building, and maintaining our data infrastructure, enabling data-driven decision-making across the organization.
Job Responsibilities:
- Manage and optimize relational (PostgreSQL, MySQL) and NoSQL (MongoDB) databases, including performance tuning and schema evolution management.
- Leverage cloud platforms (AWS, Azure, GCP) for data storage, processing, and analysis, with a focus on optimizing cost, performance, and scalability using cloud-native services.
- Design, build, and maintain robust, scalable, and fault-tolerant data pipelines using modern orchestration tools (Apache Airflow, Apache Flink, Dagster).
- Implement and manage real-time data streaming solutions (Apache Kafka, Kinesis, Pub/Sub).
- Design data models that support efficient querying for analytical purposes in BI tools (Metabase, Power BI, Looker, QuickSight).
- Collaborate closely with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical data solutions.
- Stay updated on the latest data engineering technologies and best practices, and advocate for their adoption where appropriate.
- Contribute to the development and improvement of data infrastructure and processes, including embracing DataOps principles for automation and collaboration.
- Work with containerization (e.g., Docker) and orchestration tools (e.g., Kubernetes) for deploying and managing data services.
- Implement data governance policies and practices, including data lineage and metadata management.
Skills and Qualifications:
Essential:
- Strong proficiency in Python and SQL, plus hands-on experience with MongoDB.
- Experience with relational databases (PostgreSQL, MySQL) and NoSQL databases (MongoDB). Understanding of database internals, indexing, and query optimization.
- Knowledge of data modeling, data warehousing principles, and ETL/ELT methodologies.
- Proficiency with cloud platforms (AWS, Azure, GCP), including data storage (S3, ADLS Gen2, GCS), data warehousing services (e.g., Redshift, Snowflake, BigQuery), and managed services for data processing (AWS Glue, Azure Data Factory, Google Cloud Dataflow).
- Experience with data quality and validation techniques and implementing automated data quality frameworks.
- Strong analytical and problem-solving abilities. Ability to troubleshoot complex data pipeline issues.
- Experience with BI tools (Metabase, Power BI, Looker, QuickSight) from a data provisioning perspective.
Preferred:
- Experience with Data Lake, Data Lakehouse, or Data Mesh architectures.
- Hands-on experience with data processing frameworks like Apache Spark, Apache Kafka, and stream processing technologies (Spark Streaming, Flink).
- Experience with workflow orchestration tools like Apache Airflow and Dagster.
- Understanding of DataOps and MLOps concepts and practices.
- Experience with data observability and monitoring tools.
- Excellent communication and presentation skills.
How to apply for this opportunity:
Easy 3-Step Process:
2. Upload updated Resume & Complete the Screening Form
About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role is to help our talent find and apply for relevant product and engineering job opportunities and progress in their careers.
(Note: There are many more opportunities apart from this on the portal.)
So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!