
457 ETL Pipelines Jobs

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 8.0 years

10 - 18 Lacs

Chandigarh

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms such as AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience handling large-scale datasets is preferred.

Mandatory Key Skills: Data analytics, ETL, SQL, Python, Google BigQuery, AWS Redshift, Data architecture
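For illustration only (not part of the listing): a minimal sketch of the kind of batch ETL step such roles describe, transforming a CSV extract with pandas, staging it to S3, and loading it into Redshift via COPY. All file paths, bucket, table, and connection values are hypothetical placeholders.

```python
import boto3
import pandas as pd
import psycopg2  # assumes: pip install boto3 pandas psycopg2-binary

# --- Transform: clean a raw CSV extract (placeholder file and columns) ---
df = pd.read_csv("raw_orders.csv")
df = df.dropna(subset=["order_id", "amount"])            # drop incomplete rows
df["amount"] = df["amount"].astype(float)
daily = df.groupby("order_date", as_index=False)["amount"].sum()
daily.to_csv("/tmp/daily_orders.csv", index=False)

# --- Stage: upload the transformed file to S3 (hypothetical bucket) ---
boto3.client("s3").upload_file("/tmp/daily_orders.csv",
                               "my-analytics-bucket", "staging/daily_orders.csv")

# --- Load: COPY the staged file into Redshift (placeholder credentials/role) ---
conn = psycopg2.connect(host="example-cluster.redshift.amazonaws.com",
                        port=5439, dbname="analytics",
                        user="etl_user", password="***")
with conn, conn.cursor() as cur:                          # commits on success
    cur.execute("""
        COPY analytics.daily_orders
        FROM 's3://my-analytics-bucket/staging/daily_orders.csv'
        IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole'
        CSV IGNOREHEADER 1;
    """)
conn.close()
```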

Posted 18 hours ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune, Bengaluru

Hybrid

Position: API & Data Integration Engineer
Experience: 5-8 Years
Location: Gurgaon / Noida / Pune / Bangalore
Type: Full-Time
Team: Advanced Intelligence Work Group
Contact: 9258253740

About the Role: We are seeking an experienced API & Data Integration Engineer to design, build, and maintain backend integrations across internal systems, third-party APIs, and AI copilots. The role involves API development, data pipeline engineering, no-code automation, and cloud-based architecture with a focus on scalability, security, and compliance.

Key Responsibilities:
- Build and maintain RESTful APIs using FastAPI
- Integrate third-party services (CRMs, SaaS tools)
- Develop and manage ETL/ELT pipelines for real-time and batch data flows
- Automate workflows using tools like n8n, Zapier, Make.com
- Work with AI/ML teams to ensure clean, accessible data
- Ensure API performance, monitoring, and security
- Maintain integration documentation and ensure GDPR/CCPA compliance

Must-Have Qualifications:
- 5-8 years in API development, backend, or data integration
- Strong in Python, with scripting knowledge in JavaScript or Go
- Experience with PostgreSQL, MongoDB, DynamoDB
- Hands-on with AWS/Azure/GCP and serverless tools (Lambda, API Gateway)
- Familiarity with OAuth2, JWT, SAML
- Proven experience in building and managing data pipelines
- Comfort with no-code/low-code platforms like n8n and Zapier

Nice to Have:
- Experience with Kafka, RabbitMQ, Kong, Apigee
- Familiarity with monitoring tools like Datadog, Postman
- Cloud or integration certifications

Tech Stack (Mandate): FastAPI, OpenAPI/Swagger, Python, JavaScript or Go, PostgreSQL, MongoDB, DynamoDB, AWS/Azure/GCP, Lambda, API Gateway, ETL/ELT pipelines, n8n, Zapier, Make.com, OAuth2, JWT, SAML, GDPR, CCPA

What We Value:
- Strong problem-solving and debugging skills
- Cross-functional collaboration with Data, Product, and AI teams
- Get Stuff Done attitude
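For illustration only: a minimal FastAPI integration sketch of the pattern this role describes, receiving a CRM webhook, validating the payload, and forwarding it to a downstream internal API. Endpoint names, URLs, and tokens are hypothetical placeholders (pydantic v2 assumed).

```python
import os

import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()
DOWNSTREAM_URL = "https://internal.example.com/api/contacts"  # hypothetical target

class ContactEvent(BaseModel):
    contact_id: str
    email: str
    event_type: str

@app.post("/webhooks/crm")
async def crm_webhook(event: ContactEvent):
    """Validate an incoming CRM event and forward it to the internal service."""
    headers = {"Authorization": f"Bearer {os.environ.get('INTERNAL_API_TOKEN', '')}"}
    async with httpx.AsyncClient(timeout=10) as client:
        resp = await client.post(DOWNSTREAM_URL, json=event.model_dump(),  # pydantic v2
                                 headers=headers)
    if resp.status_code >= 400:
        raise HTTPException(status_code=502, detail="Downstream integration failed")
    return {"status": "forwarded", "contact_id": event.contact_id}
```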

Posted 20 hours ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Varanasi

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms such as AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience handling large-scale datasets is preferred.

Mandatory Key Skills: ETL pipelines, data warehouses, SQL, Python, AWS Redshift, Google BigQuery, ETL

Posted 20 hours ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Coimbatore

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms such as AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience handling large-scale datasets is preferred.

Mandatory Key Skills: Python, AWS Redshift, Google BigQuery, ETL pipelines, data warehousing, data architectures, SQL

Posted 20 hours ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Mysuru

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms such as AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience handling large-scale datasets is preferred.

Mandatory Key Skills: Data analytics, ETL, SQL, Python, Google BigQuery, AWS Redshift, Data architecture

Posted 20 hours ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Kanpur

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms such as AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience handling large-scale datasets is preferred.

Mandatory Key Skills: Python, data warehousing, ETL, Amazon Redshift, BigQuery, data engineering, data architecture, AWS, machine learning, data flow, ETL pipelines, real-time data processing, Java, Spring Boot, microservices, Spark, Kafka, Cassandra, Scala, NoSQL, MongoDB, REST, Redis, SQL

Posted 21 hours ago

Apply

3.0 - 8.0 years

10 - 18 Lacs

Nagpur

Work from Office

Design and implement scalable data architectures to optimize data flow and analytics capabilities. Develop ETL pipelines, data warehouses, and real-time data processing systems. Must have expertise in SQL, Python, and cloud data platforms such as AWS Redshift or Google BigQuery. Work closely with data scientists to enhance machine learning models with structured and unstructured data. Prior experience handling large-scale datasets is preferred.

Mandatory Key Skills: SQL, Python, data warehousing, ETL, Amazon Redshift, BigQuery, data engineering, AWS, machine learning, real-time data processing, Java, Spring Boot, microservices, Spark, Kafka, Cassandra, Scala, NoSQL, MongoDB, Redis, data architecture

Posted 21 hours ago

Apply

2.0 - 7.0 years

9 - 15 Lacs

Bengaluru

Hybrid

Urgent hiring at Genpact for a Senior Data Engineer.

Location: Bangalore, Genpact office, Prestige Park
Shift Timings: 12 PM to 10 PM IST
Work Mode: Hybrid, permanent role
Notice Period: Immediate to 30 days

Title: Senior Data Engineer - AWS | ETL | SQL | Python

Job Description: We are looking for a Senior Data Engineer (2+ years) with strong expertise in AWS cloud technologies, ETL pipelines, SQL optimization, and Python scripting. The role involves building and maintaining scalable data pipelines while leveraging AWS services to enable seamless data transformation and accessibility.

Key Responsibilities:
- Build and optimize ETL pipelines using AWS Step Functions and AWS Lambda.
- Develop scalable data workflows leveraging AWS tools and SQL.
- Write efficient SQL queries for large-scale datasets.
- Collaborate with cross-functional teams to meet business data needs.
- Integrate APIs to enrich and expand datasets.
- Deploy and manage containerized applications (Docker/Kubernetes).

Mandatory Skills:
- 2+ years of data engineering experience.
- Hands-on expertise in AWS, ETL, SQL, Python.
- Strong working knowledge of AWS Step Functions and AWS Lambda.

Preferred Skills: AWS Glue, Git/CI-CD, REST/SOAP APIs, Docker/Kubernetes, PySpark, Snowflake, Linux.
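For illustration only: a minimal sketch of the Step Functions-driven pattern this posting describes, with an AWS Lambda handler that reads a raw file from S3, filters it, and writes the result back for the next state. Bucket names and keys are hypothetical and assumed to arrive in the Step Functions input event.

```python
import csv
import io

import boto3

s3 = boto3.client("s3")

def handler(event, context):
    """One ETL step: drop rows with a missing amount and re-stage the file.

    `event` is assumed to be the Step Functions state input, e.g.
    {"bucket": "my-raw-bucket", "key": "orders/2025-09-01.csv"}  (hypothetical).
    """
    bucket, key = event["bucket"], event["key"]
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read().decode("utf-8")

    rows = list(csv.DictReader(io.StringIO(body)))
    if not rows:
        return {"bucket": bucket, "key": key, "rows_in": 0, "rows_out": 0}

    clean = [r for r in rows if r.get("amount")]   # keep only rows with an amount

    out = io.StringIO()
    writer = csv.DictWriter(out, fieldnames=rows[0].keys())
    writer.writeheader()
    writer.writerows(clean)

    out_key = key.replace("orders/", "orders_clean/")
    s3.put_object(Bucket=bucket, Key=out_key, Body=out.getvalue().encode("utf-8"))

    # The returned dict becomes the input of the next Step Functions state.
    return {"bucket": bucket, "key": out_key,
            "rows_in": len(rows), "rows_out": len(clean)}
```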

Posted 1 day ago

Apply

5.0 - 9.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior Technical Lead for gaming content at Aristocrat, your role will involve leading data validation and testing for the QA strategy in data engineering. You will be responsible for establishing and leading the QA strategy for the data engineering stack, including pipelines, transformations, reporting, and data quality checks.

Your key responsibilities will include:
- Designing and implementing test strategies for ETL pipelines, data transformations, and BI dashboards.
- Conducting manual and automated testing for data pipelines and reports.
- Validating Looker dashboards and reports for data correctness, layout integrity, and performance.
- Automating data validations using SQL and Python.
- Owning and developing the data QA roadmap, from manual testing practices to full automation and CI/CD integration.
- Maintaining test documentation, including test plans, cases, and defect logs (e.g., in Jira).

What We're Looking For:
- 5+ years in QA roles, with a focus on data engineering and reporting environments.
- Proficiency in SQL for querying and validating large datasets, and in Python.
- Experience testing Looker reports and dashboards (or similar tools like Tableau/Power BI).
- Strong problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment and communicate effectively.
- A proactive and determined approach to successfully implement QA strategies.

Joining Aristocrat means being part of a team that is committed to excellence, innovation, and spreading happiness to millions around the world. Aristocrat is a world leader in gaming content and technology, and a top-tier publisher of free-to-play mobile games. The company delivers great performance for its B2B customers and brings joy to the lives of millions of people who love to play their casino and mobile games. Aristocrat focuses on responsible gameplay, company governance, employee wellbeing, and sustainability.

Our Values:
- All about the Player
- Talent Unleashed
- Collective Brilliance
- Good Business, Good Citizen

Aristocrat offers a robust benefits package and global career opportunities. The company values diversity and encourages applications from individuals regardless of age, gender, race, ethnicity, cultural background, disability status, or LGBTQ+ identity. Aristocrat aims to create an environment where individual differences are valued and all employees have the opportunity to realize their potential.

Please note that travel expectations for this position are none. Depending on the nature of your role, you may be required to register with the Nevada Gaming Control Board (NGCB) and/or other gaming jurisdictions in which Aristocrat operates. At this time, the company is unable to sponsor work visas for this position, and candidates must be authorized to work in the job posting location on a full-time basis without the need for current or future visa sponsorship.
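For illustration only: a minimal sketch of the automated SQL/Python data validations this role describes, run here against a stand-in SQLite database; in practice the same checks would target the warehouse behind the Looker dashboards. Table and column names are hypothetical.

```python
import sqlite3

# Stand-in connection; a real suite would connect to the reporting warehouse.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE fact_sales (order_id TEXT, amount REAL, order_date TEXT);
    INSERT INTO fact_sales VALUES ('o1', 10.0, '2025-09-01'), ('o2', 5.5, '2025-09-01');
""")

def check(name, sql, expect_zero=True):
    """Run a validation query; pass when it returns 0 (or non-zero if expect_zero=False)."""
    value = conn.execute(sql).fetchone()[0]
    passed = (value == 0) if expect_zero else (value > 0)
    print(f"{'PASS' if passed else 'FAIL'}: {name} (value={value})")
    return passed

results = [
    check("no NULL order ids", "SELECT COUNT(*) FROM fact_sales WHERE order_id IS NULL"),
    check("no negative amounts", "SELECT COUNT(*) FROM fact_sales WHERE amount < 0"),
    check("no duplicate orders",
          "SELECT COUNT(*) FROM (SELECT order_id FROM fact_sales "
          "GROUP BY order_id HAVING COUNT(*) > 1)"),
    check("table is not empty", "SELECT COUNT(*) FROM fact_sales", expect_zero=False),
]
assert all(results), "One or more data validations failed"
```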

Posted 2 days ago

Apply

5.0 - 7.0 years

0 Lacs

India

On-site

The Oracle Cloud Infrastructure (OCI) Security and Compliance Platform Engineering organization presents a rare opportunity to contribute to the development of next-generation, AI-driven cybersecurity solutions at cloud scale. This effort centers on ingesting and processing massive volumes of telemetry and security event data across OCI, leveraging advanced techniques including generative AI (GenAI), large language models (LLMs), and machine learning (ML) to build intelligent detection, response, and mitigation systems. The goal is to deliver autonomous, adaptive security capabilities that protect OCI, Oracle, and our global customer base against evolving threat landscapes. We invite you to build high-scale, low-latency, distributed systems, including massive data pipelines and databases.

Responsibilities:
- Hands-on, seasoned engineer who can design and drive end-to-end engineering efforts (including design, development, test infrastructure, and operational excellence).
- Resolve complex technical issues and make design decisions to meet the critical requirements of scalable, highly available, secure multi-tenant services in the cloud.
- Mentor and guide junior members of the team on the technological front.
- Work closely with all stakeholders, including other technical leads, the Director, the Engineering Manager, architects, and product and program managers, to deliver product features on time and with high quality.
- Proactively identify and resolve risks and issues that may dent the team's ability to execute.
- Work with various external (application) teams on integration with the product and help guide the integration.
- Understand various cloud technologies in Oracle to help evolve the cloud provisioning and enablement process on a continuous basis.

Must-Have Skills:
- BS/MS degree or equivalent in a related technical field involving coding, or equivalent practical experience, with 5+ years of overall experience.
- Experience in building and designing microservices and/or cloud-native applications.
- Strong either on the databases front or on building big data systems (including ETL pipelines).
- A problem solver with a strong can-do attitude and the ability to think on the go, which is critical for success in this role.
- Strong fundamentals in OS, networks, distributed systems, and designing fault-tolerant, highly available systems.
- Strong in at least one modern programming language (Java, Kotlin, Python, C#), along with container experience (the likes of Docker/Kubernetes). Demonstrated ability to adapt to new technologies and learn quickly.
- Detail-oriented (a critical and considerate eye for detail), task-driven, with excellent communication skills.
- Organized and goal-focused, with the ability to deliver in a fast-paced environment with minimal supervision.
- Strong, creative problem-solving skills and the ability to abstract and share details to create meaningful articulation.

Preferred (Nice-to-Have) Skills:
- Experience with architectural patterns for high availability, performance, scale-out architecture, disaster recovery, and security architecture.
- Knowledge of cloud-based architectures and the deployment and operational aspects of cloud setup is a plus.
- Exposure to at least one cloud service provider (AWS/OCI/Azure/GCP etc.) would be a good advantage.
- Experience implementing container monitoring tools like Prometheus/Grafana, CI/CD pipelines (Jenkins, GitLab, etc.), and using/creating build tools (Gradle, Ant, Maven, or similar).

Career Level: IC3

Posted 3 days ago

Apply

6.0 - 12.0 years

12 Lacs

Hyderabad, Telangana, India

Remote

Essential Duties and Responsibilities (include the following; other duties may be assigned):

Data Modeling:
- Strong foundation in data modeling and reporting to ensure our data infrastructure supports efficient analytics and reporting operations.
- Collaborate with stakeholders and cross-functional teams to understand business requirements and define data structures and relationships.
- Design, develop, and maintain robust, scalable data models and schemas to support analytics and reporting requirements.

Data Integration:
- Integrate data from different sources, both internal and external, to create a unified and comprehensive view of the data.
- Work closely with cross-functional teams to understand data requirements and ensure successful integration.

ETL Development and Data Integrity:
- Develop, optimize, and maintain ETL/ELT processes to extract, load, and transform data from various sources into our Snowflake data platform.
- Implement data quality checks and validation routines within data pipelines to ensure the accuracy, consistency, and completeness of data.
- Interact and coordinate work with other technical and testing members of the team.
- Review and write code that meets set quality gates.

Performance Tuning:
- Optimize data infrastructure and enhance overall system performance.
- Optimize data pipelines and data processing workflows for performance, scalability, and efficiency.
- Optimize design efficiency to minimize data refresh lags, improve performance, and enable data as a service through reusable assets.

Technical Leadership:
- Drive the design, coding, and maintenance of data engineering standards.
- Troubleshoot and resolve issues related to data processing and storage.
- Coordinate with team members in finding the root cause of problems and resolving issues.
- Perform quality checks and user acceptance testing.

Cross-Functional Team Collaboration:
- Meet with the internal team to clarify and document reporting requirements.
- Collaborate to understand existing issues and new requirements.

Documentation:
- Create, update, and maintain technical documentation of processes and configuration.
- Develop and maintain documentation for data processes, pipelines, and models.

Experience Requirements:
- Minimum 6 years of experience in a technical role in data extracts, analysis, and reporting.
- 6+ years of advanced SQL development experience coupled with robust Python or PySpark programming capabilities. Extensive experience building and optimizing ETL pipelines, data modeling, and schema design.
- 4+ years of hands-on experience with cloud data warehousing technologies (Snowflake is preferred; BigQuery, AWS, Azure, SAP BW).
- Experience with SAP ERP systems (OTC, Finance, Master Data) is preferred.
- Certification in relevant areas (e.g., Snowflake, AWS Certified Data Analytics, Google Cloud Professional Data Engineer) is preferred.
- Strong dedication to code quality, automation, and operational excellence using CI/CD pipelines and unit/integration tests. Experience with GitHub, Jenkins, or Selenium.
- Proven experience with reporting and analytical tools like Power BI, Tableau, etc.
- Familiarity with streaming using Spark, Flink, Kafka, NoSQL, or relevant technologies.
- Continuously stay up to date on industry trends and advancements in data engineering and analytics.
- Able to work with little supervision while being accountable for individual and departmental results.
- Able to multi-task and meet deadlines under pressure.
- Solid understanding of data security and compliance requirements.
- Effective communication skills, conveying complex technical concepts to non-technical stakeholders.

Competencies (to perform the job successfully, an individual should demonstrate the following):
- Technical Expertise: Maintains current technical knowledge and best practices and actively builds new skills. Provides ideas to help solve business and technical problems for the organization through technical expertise. Able to work across multiple platforms and applications and see interconnections with some guidance.
- Action Orientation: Uses time effectively to complete tasks in a thorough and timely manner. Focuses on the most important items first. Maintains high levels of personal organization and works at a high level of efficiency and rapid pace. Seeks out challenges and initiates action on issues even when scope is unclear. Maintains composure in times of stress.
- Collaboration: Understands the importance of relationships to ensure team success. Builds positive relationships; uses tact in sensitive situations; listens well to various points of view; relates well to others at all levels. Speaks and writes in a clear and concise manner. Takes time to get to know others outside of his/her immediate functional area. Aligns personal goals to team and organizational goals.
- Communication: Establishes rapport and is straightforward and approachable. Listens carefully, asks pertinent questions, responds effectively, and adapts personal style to suit the audience. Actively gathers customer viewpoints and feedback to aid in decision-making. Speaks, writes, and presents in a clear and concise manner. Uses effective meeting facilitation tools and techniques to attain meeting objectives in a timely manner. Establishes and maintains positive long-term relationships with a diverse network of contacts, including internal and external customers.
- Customer Engagement: Engaging external customers and internal resources to achieve mutually beneficial outcomes in a way that provides an optimal experience for the customer.
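For illustration only: a minimal sketch of the kind of in-pipeline data quality check this posting describes, using the Snowflake Python connector to compare staging and target row counts and to flag null keys. Account, warehouse, database, and table names are hypothetical placeholders.

```python
import os

import snowflake.connector  # assumes: pip install snowflake-connector-python

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],   # placeholder, e.g. "xy12345.eu-west-1"
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ETL_WH", database="ANALYTICS", schema="CORE",
)

def scalar(sql):
    """Run a query and return the single value it produces."""
    with conn.cursor() as cur:
        cur.execute(sql)
        return cur.fetchone()[0]

staging_rows = scalar("SELECT COUNT(*) FROM STAGING.ORDERS_RAW")
target_rows = scalar("SELECT COUNT(*) FROM CORE.ORDERS")
null_keys = scalar("SELECT COUNT(*) FROM CORE.ORDERS WHERE ORDER_ID IS NULL")

# Fail the pipeline run when completeness or key-integrity checks break.
assert target_rows >= staging_rows * 0.99, "Load completeness check failed"
assert null_keys == 0, "Null ORDER_ID values found in target"
print(f"Quality checks passed: {target_rows} rows loaded, 0 null keys")
conn.close()
```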

Posted 3 days ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join us as we work to create a thriving ecosystem that delivers accessible, high-quality, and sustainable healthcare for all.

athenahealth is a progressive and innovative U.S. health-tech leader, delivering cloud-based solutions that improve clinical and financial performance across the care continuum. Our modern, open ecosystem connects care teams and delivers actionable insights that drive better outcomes. Acquired by Bain Capital in a $17B deal, we're growing fast and investing in bold, strategic product innovations. We foster a values-driven culture focused on flexibility, collaboration, and work-life balance. Headquartered in Boston, we have offices in Atlanta, Austin, Belfast, Burlington, and in India: Bangalore, Pune, and Chennai.

Position Summary: We are seeking a Product Analytics Manager (People Manager) to join our team and drive measurable improvements across athenahealth's Revenue Cycle Product & Operations division in Chennai. In this role, you will collaborate with leaders to enable data-driven decision-making at both strategic and tactical levels. You will be responsible for aligning analytical resources, including your own bandwidth, with the highest-priority deliverables. You and your team will play a critical role in shaping and evaluating impact by building strong stakeholder relationships, delivering high-quality data, applying advanced analytics, and generating actionable insights.

About you: You are a strategic problem-solver with a passion for turning data into action, and for developing others to do the same. You excel at aligning cross-functional teams around complex challenges and using data to uncover root causes, evaluate trade-offs, and prioritize high-impact solutions. As a people manager, you invest in the growth and development of your team by fostering analytical rigor, mentoring technical skill sets, and building a strong sense of ownership and accountability. You are highly proficient in working with large, complex datasets and bring expertise in SQL and other query languages, ETL pipelines, data modeling, and data visualization. With a sharp eye for detail and a strong analytical mindset, you thrive in ambiguity and are energized by solving novel problems and enabling innovation through data.

The Team: We are a diverse and collaborative group of athenistas who believe that data, used wisely, drives better products and greater value for our clients. We don't just analyze data; we tell impactful stories grounded in customer context and aligned with athenahealth's vision. Our culture thrives on curiosity, support, and continuous learning. With a wide range of technical and analytical skills, we challenge each other to grow and deliver our best work. We're excited about the future of AI and committed to using it to shape smarter, more human-centered healthcare.

Job Responsibilities:
- Build strong, trust-based partnerships with Product Line leaders, analytics peers, and executive stakeholders to align on goals and drive shared outcomes.
- Deliver scalable, production-grade analytics solutions, including self-serve dashboards and source-of-truth metrics, that enable data-driven decision-making across the organization.
- Own the execution of top-priority analytics projects, primarily focused on strategic initiatives within your domain.
- Cultivate a high-performing team culture rooted in continuous learning, mentorship, and career growth, both within your team and the broader AHI analytics community.
- Drive prioritization and resource alignment across cross-functional teams to ensure analytical efforts are focused on the highest-value product and operational opportunities.
- Translate complex data into strategic insights that inform product decisions, optimize operational workflows, and guide long-term roadmap planning.
- Champion the evolution of the analytics tech stack by partnering with analytics leadership to embed AI capabilities and accelerate the team's transition to an AI-powered future.

Typical Qualifications:
- Minimum 8+ years of professional experience in data analytics, including at least 2-3 years in people management or leadership roles.
- Bachelor's degree required; a degree in a quantitative field is preferred.
- Solid technical expertise in database technologies and hands-on experience in data querying and analysis.
- Strong ability to synthesize complex data and business challenges into clear, strategic solutions.
- Proven experience influencing cross-functional teams and communicating technical concepts to diverse audiences.
- Demonstrated people management skills with a focus on developing and leading high-performing analytics teams.

Posted 3 days ago

Apply

3.0 - 5.0 years

15 - 25 Lacs

Gurugram

Remote

Role & Responsibilities:
- Design, develop, and optimize data pipelines using PySpark and AWS services.
- Implement and manage data workflows, ETL processes, and schema validations.
- Ensure data quality, integrity, and consistency by applying validation frameworks (e.g., Deequ, Great Expectations).
- Work with data in different zones (raw, staging, curated) and implement partitioning, schema evolution, and governance best practices.
- Collaborate with analysts, data scientists, and cross-functional teams to deliver reliable and consumable datasets.
- Support data lineage, metadata management, and compliance requirements.

- Strong hands-on experience with PySpark and Python for ETL, data transformation, and automation.
- Proficiency in SQL (joins, window functions, aggregations).
- Experience with the AWS data stack (S3, Glue, Athena, Redshift, EMR, or similar).
- Knowledge of data quality frameworks (Deequ, Great Expectations) and data governance principles.
- Good understanding of data modeling, partitioning strategies, and schema enforcement.

Preferred Candidate Profile:
- Exposure to workflow orchestration tools (Airflow, Step Functions, or similar).
- Experience with BI/reporting integrations (QuickSight, Superset, or Metabase).
- Familiarity with real-time data ingestion (Kafka, Kinesis, MSK).
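For illustration only: a minimal PySpark sketch of the raw-to-curated pattern this posting describes, deduplicating records, enforcing a simple not-null check, and writing partitioned output. Paths, column names, and thresholds are hypothetical placeholders; a real pipeline might use Deequ or Great Expectations for the checks.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("raw_to_curated_orders").getOrCreate()

# Read from the raw zone (hypothetical S3 path).
raw = spark.read.parquet("s3://example-datalake/raw/orders/")

# Basic cleanup: drop duplicates and rows missing business keys.
deduped = raw.dropDuplicates(["order_id"])
clean = deduped.filter(F.col("order_id").isNotNull() & F.col("order_date").isNotNull())

# Lightweight quality gate: fail the job if too many rows were rejected.
total = deduped.count()
rejected = total - clean.count()
if total > 0 and rejected / total > 0.05:
    raise ValueError(f"Too many invalid rows rejected: {rejected}")

# Write to the curated zone, partitioned by date for efficient pruning.
(clean.withColumn("ds", F.to_date("order_date"))
      .write.mode("overwrite")
      .partitionBy("ds")
      .parquet("s3://example-datalake/curated/orders/"))

spark.stop()
```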

Posted 3 days ago

Apply

10.0 - 16.0 years

20 - 35 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid

Role & Responsibilities:

Skill: Data Engineer - Python, AWS, Glue, Lambda, API
Experience: 10+ years
Location: Gurugram
Notice Period: Immediate

Job Description: We are seeking an experienced Lead Data Engineer with strong expertise in Python, AWS cloud services, ETL pipelines, and system integrations. The ideal candidate will lead the design, development, and optimization of scalable data solutions and ensure seamless API and data integrations across systems. You will collaborate with cross-functional teams to implement robust DataOps and CI/CD pipelines.

Key Responsibilities:
- Responsible for implementation of scalable, secure, and high-performance data pipelines.
- Design and develop ETL processes using AWS services (Lambda, S3, Glue, Step Functions, etc.).
- Own and enhance API design and integrations for internal and external data systems.
- Work closely with data scientists, analysts, and software engineers to understand data needs and deliver solutions.
- Drive DataOps practices for automation, monitoring, logging, testing, and continuous deployment.
- Develop CI/CD pipelines for automated deployment of data solutions.
- Conduct code reviews and mentor junior engineers in best practices for data engineering and cloud development.
- Ensure compliance with data governance, security, and privacy policies.

Required Skills & Experience:
- 10+ years of experience in data engineering, software development, or related fields.
- Strong programming skills in Python for building robust data applications.
- Expert knowledge of AWS services, particularly Lambda, S3, Glue, CloudWatch, and Step Functions.
- Proven experience designing and managing ETL pipelines for large-scale data processing.
- Experience with API design, RESTful services, and API integration workflows.
- Deep understanding of DataOps practices and principles.
- Hands-on experience implementing CI/CD pipelines (e.g., using CodePipeline, Jenkins, GitHub Actions).
- Familiarity with containerization tools like Docker and orchestration tools like ECS/EKS (optional but preferred).
- Strong understanding of data modeling, data warehousing concepts, and performance optimization.
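For illustration only: a minimal AWS Glue job sketch of the kind of ETL this role describes, reading a catalogued table, applying a column mapping, and writing Parquet to S3. The database, table, and output path are hypothetical placeholders; this script only runs inside a Glue job environment.

```python
import sys

from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.transforms import ApplyMapping
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read the raw table registered in the Glue Data Catalog (hypothetical names).
raw = glue_context.create_dynamic_frame.from_catalog(
    database="sales_raw", table_name="orders")

# Rename and cast columns into the curated schema.
mapped = ApplyMapping.apply(frame=raw, mappings=[
    ("order_id", "string", "order_id", "string"),
    ("order_ts", "string", "order_date", "timestamp"),
    ("amt", "double", "amount", "double"),
])

# Write curated Parquet back to S3 for Athena/Redshift Spectrum consumers.
glue_context.write_dynamic_frame.from_options(
    frame=mapped,
    connection_type="s3",
    connection_options={"path": "s3://example-curated/orders/"},
    format="parquet",
)
job.commit()
```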

Posted 3 days ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Gurugram

Work from Office

Job Application Link: https://app.fabrichq.ai/jobs/e1003d62-f76d-4ee1-b787-da40ce6f717f

Job Summary: You will act as a key member of the Data consulting team, working directly with partners and senior stakeholders. You will design and implement big data and analytics solutions. Communication and organisational skills are key for this position.

Key Responsibilities:
- Develop data solutions within a Big Data Azure and/or other cloud environments.
- Work with divergent data sets that meet the requirements of the Data Science and Data Analytics teams.
- Build and design data architectures using Azure Data Factory, Databricks, Data Lake, and Synapse.
- Liaise with the CTO, Product Owners, and other Operations teams to deliver engineering roadmaps.
- Perform data mapping activities to describe source data, target data, and the high-level or detailed transformations.
- Assist the Data Analyst team in developing KPIs and reporting in tools such as Power BI and Tableau.
- Data integration, transformation, and modelling.
- Maintain all relevant documentation and knowledge bases.
- Research and suggest new database products, services, and protocols.

Skills & Requirements (Must Have):
- Technical expertise with emerging Big Data technologies such as Python, Spark, Git, and SQL.
- Experience with cloud, container, and microservice infrastructures.
- Experience working with divergent data sets that meet the requirements of the Data Science and Data Analytics teams.
- Hands-on experience with data modelling, query techniques, and complexity analysis.

Posted 4 days ago

Apply

2.0 - 4.0 years

4 - 9 Lacs

Gurugram

Work from Office

Job Application Link: https://app.fabrichq.ai/jobs/b41f52d2-09e3-4f5e-9bf2-c587f6a0551f

Job Summary: Junior Data Engineer role at Aays Analytics focused on designing and implementing data and analytics solutions on Google Cloud Platform (GCP). The position involves working with clients to reinvent their corporate finance functions through advanced analytics. Key responsibilities include architecture design, data modeling, and mentoring teams on GCP-based solutions.

Key Responsibilities:
- Design and drive end-to-end data and analytics solution architecture from concept to delivery on Google Cloud Platform (GCP).
- Design, develop, and support conceptual, logical, and physical data models for advanced analytics and ML-driven solutions.
- Ensure integration of industry-accepted data architecture principles, standards, guidelines, and concepts.
- Drive the design, sizing, provisioning, and setup of GCP environments and related services.
- Provide mentoring and guidance on GCP-based data architecture to engineering, analytics, and business teams.
- Review solution requirements and architecture for appropriate technology selection and integration.
- Advise on emerging GCP trends and services, and recommend adoption strategies.
- Participate in pre-sales engagements and PoCs, and contribute to thought leadership content.
- Collaborate with founders and the leadership team on cloud and data strategy.

Skills & Requirements (Must Have):
- Azure Cloud knowledge
- Data modeling techniques (Relational, Star, Snowflake, or Data Vault)
- Data engineering and ETL pipelines
- SQL and Python programming

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION

Are you interested in building high-performance, globally scalable financial systems that support Amazon's current and future growth? Are you seeking an environment where you can drive innovation? Does the prospect of working with top engineering talent get you charged up? If so, Amazon Finance Technology (FinTech) is for you. We have a team culture that encourages innovation, and we expect developers to take a high level of ownership for the product vision and technical architecture, build a scalable, service-oriented platform, and continuously innovate on behalf of our customers. FinTech systems process large-scale data sets, eliminating several thousand hours of manual work for global Accounting and Finance teams. Our systems leverage the latest technologies from the AWS stack, providing engineers an amazing opportunity to learn and grow.

We are looking for a highly motivated and passionate Data Engineer who is responsible for designing, developing, testing, and deploying Financial Close Systems processes. In this role you will collaborate with business users, work backwards from customers, identify problems, propose innovative solutions, relentlessly raise standards, and have a positive impact on optimizing close process performance. You will use the best of available tools, including S3, Redshift, Glue, Athena, DynamoDB, Spark, QuickSight, and Lake Formation, to develop optimized data models, ETL/ELT processes, data transformations, and data warehouses that ensure high-quality, well-structured data. You will enforce rigorous data governance, security, and compliance standards for our data, including data validation, cleansing, and lineage tracking. You will be responsible for the full software development life cycle to build scalable applications and deploy them in the AWS Cloud.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 4 days ago

Apply

1.0 - 5.0 years

0 - 0 Lacs

Kolkata, West Bengal

On-site

Role Overview: At WRD, we are redefining the world of IT consulting by merging human expertise with cutting-edge AI technology. Join us to work at the forefront of AI-powered innovation, where your skills drive real impact, creativity meets advanced technology, and each project sets a new standard in IT excellence.

Key Responsibilities:
- Develop and maintain data pipelines to support business and analytical needs.
- Work with structured and unstructured data to ensure high data quality and availability.
- Build scalable and efficient data workflows using SQL, Python, and Java/Scala.
- Manage and process large datasets on AWS EMR and other cloud platforms.
- Develop and maintain batch and stream data processing jobs using tools like Spark (preferred).
- Deploy, monitor, and troubleshoot workflows on orchestration platforms like Airflow (preferred).
- Ensure compliance with data governance and security standards.

Qualifications Required:
- Strong proficiency in SQL for data extraction, transformation, and reporting.
- Strong understanding of ETL pipelines.
- Very good programming skills in Java/Scala and Python.
- Strong problem-solving and analytical skills.
- Experience with Apache Spark for large-scale data processing.

Additional Details: At WRD, you will have the opportunity to work on cutting-edge data engineering challenges in a collaborative and inclusive work environment. We offer professional growth and upskilling opportunities, along with a competitive salary and benefits package.
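For illustration only: a minimal Airflow DAG sketch of the kind of orchestrated batch workflow this posting describes, with one extract task feeding one transform task. Task logic, the DAG name, and the schedule are hypothetical placeholders (Airflow 2.x assumed).

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: in practice this would pull from an API or source database.
    print("extracting raw data to the landing area")

def transform(**context):
    # Placeholder: in practice this might submit a Spark job on EMR.
    print("transforming landed data into the curated layer")

with DAG(
    dag_id="daily_orders_pipeline",      # hypothetical DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task       # run transform only after extract succeeds
```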

Posted 4 days ago

Apply

5.0 - 10.0 years

1 - 1 Lacs

Bengaluru

Remote

Greetings!

Role: Senior Data Engineer with GCP, ETL
Location: 100% remote
Experience Range: 4+ years
Notice Period: Immediate joiners only
Duration: 4-month contract
Time: 11 am to 8 pm IST

Must-have skills:
1. GCP - GCS, Pub/Sub, Dataflow or Dataproc, BigQuery, Airflow/Composer, Python (preferred)/Java
2. ETL on GCP Cloud - build pipelines (Python/Java) plus scripting, best practices, challenges
3. Knowledge of batch and streaming data ingestion; build end-to-end data pipelines on GCP
4. Knowledge of databases (SQL, NoSQL), on-premise and on-cloud, SQL vs NoSQL, types of NoSQL DB (at least 2 databases)
5. Data warehouse concepts - beginner to intermediate level

If you are interested, please share your resume to prachi@iitjobs.com
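For illustration only: a minimal sketch of a GCS-to-BigQuery batch load, the kind of GCP ETL step this listing describes, using the google-cloud-bigquery client. The project, bucket, and table IDs are hypothetical placeholders.

```python
from google.cloud import bigquery  # assumes: pip install google-cloud-bigquery

client = bigquery.Client(project="example-analytics-project")  # placeholder project

# Source file staged in GCS and the destination table (both hypothetical).
source_uri = "gs://example-landing-bucket/orders/2025-09-01.csv"
table_id = "example-analytics-project.sales.orders"

job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,             # skip the header row
    autodetect=True,                 # infer the schema for this sketch
    write_disposition="WRITE_APPEND",
)

load_job = client.load_table_from_uri(source_uri, table_id, job_config=job_config)
load_job.result()  # block until the load job finishes (raises on failure)

table = client.get_table(table_id)
print(f"Loaded into {table_id}; table now has {table.num_rows} rows")
```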

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

You will be responsible for testing ETL pipelines with a focus on source-to-target validation.

Key skills:
- Proficiency in SSMS (SQL Server Management Studio)
- Familiarity with Azure Synapse, including understanding of medallion architecture and PySpark notebooks
- Strong understanding of data warehousing
- Excellent communication skills
- Extensive experience working in Agile

Qualifications required for this role:
- Minimum 5 years of experience as an ETL Test Engineer
- Proficiency in SSMS, Azure Synapse, and testing ETL pipelines
- Strong understanding of data warehousing concepts
- Excellent communication skills
- Experience working in Agile environments

The company offers benefits such as health insurance, internet reimbursement, life insurance, and Provident Fund. This is a full-time position based in person at the work location.

Posted 4 days ago

Apply

6.0 - 11.0 years

22 - 37 Lacs

Gurugram, Chennai, Bengaluru

Work from Office

Why Choose Decision Point: At Decision Point, we empower data-driven transformation by delivering innovative, scalable, and future-ready analytics solutions. With a proven track record in the CPG, Retail, and Manufacturing industries, we combine deep domain expertise with cutting-edge technologies to enable smarter decisions at every level of the enterprise. By joining our team, you'll be part of a collaborative culture that values creativity, learning, and continuous improvement. We offer opportunities to work on high-impact projects with global clients and leverage the latest cloud and data engineering tools to solve real-world business challenges.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines using tools such as PySpark, SQL, Python, and dbt.
- Lead the data ingestion and transformation processes from multiple sources into cloud data platforms (Azure, AWS, Snowflake).
- Architect and implement data models, ensuring data consistency, integrity, and performance optimization.
- Oversee data architecture initiatives and contribute to best practices in data engineering, data quality, and governance.
- Implement and manage cloud services (Azure Data Factory, Azure Synapse, AWS Glue, S3, Lambda, etc.).
- Collaborate with cross-functional teams including data scientists, BI developers, and product owners.
- Provide technical leadership and mentorship to junior engineers and analysts.
- Monitor and troubleshoot data pipelines to ensure reliability and performance.

Key Skills and Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 7+ years of experience in Data Engineering or related roles.
- Proficiency in Python, SQL, and PySpark.
- Strong experience with cloud platforms: Azure and/or AWS.
- Hands-on experience with ETL development, data pipeline orchestration, and automation.
- Solid understanding of data modeling (dimensional, star/snowflake schema).
- Experience with Snowflake, dbt, and modern data stack tools.
- Familiarity with CI/CD pipelines, version control (Git), and agile methodologies.
- Excellent communication, problem-solving, and leadership skills.

Posted 4 days ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

Mumbai

Hybrid

Walk-in Drive | Senior Data Engineer | 3–10 Years' Experience | 16th–18th Sep 2025 | 10 AM–5 PM

Are you ready to take the next big leap in your data engineering career? We are hiring Senior Data Engineers with 3–10 years of experience for a dynamic team that's shaping the future of data-driven solutions.

Walk-in Dates: 16th, 17th, and 18th September 2025
Time: 10:00 AM – 5:00 PM
Location: One International Center, Tower 2, 27th Floor, Senapati Bapat Marg, Near Prabhadevi Railway Station, Mumbai - 400013
Role: Senior Data Engineer

Must-Have Skills:
- Python (advanced): data processing and automation
- ETL pipelines: design, build, and maintain using Talend Studio
- SQL and NoSQL: complex queries and data handling
- AWS Glue and cloud migration: cloud-first data integration
- Data modeling: logical and physical design expertise

Strongly Preferred:
- Data deduplication and transformation: source-to-target mapping
- Real-time and batch pipelines: proven hands-on experience
- CI/CD on AWS: Git to AWS DevOps pipeline workflows
- Agile/Scrum: efficient delivery in fast-paced environments

Good to Know:
- APIs: data access and management via APIs
- Generative AI: Python integration for AI/ML use cases
- Soft skills: team player, adaptable, global collaboration experience

Walk in with your updated resume and a valid ID. Meet our team and explore how you can become part of our innovative journey. Interested candidates can share resumes at sainath.sharma.ext@safrangroup.com.
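For illustration only: a minimal pandas sketch of the data deduplication and source-to-target mapping this listing highlights. Column names and the mapping are hypothetical placeholders.

```python
import pandas as pd

# Hypothetical source extract with duplicate records and source-system column names.
source = pd.DataFrame({
    "CUST_ID":   ["C1", "C1", "C2"],
    "CUST_NM":   ["Asha", "Asha", "Ravi"],
    "ORDER_AMT": ["100.5", "100.5", "250.0"],
})

# Source-to-target column mapping (placeholder names).
column_map = {"CUST_ID": "customer_id", "CUST_NM": "customer_name", "ORDER_AMT": "order_amount"}

target = (
    source.drop_duplicates(subset=["CUST_ID", "ORDER_AMT"], keep="first")  # dedupe on business keys
          .rename(columns=column_map)                                      # map to target schema
          .astype({"order_amount": "float64"})                             # enforce target types
)

print(target)
```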

Posted 4 days ago

Apply

7.0 - 9.0 years

0 Lacs

India

On-site

DESCRIPTION

We are seeking a highly skilled Data Engineer to join our FinTech ADA team, responsible for building and optimizing scalable data pipelines and platforms that power analytics, automation, and decision-making across Finance and Accounting domains. The ideal candidate will have strong expertise in AWS cloud technologies including Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM, along with hands-on experience designing secure, efficient, and resilient data architectures. You will work with large-scale structured and unstructured datasets, leveraging both relational and non-relational data stores (object storage, key-value/document databases, graph, and column-family stores) to deliver reliable, high-performance data solutions. This role requires strong problem-solving skills, attention to detail, and the ability to collaborate with cross-functional teams to translate business needs into technical data solutions.

Key job responsibilities: FinTech is seeking a Data Engineer to be part of the Accounting and Data Analytics team. Our team builds and maintains the data platform for sourcing, merging, and transforming financial datasets to extract business insights, improve controllership, and support financial month-end close periods. As a contributor to a crucial project, you will focus on building scalable data pipelines, optimization of existing pipelines, and operational excellence.

Qualifications:
- 7+ years of experience as a Data Engineer or in a similar role
- Experience with data modeling, data warehousing, and building ETL pipelines
- Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field
- Extensive experience working with AWS, with a strong understanding of Redshift, EMR, Athena, Aurora, DynamoDB, Kinesis, Lambda, S3, EC2, etc.
- Experience with coding languages like Python/Java/Scala
- Experience in maintaining data warehouse systems and working on large-scale data transformation using EMR, Hadoop, Hive, or other Big Data technologies
- Experience mentoring and managing other Data Engineers, ensuring data engineering best practices are being followed
- Experience with hardware provisioning, forecasting hardware usage, and managing to a budget
- Exposure to large databases, BI applications, data quality, and performance tuning

BASIC QUALIFICATIONS
- 5+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with SQL
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience mentoring team members on best practices

PREFERRED QUALIFICATIONS
- Experience with big data technologies such as: Hadoop, Hive, Spark, EMR
- Experience operating large data warehouses

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 4 days ago

Apply

7.0 - 8.0 years

3 - 6 Lacs

Pune

Work from Office

Job Purpose: This position is open with Bajaj Finance Ltd.

Duties and Responsibilities: Looking for a Senior SQL Developer with 7–8 years of deep experience in advanced database architecture, SQL and PL/SQL programming, complex query optimization, and hands-on Azure cloud data solutions. This position will involve leadership in Azure-based data pipelines, integration services, and modern data warehousing.

Required Skills:
- 7–8 years of progressively responsible experience in SQL and PL/SQL development and database administration.
- Advanced expertise with PostgreSQL and MS SQL Server environments.
- Extensive experience in query optimization, indexing, partitioning, and troubleshooting performance bottlenecks.
- Strong proficiency in Azure Data Factory with demonstrable project experience.
- Hands-on experience with other Azure services: Azure SQL, Synapse Analytics, Data Lake, and related cloud data architectures.
- Experience in data warehousing, modeling, and enterprise ETL pipelines.

Posted 4 days ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION

Have you ever wondered how Amazon shipped your order so fast? Wondered where it came from or how much it cost us? To help describe some of our challenges, we created a short video about Supply Chain Optimization at Amazon: http://bit.ly/amazon-scot

We are seeking a Data Engineer to join our team. Amazon has a culture of data-driven decision-making and demands business intelligence that is timely, accurate, and actionable. Your work will have an immediate influence on day-to-day decision making at Amazon.com. As an Amazon Data Engineer you will be working in one of the world's largest and most complex data warehouse environments. We maintain one of the largest data marts in Amazon, as well as working on Business Intelligence reporting and dashboarding solutions that are used by thousands of users worldwide. Our team is responsible for timely delivery of mission-critical analytical reports and metrics that are viewed at the highest levels in the organization.

You should have deep expertise in the design, creation, management, and business use of extremely large datasets. You should have excellent business and communication skills to be able to work with business owners to develop and define key business questions, and to build data sets that answer those questions. You should be expert at designing, implementing, and operating stable, scalable, low-cost solutions to flow data from production systems into the data warehouse and into end-user-facing applications. You should be able to work with business customers in a fast-paced environment, understanding the business requirements and implementing reporting solutions. Above all, you should be passionate about working with huge data sets and be someone who loves to bring datasets together to answer business questions and drive change.

Key job responsibilities: This role requires an Engineer with 4+ years of experience in building data solutions, combined with both consulting and hands-on expertise. The position involves helping to build new and maintain existing data warehouse implementations, developing tools to facilitate data integration, identifying and architecting appropriate storage technologies, executing projects to deliver high-quality data pipelines on time, defining continuous improvement processes, driving technology direction, and effectively leading the data engineering team. You will work with multiple internal teams who need support in managing backend data solutions. Using your deep technical expertise, strong relationship-building skills, and documentation abilities, you will create technical content, provide consultation to customers, and gather feedback to drive the AWS analytic support offering. As the voice of the customer, you will work closely with data product managers and engineering teams to help design and deliver new features and product improvements that address critical customer challenges.

A day in the life: A typical day on our team involves collaborating with other engineers to deploy new data solutions through our large automated systems while providing operational support for your newly deployed software. You will seek out innovative approaches to automate fixes for operational issues, leverage AWS services to solve design problems, and engage with other internal teams to integrate your applications with theirs. You'll be part of a world-class team in an inclusive environment that maintains the entrepreneurial feel of a startup. This is an opportunity to operate and engineer systems on a massive scale and to gain top-notch experience in database storage technologies.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing and building ETL pipelines
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience programming with at least one modern language such as C++, C#, Java, Python, Golang, PowerShell, Ruby
- Knowledge of batch and streaming data architectures like Kafka, Kinesis, Flink, Storm, Beam
- Knowledge of distributed systems as they pertain to data storage and computing

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
