
458 ETL Pipelines Jobs - Page 6

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 9.0 years

0 Lacs

hyderabad, telangana

On-site

As a Data Engineer on our team, you will assess complex new data sources and quickly turn them into business insights, and you will support the implementation and integration of these new data sources into our Azure data platform. You will review and analyze structured, semi-structured, and unstructured data sources for quality, completeness, and business value. Your role involves designing, architecting, implementing, and testing rapid prototypes that demonstrate the value of the data and presenting them to diverse audiences. You will also participate in early-stage design and feature definition activities. Your responsibilities include implementing robust, reusable, and scalable data pipelines using the Microsoft Databricks stack. Collaboration is key: you will work with members across multiple engineering teams to support the integration of proven prototypes into core intelligence products, and strong communication skills are required to convey complex data insights to non-technical stakeholders.

In terms of skills, you should have advanced working knowledge of relational and non-relational databases, API data providers, and building and optimizing big data pipelines, architectures, and datasets. Strong analytic skills for working with structured and unstructured datasets are necessary, and hands-on experience developing ETL pipelines in Azure Databricks using Spark is essential. Proficiency in data analysis, manipulation, and statistical modeling using tools such as Spark, Python, Scala, SQL, or similar languages is required. Experience with Azure Data Lake Storage Gen2, Azure Data Factory, Databricks, Event Hub, and Azure Synapse is preferred. Familiarity with technologies such as Event Hub, Docker, Azure Kubernetes Service, Azure DWH, Azure API, Azure Functions, Power BI, and Azure Cognitive Services is a plus, as is knowledge of Azure DevOps for deploying data pipelines through CI/CD.

For qualifications and experience, a minimum of 5-7 years of practical experience as a Data Engineer is required. A Bachelor's degree in computer science, software engineering, information technology, or a related field is preferred. In-production experience with the Azure cloud stack is necessary for this role.

Posted 3 weeks ago

Apply

6.0 - 10.0 years

0 Lacs

karnataka

On-site

You should have 6-10 years of industrial experience, with at least 3 years building and deploying LLM-based systems. A Bachelor's degree (or above) in a relevant field such as Computer Science or Statistics is strongly preferred. You must also have 3+ years of hands-on experience in AI/ML engineering, including real-world deployment of models and pipelines, and proficiency in Python is essential for this role. The ideal candidate will have proven experience building and deploying LLM-based systems, especially those using retrieval-augmented generation (RAG), along with a solid understanding of vector databases, embeddings, and semantic search architecture. Strong data engineering skills, including ETL pipelines, data cleaning, transformation, and large-scale processing, are also necessary. Experience building REST APIs, containerizing services with Docker, and deploying to cloud infrastructure (preferably Azure or AWS) is highly desirable, and a strong understanding of cloud platforms and modern data architectures (e.g., AWS, GCP, Azure) will be beneficial for this position.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

haryana

On-site

As a Factory Digital Twin Developer and Factory Data Scientist, you will play a key role in the development and implementation of Factory Digital Twins in support of the Digital Governance and Excellence Team. Your primary responsibility will be extracting value from various data sources through data analytics methods, models, algorithms, and visualizations, with the ultimate goal of providing business-relevant insights that enhance management reporting, transparency, and decision-making for sustainable growth. Your expertise will be pivotal in driving strategic initiatives that influence the planning, organization, and implementation of the product portfolio on a global scale.

Your day-to-day tasks will include developing and maintaining algorithms, modules, and libraries within the Factory Digital Twin Core, based on requirements identified from factories and the manufacturing network. You will focus on understanding, modeling, and deploying behavioral logic, constraints, and interdependencies within the factory shop-floor domain. Analyzing factory data related to Products, Processes, and Resources (PPR) and creating ETL pipelines for standardized input to Factory Digital Twin simulation will also be a crucial part of your role. Furthermore, you will conduct simulation analytics, derive recommendations, optimize factories, and develop AI algorithms for analytics, explanation, and improvement scenarios.

To excel in this role, you need a university degree in Engineering, Automation, Computer Science, Mathematics, Physics, or a similar field. A deep understanding of and experience in factory domains, production processes, and manufacturing technologies, particularly factory planning, is essential. Proficiency in modeling, Discrete Event Simulation (DES), Tecnomatix Plant Simulation, and 3D modeling in CAD environments such as NX and AutoCAD is advantageous. Your skill set should also include experience developing ETL pipelines, creating dashboards for data analytics, programming in languages such as Python and C++, and working with databases, ontologies, and query languages such as RDF, SPARQL, SHACL, and SQL. Expertise in AI algorithm development, from analytics to optimization, is crucial. Fluency in English and a willingness to travel globally are preferred qualifications for this role.

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

karnataka

On-site

As a member of our team, you will collaborate closely with product managers, engineers, and business stakeholders to establish key performance indicators (KPIs) and success metrics for Creator Success. Your responsibilities include developing comprehensive dashboards and self-service analytics tools using platforms such as QuickSight, Tableau, or similar BI tools, and conducting in-depth analysis of customer behavior, content performance, and livestream engagement patterns to derive valuable insights. You will design, build, and maintain robust ETL/ELT pipelines to manage large volumes of streaming and batch data originating from the Creator Success platform. Your expertise will be crucial in developing and optimizing data warehouses, data lakes, and real-time analytics systems leveraging AWS services such as Redshift, S3, Kinesis, EMR, and Glue. You will also implement data quality frameworks and monitoring systems to uphold data accuracy and reliability, and establish automated data validation and alerting mechanisms for critical business metrics. Your role involves extracting actionable insights from complex datasets to steer the product roadmap and business strategy.

The ideal candidate holds a Bachelor's degree in Computer Science, Engineering, Mathematics, Statistics, or a related quantitative field, with a minimum of 3 years of experience in business intelligence/analytics roles. Proficiency in SQL, Python, and/or Scala is required, along with a strong background in AWS cloud services such as Redshift, S3, EMR, Glue, Lambda, and Kinesis. You should also have expertise in constructing and optimizing ETL pipelines, data warehousing solutions, big data technologies such as Spark and Hadoop, and distributed computing frameworks. Familiarity with business intelligence tools such as QuickSight, Tableau, and Looker and with data visualization best practices is crucial, as is a collaborative mindset when working with cross-functional teams (product, engineering, and business units) and a customer-centric focus on delivering high-quality, actionable insights.

Key Skills:
- Proficiency in SQL and Python
- Expertise in building and optimizing ETL pipelines and data warehousing solutions
- Familiarity with business intelligence tools (QuickSight, Tableau, Looker) and data visualization best practices
- Experience collaborating with cross-functional teams, including product, engineering, and business units
- Proficiency in AWS cloud services (Redshift, S3, EMR)

If you are excited about this opportunity, kindly share your updated resume with us at aathilakshmi@buconsultants.co.

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

karnataka

On-site

Founded in 2015, Netradyne is a technology company that specializes in Artificial Intelligence, Deep Learning, and Edge Computing to provide innovative solutions to the transportation industry. With technology already deployed in thousands of vehicles ranging from passenger cars to semi-trailers, Netradyne aims to enhance safety on interstates, suburban roads, rural highways, and even off-road terrain.

Netradyne is currently seeking skilled engineers to join the Analytics team, which consists of talented graduates with strong educational backgrounds. The team focuses on developing cutting-edge AI solutions that identify unsafe driving situations in real time, ultimately preventing accidents and reducing fatalities and injuries. As part of the Analytics team, you will work alongside machine learning engineers and data scientists to create and deploy scalable solutions using GenAI, traditional ML models, data science, and ETL pipelines. Your responsibilities will include designing, developing, and implementing production-ready solutions, collaborating with cross-functional teams to integrate AI-driven solutions, building automation frameworks, and staying abreast of advancements in generative AI technologies.

Key Responsibilities:
- Design, develop, and deploy production-ready, scalable solutions leveraging GenAI, traditional ML models, data science, and ETL pipelines.
- Collaborate with cross-functional teams to integrate AI-driven solutions into business operations.
- Build and enhance frameworks for automation, data processing, and model deployment.
- Utilize GenAI tools and workflows to enhance the efficiency and effectiveness of AI solutions.
- Stay updated with the latest advancements in generative AI and related technologies.
- Deliver key product features within cloud analytics.

Requirements:
- B.Tech, M.Tech, or PhD in Computer Science, Data Science, Electrical Engineering, Statistics, Maths, Operations Research, or a related domain.
- Proficiency in Python and SQL with a strong foundation in computer science principles, algorithms, data structures, and OOP.
- Experience developing end-to-end solutions on AWS cloud infrastructure.
- Familiarity with internals and schema design for various data stores (RDBMS, vector databases, and NoSQL).
- Experience with GenAI tools, workflows, and large language models (LLMs).
- Proficiency in cloud platforms and deploying models at scale.
- Strong analytical and problem-solving skills with attention to detail.
- Sound knowledge of statistics, probability, and estimation theory.

Desired Skills:
- Familiarity with PyTorch, TensorFlow, and Hugging Face frameworks.
- Experience with data visualization tools such as Tableau, Grafana, and Plotly Dash.
- Exposure to AWS services such as Kinesis, SQS, EKS, ASG, Lambda, etc.
- Expertise in at least one popular Python web framework (e.g., FastAPI, Django, or Flask).
- Exposure to quick prototyping using Streamlit, Gradio, Dash, etc.
- Familiarity with big data processing tools such as Snowflake, Redshift, HDFS, and EMR.

Posted 3 weeks ago

Apply

9.0 - 14.0 years

19 - 22 Lacs

bengaluru

Work from Office

Designing and developing custom SharePoint solutions based on business requirements. Creating and configuring SharePoint sites, lists, libraries, workflows, and web parts. Developing custom forms and templates using InfoPath and SharePoint Designer. Providing technical support for SharePoint users. Developing and implementing SharePoint security and access controls. Monitoring SharePoint performance and troubleshooting issues. Collaborating with other developers and stakeholders to ensure solutions meet business needs. Excellent knowledge of organization structures and business operations, and the ability to design a SharePoint implementation accordingly. Experience with SharePoint Online, and with document management portals as well as intranets. Ensure security by design by building automated testing into the build and release pipelines. Create reusable design patterns and guard rails for citizen developers. Identify, analyse, and develop points of integration with other solutions to enable better use of data. Develop integrations using APIs to improve the ETL pipeline mechanism. Work closely with the business and users to gather requirements, provide status updates, and build relationships.

Skills: SharePoint SME, Exchange Online, O365, Exchange Server, OneDrive, on-premises, Teams, SharePoint solutions, business requirements, SharePoint sites, lists, libraries, workflows, web parts, custom forms, templates, InfoPath, SharePoint Designer, technical support, SharePoint security, access controls, performance monitoring, troubleshooting, developer collaboration, organizational structures, business operations, SharePoint Online, document management portals, intranet, security by design, automated testing, build and release pipelines, reusable design patterns, guard rails, citizen developers, data integration, API development, ETL pipeline, business requirements gathering, status updates, stakeholder relationships.

Mandatory Key Skills: SharePoint, Exchange Online, O365, Exchange Server, OneDrive, on-premises, SharePoint solutions, data integration, API development, ETL pipeline, SharePoint SME.

Posted 3 weeks ago

Apply

8.0 - 10.0 years

1 - 1 Lacs

hyderabad, chennai, bengaluru

Work from Office

8+ years of experience in data architecture and design. Strong hands-on experience with Azure Data Services, Databricks, and ADF. Proven experience in insurance data domains and product-oriented data design. Excellent communication and stakeholder management skills. Experience with data governance and metadata management tools.

Posted 3 weeks ago

Apply

6.0 - 8.0 years

19 - 20 Lacs

hyderabad

Work from Office

StreamSets environment management: setup, security, user management, etc. Deployment automation: manage CI/CD pipelines via Jenkins. Autoscaling: manage StreamSets pipelines. Monitoring and logging: monitor environments and pipelines.

Posted 3 weeks ago

Apply

7.0 - 12.0 years

25 - 30 Lacs

hyderabad, pune

Work from Office

Role Overview: As a Senior ML Engineer / Data Engineer, you will architect, build, and scale end-to-end data and ML solutions that power critical business use cases. You will collaborate with cross-functional teams to deploy production-grade machine learning models, design robust data pipelines, and ensure the integrity and availability of our data assets.

Key Responsibilities:
- ML model deployment and operations: productionize, monitor, and maintain machine learning models in a scalable and secure environment; implement CI/CD pipelines for model versioning, testing, and roll-outs; perform model performance tuning and continual retraining strategies.
- Data pipeline development: design, build, and optimize scalable ETL/ELT pipelines using PySpark, Python, and SQL; integrate and stream data with Apache Kafka for real-time processing scenarios; orchestrate workflows with Airflow (or equivalent orchestration tools).
- Data architecture and governance: collaborate on the design of the enterprise data warehouse and data lake architectures; define data modeling standards and maintain data catalogs; ensure data quality, lineage, and compliance with governance policies.
- Containerization and infrastructure: containerize applications and services using Docker (Kubernetes familiarity is a plus); work closely with DevOps to automate deployments and manage infrastructure as code.
- Cross-functional collaboration: partner with data scientists, BI teams, and product owners to translate analytics requirements into technical solutions; mentor junior engineers and conduct code reviews to uphold engineering best practices.

Required Qualifications:
- Experience: 8+ years in Data Engineering and/or Machine Learning Engineering roles
- Languages and tools: Python (expert-level proficiency), SQL (advanced querying, performance tuning), PySpark (building distributed data processing jobs), Apache Kafka (pub/sub, stream processing)
- Containerization: solid hands-on experience with Docker; familiarity with Kubernetes is beneficial
- ETL and data warehousing: proven track record designing and operating ETL pipelines; experience with commercial or open-source DW platforms (e.g., Snowflake, Redshift, Hive)
- Orchestration: experience with workflow orchestration tools (Apache Airflow or equivalent)
- Soft skills: strong problem-solving, communication, and mentoring abilities

Preferred Qualifications:
- Hands-on experience with ML frameworks (TensorFlow, PyTorch, Scikit-Learn)
- Exposure to cloud platforms (AWS, GCP, or Azure) and their managed data/ML services
- Knowledge of infrastructure-as-code tools (Terraform, CloudFormation)
- Familiarity with monitoring/logging stacks (Prometheus, ELK)
- Publications or contributions to open-source projects in data engineering or ML

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

india

On-site

DESCRIPTION The Amazon Delivery and Shopping Experience (DEX) Product Team is looking for talented Business Intelligence Engineers (BIEs) who develop solutions that optimize the customer and shopping experience by disproportionately improving discoverability and speed. As part of Amazon's DEX product team, our team members have the opportunity to be at the forefront of DEX product thought leadership, working on some of the most difficult customer experience problems while partnering with research scientists, software developers, and business leaders. With a focus on driving real impact on Amazon's long-term profitability, we build new analyses from the ground up, proposing new concepts and technology to meet business needs. We need strong BIEs who enjoy and excel at diving into data, analyzing root causes, defining long-term solutions, and driving their implementation with stakeholders. We are NOT a reporting team and we are not looking for reporting analysts. We are looking for people with a flair for recognizing trends and patterns and correlating them to the business problem at hand. If you have an uncanny ability to decipher the exact policy, mechanism, or solution to address the challenge, and the ability to influence people using hard data (and some tact), then we are looking for you!

Key job responsibilities As a BIE within the group, you will work closely with program managers, software developers, research scientists, and product managers to analyze massive data sets, identify areas to improve, define metrics to measure and monitor programs, and, most importantly, work with different stakeholders to drive improvements over time. You will also work closely with internal business teams to extract or mine information from our existing systems to create new analyses and expose data from our group to wider teams in intuitive ways. As a BIE embedded in the product team, you will have the opportunity to participate in and shape product development as well as leverage your technical skills. This position provides opportunities to influence high-visibility, high-impact areas in the organization. It requires superior analytical thinkers who can quickly approach large, ambiguous problems and apply technical and engineering expertise to rapidly prototype and deliver solutions, working across a variety of teams, including software tech, operations, finance, category, search, and inventory platform teams. Successful candidates must thrive in fast-paced environments that encourage collaborative and creative problem solving, be able to measure and estimate risks, constructively critique peer research, extract and manipulate data across various data marts, and align research focus with Amazon's strategic needs.

A day in the life
- Perform complex data analysis (correlations, regressions, simulations, optimization, etc.) and identify improvement opportunities on key customer experience metrics (sales, speed perception, Prime benefits, reduced ship costs, etc.).
- Own end-to-end program management: not only develop insights but devise realistic plans and work with key stakeholders (software managers, product managers, retail business managers, etc.) to execute them.
- Be responsible for adding YY million $ to the top line or delivering XX million $ of actual savings to the bottom line; you will own not only the recommendation but the actual improvements.
- Engage with senior leadership on drafting proposals for new concepts, improvement plans, and 3-year strategy and outlook.
- Be a subject matter expert in best-in-class customer experience policies at Amazon, understanding the intricate details of our promise and ordering systems along with their impact on ground-level operations.
- Design, develop, and establish KPIs to monitor analyses and provide strategic insights that drive growth and performance.
- Build infrastructure and implement a maintenance strategy for internal datasets to support swift analysis that answers critical business questions.
- Work with large data sets, automate data extraction, and build monitoring/reporting dashboards and high-value, self-service automated BI solutions.
- Upskill the team's data management and analytical capabilities by mentoring program managers, junior/peer analysts, and senior managers.

About the team As Amazon India's DEX product team, we are at the forefront of customer experience product thought leadership, working on some of the most difficult customer experience problems while partnering with research scientists, software developers, and business leaders. With a focus on driving real impact on Amazon's long-term profitability, we build new analyses from the ground up, proposing new concepts and technology to meet business needs and translating these concepts into robust, scalable products that deliver tangible value to our customers and to the A.in business.

BASIC QUALIFICATIONS
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with statistical analysis packages such as R, SAS, and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

india

On-site

DESCRIPTION The BIE will partner with the Global Finance team (based in the US and Europe) to support various automation needs for the team. This is an exciting opportunity to join a fast-paced business at Amazon. The successful candidate should have a demonstrated ability to effectively deliver a tech solution for a business problem, will be comfortable working in cross-functional teams, and will demonstrate strong Anaplan model-building skills. The ideal candidate must have superior attention to detail and the ability to successfully manage multiple competing priorities simultaneously. The position represents an exciting opportunity to be part of an extremely dynamic and fast-paced environment supporting a global organization, and offers significant opportunities for rapid growth.

Key job responsibilities
- Experience with Anaplan model building, leveraging the tool for capacity planning, reporting packages, and finance operations volume forecast models
- Own the design, operations, and improvements of the organization's data warehouse infrastructure
- Maintain, improve, and manage all ETL pipelines and clusters
- Explore and learn the latest AWS technologies to provide new capabilities and increase efficiency
- Define metrics and KPIs to measure the success of strategic initiatives and report on their progress
- Develop relationships and processes with finance, sales, business operations, solution delivery, partner, BD, and other cross-functional stakeholders
- Help continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Collaborate with data scientists, BIEs, and BAs to deliver high-quality data architecture and pipelines

A day in the life
- Scripting language such as Python preferred
- Establish scalable, efficient, automated processes for large-scale data analyses
- Support the development of performance dashboards that encompass key metrics to be reviewed with senior leadership and sales management
- Work with business owners and partners to build data sets that answer their specific business questions
- Support financial analysts, sales operations leads, and beyond in analyzing usage data to derive new insights and fuel customer success

BASIC QUALIFICATIONS
- 2+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, QuickSight, or similar tools
- Experience with one or more industry analytics visualization tools (e.g., Excel, Tableau, QuickSight, MicroStrategy, Power BI) and statistical methods (e.g., t-test, chi-squared)
- 2+ years of experience in Anaplan model building

PREFERRED QUALIFICATIONS
- Master's degree or advanced technical degree
- Knowledge of data modeling and data pipeline design
- Experience with statistical analysis and correlation analysis

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

india

On-site

DESCRIPTION Does the prospect of dealing with massive volumes of data excite you? Do you want to lead scalable data engineering solutions using AWS technologies? Do you want to create the next generation of tools for intuitive data access? Amazon's Finance Tech team needs a Data Engineer to shape the future of the Amazon finance data platform by working with stakeholders in North America, Asia, and Europe. The team is committed to building the next-generation big data platform, which will be one of the world's largest finance data warehouses by volume, to support Amazon's rapidly growing and dynamic businesses and to deliver the BI applications that will have an immediate influence on day-to-day decision making. Members of the team will be challenged to innovate using the latest big data techniques. We are looking for a passionate data engineer to develop a robust, scalable data model and optimize the consumption of the data sources required to ensure accurate and timely reporting for Amazon businesses. You will share in the ownership of the technical vision and direction for advanced reporting and insight products. You will work with top-notch technical professionals developing complex systems at scale and with a focus on sustained operational excellence. We are looking for people who are motivated by thinking big, moving fast, and exploring business insights. If you love to implement solutions to hard problems while working hard, having fun, and making history, this may be the opportunity for you.

Key job responsibilities
- Design, implement, and support a platform providing secured access to large datasets.
- Interface with tax, finance, and accounting customers, gathering requirements and delivering complete BI solutions.
- Collaborate with finance analysts to recognize and help adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.
- Model data and metadata to support ad-hoc and pre-built reporting.
- Own the design, development, and maintenance of ongoing metrics, reports, analyses, dashboards, etc., to drive key business decisions.
- Tune application and query performance using profiling tools and SQL.
- Analyze and solve problems at their root, stepping back to understand the broader context.
- Learn and understand a broad range of Amazon's data resources and know when, how, and which to use and which not to use.
- Keep up to date with advances in big data technologies and run pilots to design a data architecture that scales with increased data volume using AWS.
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for datasets.
- Triage many possible courses of action in a high-ambiguity environment, making use of both quantitative analysis and business judgment.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

india

On-site

DESCRIPTION Amazon, Earth's most customer-centric company, offers low prices, vast selection, and convenience through its world-class e-commerce platform. The Competitive Pricing team ensures customer trust through optimal pricing across all Amazon marketplaces. Within this organization, our Data Engineering team, part of the Pricing Big Data group, builds and maintains the global pricing data platform. We enable price competitiveness by processing data from multiple sources, creating actionable pricing dashboards, providing deep-dive analytics capabilities, and driving operational efficiency. As a Data Engineer, you will collaborate with technical and business teams to develop real-time data processing solutions. You will lead the architecture, design, and development of the pricing data platform using AWS technologies and modern software development principles. Your responsibilities will include architecting and implementing automated Business Intelligence solutions, designing scalable big data and analytical capabilities, and creating actionable metrics and reports for engineers, analysts, and stakeholders. In this role, you will partner with business leaders to drive strategy and prioritize projects. You'll develop and review business cases and lead technical implementation from design to release. Additionally, you will provide technical leadership and mentoring to the data engineering team. This position offers an opportunity to make a significant impact on Amazon's pricing strategies and contribute to the company's continued growth and evolution in the e-commerce space.

Key job responsibilities
- Design, implement, and maintain data infrastructure for enterprise-wide analytics
- Extract, transform, and load data from multiple sources using SQL and AWS big data technologies
- Build comprehensive domain knowledge of Amazon's business operations and metrics
- Write clear, concise documentation and communicate effectively with stakeholders across teams
- Deliver results independently while meeting deadlines
- Collaborate with engineering teams to solve complex data challenges
- Automate reporting processes and develop self-service analytics tools for customers
- Research and implement new AWS technologies to enhance system capabilities

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- Bachelor's degree

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

india

On-site

DESCRIPTION How often have you had an opportunity to be an early member of a team tasked with solving a huge customer need through disruptive and innovative technology? The Amazon Stores FinTech Team is seeking a Data Engineer to design and build a flagship multi-year project that injects automated planning and predictive forecasting, and to help shape an end UI, built on the AWS Data Lake. We are the tech team that builds and supports Worldwide Amazon Stores, one of the fastest-growing, largest, and most complex supply chains in the world. This position requires a high level of technical expertise with data engineering concepts.

Key job responsibilities
- Expertise and experience in building and scaling finance planning applications
- Deliver on data architecture projects and the implementation of next-generation financial solutions
- Manage AWS resources including EC2, RDS, Redshift, Kinesis, EMR, Lambda, etc.
- Build and deliver high-quality data architecture and pipelines to support customer reporting needs
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Continually improve ongoing data extraction, cleansing, validation, transformation, and ingestion processes, automating or simplifying self-service support for customers
- Collaborate with business users, development teams, and operations engineering teams to tackle business requirements and deliver against high operational standards of system availability and reliability
- Dive deep to resolve problems at their root, looking for failure patterns and suggesting fixes
- Prepare runbooks, methods of procedure, tutorials, and training videos on best practices for the team
- Build monitoring dashboards and create critical alarms for the system
- Build and enhance software to extend system, application, or tool functionality to improve business processes and meet end-user needs while working within the overall system architecture
- Build automated unit test and regression test frameworks that can be leveraged across multiple data systems
- Build data reconciliation frameworks and tools that can be leveraged across multiple data systems
- Diagnose and resolve operational issues, perform detailed root cause analysis, and respond to suggestions for enhancements
- Identify process improvement opportunities to drive innovation
- Rotational on-call availability for critical systems support

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- 3+ years of SQL experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- 5+ years of experience working in the Financial Planning & Reporting domain

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)
- 3+ years of experience designing and building data integration with planning technologies such as IBM Cognos Planning Analytics/TM1

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information.
If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 3 weeks ago

Apply

4.0 - 9.0 years

7 - 17 Lacs

hyderabad, bengaluru

Work from Office

About this role: Wells Fargo is seeking a Senior Software Engineer.

In this role, you will:
- Lead moderately complex initiatives and deliverables within technical domain environments
- Contribute to large-scale planning of strategies
- Design, code, test, debug, and document for projects and programs associated with the technology domain, including upgrades and deployments
- Review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures
- Resolve moderately complex issues and lead a team to meet existing client needs or potential new clients' needs while leveraging a solid understanding of the function, policies, procedures, or compliance requirements
- Collaborate and consult with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals
- Lead projects and act as an escalation point; provide guidance and direction to less experienced staff

Required Qualifications: 4+ years of Software Engineering experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications:
- Proficiency in Apache Spark, Scala, and SQL
- Hands-on experience with ETL tools/processes and data integration frameworks
- Strong knowledge of SQL and relational databases (e.g., SQL Server)
- Familiarity with Big Data technologies (e.g., Spark, Hadoop) is a plus
- Experience with cloud platforms (e.g., GCP, AWS, Azure) is advantageous
- Strong problem-solving skills and attention to detail

Job Expectations:
- Develop and maintain scalable Spark ETL pipelines using Scala and other relevant technologies
- Write efficient and optimized SQL queries for data extraction, transformation, and loading
- Collaborate with cross-functional teams to understand data requirements and deliver solutions
- Ensure data quality, integrity, and security across all processes
- Optimize the performance of data pipelines and troubleshoot issues as they arise
- Document technical designs, processes, and workflows

Posted 3 weeks ago

Apply

4.0 - 6.0 years

14 - 24 Lacs

pune

Hybrid

Experience designing BI solutions using Power BI (preferred), Tableau, or equivalent, for cloud data platforms, preferably Snowflake. Experience integrating BI tools with ELT/ETL pipelines (DBT, Talend, Azure Data Factory). Strong SQL. Required candidate profile: 5+ years of BI development experience (using Power BI, preferably with Snowflake), with at least 3 years in a senior or lead role.

Posted 3 weeks ago

Apply

3.0 - 8.0 years

5 - 15 Lacs

bengaluru

Work from Office

Role & responsibilities We Are Looking For Data Engg Exp: 3-9 y ears Location: Whitfield Bangalore Mandatory Skills: azure,aws Databricks Expert(Delta Lake,Lake Bridge) Python with PySpark ,Data Modeling ,Data Lake house creation,Designing ETL/ELT pipelines,Azzure Cloud Platforms proficiency, Data Quality,Hive,Spark Preferred candidate profile

Posted 3 weeks ago

Apply

6.0 - 8.0 years

0 Lacs

gurugram

Work from Office

Design and manage Azure-based data pipelines, models, and ETL processes. Ensure governance, security, performance, and real-time analytics while leveraging DevOps, CI/CD, and big data tools for reliable, scalable solutions. Benefits: health insurance, maternity policy, annual bonus, provident fund, gratuity.

Posted 3 weeks ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

bengaluru

Work from Office

Seikor is hiring for Tricon Infotech Pvt. Ltd. ( https://www.triconinfotech.com/ ). We are seeking full-stack Python developers. We are offering INR 1500 if you clear the round 1 interview and are selected for the round 2 interview. Apply, earn during the process, and find your next awesome job at Tricon, powered by Seikor.

Job Title: Python Full-stack Developer
Location: Bengaluru, India
Experience: 4-10 years
Team Size: 500-1,000 employees globally
Function: Software Development

Job Summary: We are looking for a skilled and experienced Python Full Stack Developer with hands-on experience in AWS. The ideal candidate should have a strong foundation in backend development using Python and frameworks like Django or Flask. This role offers an exciting opportunity to work on dynamic, scalable applications in a collaborative and fast-paced environment.

Key Responsibilities:
- Lead and mentor a team of engineers, especially data engineers
- Architect scalable, secure backend systems using Python, FastAPI, and AWS
- Drive data infrastructure decisions with PostgreSQL, Redshift, and advanced data pipelines
- Collaborate cross-functionally to integrate AI-first features and stay ahead of emerging AI trends
- Ensure delivery of high-quality, maintainable code and manage technical debt

Required Skills & Qualifications:
- Strong leadership and communication skills
- Deep understanding of AWS services (EC2, Lambda, S3, IAM, Redshift)
- Advanced proficiency in Python and FastAPI
- Expertise in relational databases (PostgreSQL) and data warehousing (Redshift)
- Proven experience in ETL pipelines, data modeling, and optimization
- Ability to thrive in fast-paced, iterative environments

Nice to Have:
- Experience with AI/ML pipelines or data science platforms
- Familiarity with Airflow or similar orchestration tools
- Exposure to DevOps practices and CI/CD pipelines

Soft Skills: engineer-first mindset, team-oriented culture, growth mindset, strong problem-solving skills.

Educational Qualification: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.

Posted 3 weeks ago

Apply

12.0 - 20.0 years

35 - 40 Lacs

noida, uttar pradesh

Work from Office

The Team: Are you ready to dive into the world of data and uncover insights that shape global commodity markets? We're looking for a passionate BI Developer to join our Business Intelligence team within the Commodity Insights division at S&P Global. At S&P Global, we are on a mission to harness the power of data to unlock insights that propel our business forward. We believe in innovation, collaboration, and the relentless pursuit of excellence. Join our dynamic team and be a part of a culture that celebrates creativity and encourages you to push the boundaries of what's possible.

Key Responsibilities

Unlocking the Power of Data: Collaborate on the end-to-end data journey, helping collect, cleanse, and transform diverse data sources into actionable insights that shape business strategies for functional leaders. Work alongside senior BI professionals to build powerful ETL processes, ensuring data quality, consistency, and accessibility.

Crafting Visual Storytelling: Develop eye-catching, impactful dashboards and reports that tell the story of commodity trends, prices, and global market dynamics. Bring data to life for stakeholders across the company, including executive teams, analysts, and developers, by helping to create visually compelling and interactive reporting tools. Mentor and train users on dashboard usage for efficient utilization of insights.

Becoming a Data Detective: Dive deep into commodities data to uncover trends, patterns, and hidden insights that influence critical decisions in real time. Demonstrate strong analytical skills to swiftly grasp business needs and translate them into actionable insights. Collaborate with stakeholders to define key metrics and KPIs and contribute to data-driven decisions that impact the organization's direction.

Engaging with Strategic Minds: Work with cross-functional teams within business operations to turn complex business challenges into innovative data solutions. Gather, refine, and translate business requirements into insightful reports and dashboards that push our BI team to new heights. Provide ongoing support to cross-functional teams, addressing issues and adapting to changing business processes.

Basic Qualifications:
- 12+ years of professional experience in BI projects, focusing on dashboard development using Power BI or similar tools and deploying them on their respective online platforms for easy access.
- Proficiency in working with various databases such as Redshift, Oracle, and Databricks, using SQL for data manipulation, and implementing ETL processes for BI dashboards.
- Ability to identify meaningful patterns and trends in data to provide valuable insights for business decision-making.
- Knowledge of Generative AI, Microsoft Copilot, and Microsoft Fabric is a plus.
- Skilled in requirement gathering and developing BI solutions.
- Candidates with a strong background/proficiency in Power BI and Power Platform tools such as Power Automate/Apps, and intermediate to advanced proficiency in Python, are preferred.
- Essential understanding of data modeling techniques tailored to problem statements.
- Familiarity with cloud platforms (e.g., Azure, AWS) and data warehousing.
- Exposure to GenAI concepts and tools such as ChatGPT.
- Experience with Agile project implementation methods.
- Excellent written and verbal communication skills.
- Must be able to self-start and succeed in a fast-paced environment.
- Ability to write complex SQL queries or enhance the performance of existing ETL pipelines is a must.
- Familiarity with Azure DevOps is an added advantage.

Posted 3 weeks ago

Apply

1.0 - 3.0 years

10 - 20 Lacs

bengaluru

Work from Office

you should apply if you:
- have 1-3 years of experience in ETL and data engineering
- are able to read and write complex SQL
- have prior experience in at least one programming language
- know your way around data modeling, data warehousing, and lakehouses (we use Redshift and Databricks)
- come with experience working on cloud services, preferably AWS
- are constantly learning and looking for ways to improve yourself and the processes around you
- can be a team player with strong analytical, communication, and troubleshooting skills

how is life at CRED? working at CRED would instantly make you realize one thing: you are working with the best talent around you. not just in the role you occupy, but everywhere you go. talk to someone around you; most likely you will be talking to a singer, standup comic, artist, writer, an athlete, maybe a magician. at CRED people always have talent up their sleeves. with the right company, even conversations can be rejuvenating. at CRED, we guarantee a good company.

hard truths: pushing oneself comes with the role. and we realise pushing oneself is hard work. which is why CRED is in the continuous process of building an environment that helps the team rejuvenate: included but not limited to a stacked, in-house pantry, with lunch and dinner provided for all the team members, paid sick leaves, and comprehensive health insurance.

to make things smoother and to make sure you spend time and energy only on the most important things, CRED strives to make every process transparent: there are no work timings because we do not believe in archaic methods of calculating productivity; your work should speak for you. there are no job designations because you will be expected to hold down roles that cannot be described in one word. since trust is a major virtue in the community we have built, we make it a point to highlight it in the community behind CRED: all our employees get their salaries before their joining date. a show of trust that speaks volumes because of the skin in the game. there are many more such eccentricities that make CRED what it is, but that's for one to discover. if you feel at home reading this, get in touch.

Posted 3 weeks ago

Apply

2.0 - 5.0 years

10 - 20 Lacs

hyderabad

Work from Office

We are seeking Data Analysts / Data Engineers with experience in U.S. pharmaceutical commercial data.

Key Responsibilities:
- Ingest and onboard third-party data sources with appropriate quality and compliance measures and reporting.
- Design and implement QC protocols for data integrity and completeness.
- Track data lineage and maintain proper documentation of data flows and transformations.
- Apply statistical or algorithmic techniques to identify anomalies in data related to sales, claims, or patient-level records.
- Provide detailed reports and insights for stakeholders to support commercial decision-making.

Required Skills & Qualifications:
- 2+ years of experience in data analytics/engineering, with at least one year working with US pharmaceutical data.
- Hands-on experience with third-party commercial healthcare data sources (IQVIA, Symphony, Komodo, Definitive, etc.).
- Experience working with ETL pipelines, data ingestion, and metadata.
- Proficiency in SQL or Python.
- Understanding of outlier detection techniques using statistical and ML-based approaches.

Posted 3 weeks ago

Apply