
458 ETL Pipelines Jobs - Page 5

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

5.0 - 7.0 years

9 - 24 Lacs

Noida

Work from Office

Responsibilities:
* Design, develop & maintain ETL pipelines using Java & design patterns.
* Collaborate with cross-functional teams on API integrations & web services implementation.

Benefits: Health insurance, Provident fund, Shift allowance

Posted 2 weeks ago

Apply

3.0 - 8.0 years

10 - 20 Lacs

Jaipur

Work from Office

Job Title: Data Engineer
Experience: 3-8 Years
Location: Jaipur, India
Employment Type: Full-time

About the Role: We are looking for a skilled and motivated Data Engineer to join our team in Jaipur. The ideal candidate will have strong experience in designing, building, and optimizing data pipelines and architectures for large-scale data processing. You will work closely with data scientists, analysts, and business stakeholders to ensure smooth data flow and accessibility across systems.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT pipelines to support data integration and analytics.
- Work with structured, semi-structured, and unstructured data across multiple platforms.
- Optimize data workflows for performance, scalability, and reliability.
- Collaborate with cross-functional teams (Data Science, BI, Business teams) to deliver data solutions.
- Implement data quality checks, monitoring, and governance to ensure accuracy and security.
- Manage data pipelines on cloud platforms (AWS, Azure, or GCP) and modern data tools.
- Troubleshoot, debug, and resolve issues in existing data workflows.

Required Skills & Qualifications:
- 3-8 years of experience as a Data Engineer or in a similar role.
- Strong expertise in SQL and relational databases (e.g., MySQL, PostgreSQL, SQL Server).
- Hands-on experience with Big Data technologies such as Spark, Hadoop, or Kafka.
- Proficiency in Python, Scala, or Java for data processing.
- Experience with cloud data platforms (Azure Data Factory, Databricks, AWS Glue, GCP BigQuery, etc.).
- Knowledge of data warehousing concepts and dimensional modeling.
- Familiarity with modern data stack tools (dbt, Airflow, Snowflake, Redshift, Synapse).
- Strong problem-solving and debugging skills.

Good to Have:
- Experience with CI/CD pipelines and DevOps for data workflows.
- Exposure to machine learning data pipelines.
- Knowledge of data security and compliance practices.

Education: Bachelor's or Master's degree in Computer Science, Information Technology, Data Engineering, or a related field.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

As a Data Lead at GlobalLogic, you will be part of a high-impact team dedicated to developing innovative search features for a leading search engine. Your role will involve enhancing user experience by delivering smarter and faster search results, such as presenting relevant information directly on the search results page to minimize the need for users to navigate away. Collaboration with cross-functional teams will be essential, with each team contributing to different aspects of search innovation. This position offers both technical challenges and rewards, making it ideal for individuals with expertise in Data Analysis, Python, SQL, Unix/Linux, and ETL pipelines, along with proven leadership skills.

Requirements:
- Bachelor's degree in B.E. / B.Tech / MCA
- 5-7 years of experience
- Strong knowledge in Python, ETL, and SQL
- Experience in Linux/UNIX production environments
- Deep understanding of performance metrics
- Analytical, collaborative, and solution-oriented mindset
- Strong time management, communication, and problem-solving skills

Responsibilities:
- Resolve the team's technical challenges and provide hands-on support
- Define and manage KPIs and S.M.A.R.T. goals
- Delegate tasks and ensure timely project delivery
- Conduct performance reviews and identify areas for improvement
- Foster a motivating and inclusive work environment
- Coach and mentor team members
- Promote continuous learning and skill development
- Facilitate team-building initiatives and ensure smooth collaboration

At GlobalLogic, we offer:
- A culture of caring that prioritizes people and fosters inclusivity
- Commitment to continuous learning and development opportunities
- Interesting and meaningful work on impactful projects
- Balance and flexibility to achieve work-life integration
- A high-trust organization with integrity as a core value

About GlobalLogic: GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to leading companies worldwide. Since 2000, we have been driving the digital revolution by creating innovative digital products and experiences. Our collaboration with clients aims to transform businesses and redefine industries through intelligent solutions.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

The Senior Analyst, Data & Marketing Analytics, plays a crucial role in establishing Bain's marketing analytics ecosystem. This position presents an exciting opportunity for a seasoned analyst to expand their scope and contribute to strategic outcomes effectively. Your primary responsibility is to actively participate in the development of data infrastructure, generation of insights, and facilitation of scalable reporting, all while collaborating closely with marketing, digital, and technology teams. From crafting data pipelines and dashboards to overseeing agile projects and leading important discussions, this role empowers you to influence how analytics drives strategic marketing decisions at Bain. You will excel in a dynamic, fast-paced setting and engage in extensive collaboration with stakeholders throughout the marketing and analytics landscape.

Your duties will include:

Data Analytics & Insight Generation (30%)
- Conduct in-depth analysis of marketing, digital, and campaign data to identify trends and provide actionable insights.
- Support performance evaluation, experimentation, and strategic decision-making across the marketing funnel.
- Translate business inquiries into well-structured analyses and data-driven narratives.

Data Infrastructure & Engineering (30%)
- Develop and manage scalable data pipelines and workflows using SQL, Python, and Databricks.
- Establish and enhance a marketing data lake by integrating APIs and data from various platforms and tools.
- Engage with cloud environments (Azure, AWS) to ensure analytics-ready data at scale.

Project & Delivery Ownership (25%)
- Lead projects or serve as a scrum owner for analytics initiatives by planning sprints, overseeing delivery, and fostering alignment.
- Utilize tools like JIRA to coordinate work in an agile environment and ensure prompt execution.
- Collaborate with cross-functional teams to synchronize priorities and execute roadmap initiatives.

Visualization & Platform Enablement (15%)
- Construct impactful dashboards and data products using Tableau, emphasizing usability, scalability, and performance.
- Facilitate stakeholder self-service through well-structured data architecture and visualization best practices.
- Explore emerging tools and capabilities, such as GenAI for assisted analytics.

Experience
- Minimum of 5 years' experience in data analytics, digital analytics, or data engineering, preferably in a marketing or commercial context.
- Proficient in SQL, Python, and tools like Databricks, Azure, or AWS.
- Demonstrated expertise in establishing and managing data lakes, ETL pipelines, and API integrations.
- Strong command of Tableau; familiarity with Tableau Prep is advantageous.
- Knowledge of Google Analytics (GA4), GTM, and social media analytics platforms.
- Experience in agile team environments, with proficiency in JIRA for sprint planning and delivery.
- Exposure to predictive analytics, modeling, and GenAI applications is beneficial.
- Exceptional communication and storytelling skills, capable of leading important meetings and delivering clear insights to senior stakeholders.
- Excellent organizational and project management capabilities, comfortable managing conflicting priorities.
- Detail-oriented, ownership-driven mindset, and a collaborative, results-oriented approach.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Business Analytics/Product Analytics professional, you will be responsible for developing an in-depth understanding of user journeys on Airtel Digital Channels. Your primary objective will be to generate data-driven insights and recommendations to assist the product business in making meticulous decisions. You will be expected to take end-to-end ownership of key metrics and collaborate with product owners to ensure that the metrics are moving in the desired direction.

Your role will involve developing strong hypotheses, conducting A/B experiments, and confidently identifying areas of opportunity. Working cross-functionally, you will define problem statements, gather data, build analytical models, and provide recommendations based on your findings. You will also be tasked with implementing efficient processes for data reporting, dashboarding, and communication.

To excel in this role, you should possess a B.Tech/BE degree in Computer Science or a related technical field, or have a background in Statistics/Operations Research. With at least 3 years of experience in core business analytics or product analytics, you should have a deep understanding of SQL and Python/R. Proficiency in data visualization tools like Tableau and Superset is essential, along with hands-on experience working with large datasets.

Your problem-solving skills will be put to the test as you engage senior management with data and work on data structures, ETL pipelines, and feature modeling. Familiarity with tools such as Clickstream, Google Analytics, and Moengage will be advantageous. Effective communication skills and the ability to collaborate with multiple teams, including data engineering and product squads, will further enhance your profile.

If you are looking for a challenging opportunity that allows you to leverage your analytical skills and drive impactful decisions through data-driven insights, this role is ideal for you. Join our team and be part of a dynamic environment where your expertise will play a crucial role in shaping the future of Airtel Digital Channels.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

0 Lacs

Karnataka

On-site

As the Head of Data at our Bengaluru (Bellandur) location, you will be responsible for leading the design, development, and optimization of ETL/ELT pipelines to process structured and unstructured data. You will play a key role in maintaining a modern data warehouse/lakehouse architecture, implementing data partitioning, indexing, and performance tuning strategies, and ensuring robust cataloging, data quality rules, validation checks, and monitoring systems are in place. Additionally, you will define appropriate data privacy and security guidelines and take ownership of business metrics, from extracting and exploring data to evaluating methods and results rigorously.

Collaborating with stakeholders, you will define and implement key business metrics, establish real-time monitoring for critical KPIs, and build and maintain BI dashboard automation with consistency and accuracy in reporting. Your role will involve performing A/B testing, generating and validating business hypotheses, designing analytical frameworks for product funnels and biz ops, and empowering business teams with self-service reporting and data crunching.

Furthermore, you will be responsible for building predictive models such as churn prediction, demand forecasting, route planning, and fraud detection. Developing advanced ML solutions like recommendation systems, image processing, and Gen AI applications will be part of your responsibilities. You will deploy machine learning models into production, monitor and update them, and ensure proper documentation.

In collaboration with product managers, engineering, risk, marketing, and finance teams, you will align analytics with business goals. Building and mentoring a high-performing analytics team, defining the analytics vision and strategy, and translating it into actionable roadmaps and team initiatives will be crucial. Presenting actionable insights to senior executives and stakeholders, solving ambiguous problems through a structured framework, and executing high-impact work are key aspects of this role.

To qualify for this position, you should have 10-12 years of experience, with at least 6 years delivering analytical solutions in B2C consumer internet companies. A Bachelor's degree in a data-related field such as Industrial Engineering, Statistics, Economics, Math, Computer Science, or Business is required. Strong expertise in Python, R, TensorFlow, PyTorch, and Scikit-learn, as well as hands-on experience with AWS, GCP, or Azure for ML deployment, is essential. Proven experience in Power BI report development, ETL pipelines, data modeling, and distributed systems is also necessary. Expertise in supply chain processes and demand forecasting, along with skills in time series models and regression analysis, will be beneficial. Strong documentation skills and the ability to explain complex technical concepts to non-technical personnel are also required for this role.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

India

On-site

The Oracle Cloud Infrastructure (OCI) Security and Compliance Platform Engineering organization presents a rare opportunity to contribute to the development of next-generation, AI-driven cybersecurity solutions at cloud scale. This effort centers on ingesting and processing massive volumes of telemetry and security event data across OCI, leveraging advanced techniques including generative AI (GenAI), large language models (LLMs), and machine learning (ML) to build intelligent detection, response, and mitigation systems. The goal is to deliver autonomous, adaptive security capabilities that protect OCI, Oracle, and our global customer base against evolving threat landscapes. We invite you to help build these high-scale, low-latency distributed systems, including massive data pipelines and databases.

- As a hands-on, seasoned engineer, design and drive end-to-end engineering efforts (including design, development, test infrastructure, and operational excellence).
- Resolve complex technical issues and make design decisions to meet the critical requirements of scalable, highly available, secure multi-tenant services in the cloud.
- Mentor and guide junior members of the team on the technological front.
- Work closely with all stakeholders, including other technical leads, the Director, the Engineering Manager, architects, and product and program managers, to deliver product features on time and with high quality.
- Proactively identify and resolve risks and issues that may dent the team's ability to execute.
- Work with various external (application) teams on integration with the product and help guide the integration.
- Understand the various cloud technologies in Oracle to help evolve the cloud provisioning and enablement process on a continuous basis.

Must-have Skills:
- BS/MS degree or equivalent in a related technical field involving coding, or equivalent practical experience, with 5+ years of overall experience.
- Experience in building and designing microservices and/or cloud-native applications.
- Strength either on the database front or in building big data systems (including ETL pipelines).
- A problem solver with a strong can-do attitude and the ability to think on the go, which is critical for success in this role.
- Strong fundamentals in OS, networks, and distributed systems, and in designing fault-tolerant, highly available systems.
- Strong in at least one modern programming language (Java, Kotlin, Python, C#) along with container experience (Docker/Kubernetes or similar). Demonstrated ability to adapt to new technologies and learn quickly.
- Detail-oriented (a critical and considerate eye for detail), task-driven, with excellent communication skills.
- Organized and goal-focused, with the ability to deliver in a fast-paced environment with minimal supervision.
- Strong, creative problem-solving skills and the ability to abstract and share details to create meaningful articulation.

Preferred (Nice-to-have) Skills:
- Experience with architectural patterns for high availability, performance, scale-out architecture, disaster recovery, and security architecture.
- Knowledge of cloud-based architectures and the deployment and operational aspects of cloud setups is a plus.
- Exposure to at least one cloud service provider (AWS/OCI/Azure/GCP, etc.) would be a good advantage.
- Experience implementing container monitoring tools like Prometheus/Grafana, CI/CD pipelines (Jenkins, GitLab, etc.), and using/creating build tools (Gradle, Ant, Maven, or similar).

Career Level - IC3

Posted 2 weeks ago

Apply

10.0 - 15.0 years

1 - 5 Lacs

Bengaluru

Work from Office

About The Role
Project Role: Infra Tech Support Practitioner
Project Role Description: Provide ongoing technical support and maintenance of production and development systems and software products (both remote and onsite) and for configured services running on various platforms (operating within a defined operating model and processes). Provide hardware/software support and implement technology at the operating-system level across all server and network areas, and for particular software solutions/vendors/brands. Work includes L1 and L2 (basic and intermediate) troubleshooting.
Must have skills: Oracle Database Architecture
Good to have skills: NA
Minimum 7.5 year(s) of experience is required.
Educational Qualification: 15 years full time education

Summary: We are seeking a highly experienced Senior Oracle/EDW Migration Lead to drive and manage enterprise-scale migrations of legacy Oracle Data Warehouses (EDW) to modern data platforms, including cloud-native solutions. The ideal candidate will have a deep understanding of Oracle database technologies, data modeling, ETL processes, and enterprise data strategies. You will be responsible for leading end-to-end data migration efforts, collaborating with business, infrastructure, and analytics teams to ensure successful transitions with minimal impact to operations.

Roles and responsibilities:
- Lead migration initiatives from Oracle EDW (11g/12c/19c) to target platforms like Snowflake, Databricks, Azure Synapse, GCP BigQuery, or AWS Redshift.
- Define and execute data migration strategies, roadmaps, and risk mitigation plans.
- Oversee data profiling, cleansing, validation, and reconciliation efforts during migration.
- Assess existing Oracle EDW schemas, PL/SQL logic, and ETL/ELT processes (Informatica, ODI, etc.).
- Create target data architecture including fact/dimension modeling, partitioning, indexing, and performance design.
- Support schema redesign and optimization to fit cloud-native or modern warehouse solutions.
- Collaborate with DBAs, cloud engineers, ETL developers, and business analysts throughout the migration lifecycle.
- Define data cutover plans, rollback strategies, and UAT test scenarios.
- Ensure minimal downtime and high data integrity during go-live events.
- Implement controls for data lineage, data masking, encryption, and GDPR/HIPAA compliance during migration.
- Align with enterprise data governance policies and metadata management.
- Identify opportunities for performance improvement during and post-migration.
- Assist in tuning Oracle queries and stored procedures for performance parity on the target platform.
- Act as the primary point of contact for program leadership, business sponsors, and IT teams.
- Provide regular migration status updates, executive dashboards, and risk escalations.

Professional and Technical skills:
- 10+ years of experience in Oracle Data Warehousing, including PL/SQL, partitioning, indexing, and tuning
- 3-5 years of experience in data warehouse migration or re-platforming projects
- Strong understanding of data modeling (dimensional/star schema), ETL pipelines, and data validation
- Experience with cloud data warehouse platforms like Snowflake, Databricks, Azure Synapse, BigQuery, or Redshift
- Familiarity with Informatica, ODI, Talend, or native cloud data movement tools (ADF, DMS, etc.)
- Proven experience in leading teams, managing project delivery, and working with global stakeholders
- Oracle certifications (e.g., Oracle Certified Professional (OCP))
- Cloud certifications: Azure Data Engineer, AWS Big Data Specialty, or GCP Data Engineer
- Hands-on experience with data lakehouse or delta architecture
- Experience in DevOps for data pipelines or CI/CD automation for ETL processes
- Strong analytical and problem-solving skills
- Excellent communication and stakeholder engagement
- Attention to detail in data accuracy and business continuity
- Proactive leadership and ability to work in a fast-paced environment

Additional information:
- The candidate should have a minimum of 3 years of experience.
- The position is at our Gurugram office.
- A 15-year full-time education is required.

Qualification: 15 years full time education

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

India

On-site

JOB DESCRIPTION

- Develop, test, and deploy data processing applications using Apache Spark and Scala.
- Optimize and tune Spark applications for better performance on large-scale data sets.
- Work with the Cloudera Hadoop ecosystem (e.g., HDFS, Hive, Impala, HBase, Kafka) to build data pipelines and storage solutions.
- Collaborate with data scientists, business analysts, and other developers to understand data requirements and deliver solutions.
- Design and implement high-performance data processing and analytics solutions.
- Ensure data integrity, accuracy, and security across all processing tasks.
- Troubleshoot and resolve performance issues in Spark, Cloudera, and related technologies.
- Implement version control and CI/CD pipelines for Spark applications.

Required Skills & Experience:
- Minimum 8 years of experience in application development.
- Strong hands-on experience in Apache Spark, Scala, and Spark SQL for distributed data processing.
- Hands-on experience with Cloudera Hadoop (CDH) components such as HDFS, Hive, Impala, HBase, Kafka, and Sqoop.
- Familiarity with other Big Data technologies, including Apache Kafka, Flume, Oozie, and NiFi.
- Experience building and optimizing ETL pipelines using Spark and working with structured and unstructured data.
- Experience with SQL and NoSQL databases such as HBase, Hive, and PostgreSQL.
- Knowledge of data warehousing concepts, dimensional modeling, and data lakes.
- Ability to troubleshoot and optimize Spark and Cloudera platform performance.
- Familiarity with version control tools like Git and CI/CD tools (e.g., Jenkins, GitLab).

Posted 2 weeks ago

Apply

6.0 - 11.0 years

20 - 35 Lacs

Hyderabad

Work from Office

Senior/Lead Data Engineer (Warehouse Architecture & PySpark)
Location: Hyderabad | Experience: 6-12 years | Team: Data Platform / Analytics Engineering

The opportunity
Own the end-to-end build of a modern analytics foundation: design clean warehouse models, craft high-quality PySpark transformations, and ship reliable pipelines feeding BI/ML at scale.

What you'll do
- Model the warehouse: Define dimensional/star schemas (SCD1/2, snapshots), conformed dimensions, and clear grains across core domains.
- Author robust transformations: Build performant PySpark jobs for batch/near-real-time; handle nested JSON, schema evolution, late data, and idempotent re-runs (see the illustrative sketch after this listing).
- Ingestion & CDC: Operate change-data-capture from relational and document stores; incremental patterns, backfills, and auditability.
- Orchestrate & automate: Own DAGs, retries, SLAs, and deployments in a modern scheduler with infra-as-code and CI/CD.
- Quality, lineage, observability: Freshness/uniqueness/RI tests, lineage, monitoring for drift/skew and job SLAs.
- Performance & cost: Partitioning, clustering/sort, join strategies, file sizing, columnar formats, workload management.
- Security & governance: RBAC, masking/tokenization for PII, data contracts with producers/consumers.
- Partner across functions: Work with product/engineering/finance/analytics to define SLAs, KPIs, and domain boundaries.

Must-have qualifications
- Warehouse modeling depth: 5+ years designing dimensional models at multi-TB scale; strong with grains, surrogate keys, SCD2, snapshots.
- PySpark expertise: Solid grasp of Spark execution (shuffles, skew mitigation, AQE), windowing, and UDF/UDTF trade-offs.
- Pipelines from mixed sources: Ingest from RDBMS and document/NoSQL systems; handle nested structures and schema evolution.
- Cloud DW proficiency: Hands-on with a Redshift-class warehouse and lake/lakehouse table formats (Parquet/Delta/Iceberg).
- Orchestration & CI/CD: Production experience with an Airflow-class scheduler, Git workflows, environment promotion, and IaC (Terraform/CDK).
- Data quality & lineage: Practical use of Great Expectations/Deequ (or equivalent) and lineage tooling; incident-prevention mindset.
- Streaming & CDC: Production experience with Kafka-class streams and Debezium-style CDC (topics/partitions, offset mgmt, schema registry, compaction).
- Semantic layer & ELT: Working experience with dbt (or equivalent) and a metrics/semantic layer (e.g., MetricFlow/LookML-style).
- Cost governance: Workload management, queue/WLM tuning, and price/perf optimization in cloud DWs.
- Privacy & compliance: Exposure to GDPR/DPDP concepts; secure-by-design patterns for PII.
- Ownership & communication: Clear docs/design reviews; ability to translate ambiguous asks into resilient datasets.

Interview signals
- Can sketch a star schema from messy OLTP + event streams and justify grain/keys/SCD choices.
- Reads a Spark plan and explains shuffle boundaries + a concrete skew fix.
- Describes Kafka + Debezium CDC patterns (outbox, schema evolution, retries, exactly-once/at-least-once trade-offs).
- Shows dbt modeling discipline (naming, tests, exposures, contracts) and how it fits with PySpark transforms.
- Demonstrates a real cost/perf win (partitioning/sort keys/file sizing/WLM) with before/after metrics.

Tech familiarity
PySpark, SQL; Redshift-class warehouses; Airflow-class orchestration; Kafka-class streaming; Debezium-style CDC; dbt + a semantic/metrics layer; Parquet/Delta/Iceberg; Great Expectations/Deequ; Terraform/CDK; GitLab/GitHub CI.
How to apply: Send your resume plus a short portfolio to ceo@akriviahcm.com: (1) a warehouse you modeled (diagram + 1–2 marts, grain & SCD notes), (2) a Kafka+CDC pipeline you built (architecture + failure recovery), and (3) one PySpark optimization that saved real cost/time.
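(Illustrative, not part of the posting.) The "idempotent re-runs" and SCD1/2 expectations above roughly translate into upsert logic like the minimal PySpark sketch below. The column names (customer_id, email, effective_from, effective_to, is_current) and the toy data are assumptions made for the example; a production job would typically express the same close-and-insert step as a single MERGE into a Delta or Iceberg table so that retries and concurrent writers stay safe.

```python
# Illustrative sketch only: a small SCD2-style upsert in PySpark.
# Column names and toy data are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("scd2_sketch").getOrCreate()

# Open dimension rows: is_current marks the active version of each key.
dim = spark.createDataFrame(
    [(1, "a@old.com", "2024-01-01", None, True)],
    "customer_id INT, email STRING, effective_from STRING, effective_to STRING, is_current BOOLEAN",
).alias("d")

# Incoming batch from the source system.
inc = spark.createDataFrame(
    [(1, "a@new.com", "2024-06-01"), (2, "b@new.com", "2024-06-01")],
    "customer_id INT, email STRING, effective_from STRING",
).alias("i")

# Keys whose tracked attribute actually changed in this batch.
changed = (
    dim.filter("is_current")
    .join(inc, "customer_id")
    .filter(F.col("d.email") != F.col("i.email"))
    .select("customer_id", F.col("i.effective_from").alias("change_date"))
)

# Close out the currently open row for each changed key.
closed = (
    dim.join(changed, "customer_id", "left")
    .withColumn(
        "effective_to",
        F.when(F.col("is_current") & F.col("change_date").isNotNull(),
               F.col("change_date")).otherwise(F.col("effective_to")),
    )
    .withColumn(
        "is_current",
        F.when(F.col("change_date").isNotNull(), F.lit(False))
         .otherwise(F.col("is_current")),
    )
    .drop("change_date")
)

# New current versions for changed keys, plus rows for keys never seen before.
new_rows = (
    inc.join(changed.select("customer_id"), "customer_id")
    .unionByName(inc.join(dim.select("customer_id"), "customer_id", "left_anti"))
    .withColumn("effective_to", F.lit(None).cast("string"))
    .withColumn("is_current", F.lit(True))
)

# Re-running the same batch produces the same result, which is what makes
# the pattern safe for retries (idempotent re-runs).
result = closed.unionByName(new_rows)
result.orderBy("customer_id", "effective_from").show()
```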

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION
Amazon is seeking a Business Intelligence Engineer, Customer Experience Strategy, based in Bangalore, India. Our team inspects, measures, and drives improvements to Amazon's customer experience, partnering with senior leaders across the organization. The ideal candidate will be highly analytical and demonstrate advanced problem solving, analysis, modeling, and reporting skills. You will be passionate about analyzing large data sets, leveraging multiple sources, and exploring new analytical tools to answer complex business questions. You will also be able to independently plan the scope of a project, understand the problem statement, construct the analysis, identify data sources, and execute the study end to end. The successful candidate will have excellent communication skills, working with business and technical stakeholders.

Key job responsibilities
- Build new analytical tools, interfacing with business and technical stakeholders
- Develop models and metrics providing insights on customer experience and performance
- Interface with other teams to extract, transform and load data
- Drive process efficiency and automation
- Drive adoption of analytical tools across business stakeholders
- Drive meaningful insights triangulating across multiple data sources and share actionable findings with stakeholders

About the team
Customer Experience and Business Trends (CXBT) is an organization made up of a diverse suite of functions dedicated to deeply understanding and improving customer experience, globally. We are a team of builders that develop products, services, ideas, and various ways of leveraging data to influence product and service offerings for almost every business at Amazon - for every customer (e.g., consumers, developers, sellers/brands, employees, investors, streamers, gamers). Our approach is based on determining the customer needs, along with problem solving, and we work backwards from there. We use technical and non-technical approaches and stay aware of industry and business trends. We are a global team, made up of a diverse set of profiles, skills, and backgrounds, including: Product Managers, Software Developers, Computer Vision experts, Solution Architects, Data Scientists, Business Intelligence Engineers, Business Analysts, Risk Managers, and more.

BASIC QUALIFICATIONS
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in statistical analysis packages such as R, SAS and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets
- Experience with Power BI, NLQ/Gen AI-supported queries, and the ability to understand model evaluation (including understanding a confusion matrix)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION
"When you attract people who have the DNA of pioneers and the DNA of explorers, you build a company of like-minded people who want to invent. And that's what they think about when they get up in the morning: how are we going to work backwards from customers and build a great service or a great product" - Jeff Bezos

Amazon.com's success is built on a foundation of customer obsession. Have you ever thought about what it takes to successfully deliver millions of packages to Amazon customers seamlessly every day, like clockwork? In order to make that happen, behind those millions of packages, billions of decisions get made by machines and humans. What is the accuracy of the customer-provided address? Do we know the exact location of the address on the map? Is there a safe place? Can we make an unattended delivery? Would a signature be required? Is the address a commercial property? Do we know the open business hours of the address? What if the customer is not home? Is there an alternate delivery address? Does the customer have any special preference? What other addresses also have packages to be delivered on the same day? Are we optimizing the delivery associate's route? Does the delivery associate know the locality well enough? Is there an access code to get inside the building? And the list simply goes on. At the core of all of it lies the quality of the underlying data that can help make those decisions in time.

The person in this role will be a strong influencer who will ensure goal alignment with Technology, Operations, and Finance teams. This role will serve as the face of the organization to global stakeholders. This position requires a results-oriented, high-energy, dynamic individual with both stamina and mental quickness to be able to work and thrive in a fast-paced, high-growth global organization. Excellent communication skills and executive presence to get in front of VPs and SVPs across Amazon will be imperative.

Key Strategic Objectives: Amazon is seeking an experienced leader to own the vision for quality improvement through global address management programs. As a Business Intelligence Engineer on Amazon's last mile quality team, you will be responsible for shaping the strategy and direction of customer-facing products that are core to the customer experience. As a key member of the last mile leadership team, you will continually raise the bar on both quality and performance. You will bring innovation, a strategic perspective, a passionate voice, and an ability to prioritize and execute on a fast-moving set of priorities, competitive pressures, and operational initiatives. You will partner closely with product and technology teams to define and build innovative and delightful experiences for customers. You must be highly analytical, able to work extremely effectively in a matrix organization, and have the ability to break complex problems down into steps that drive product development at Amazon speed. You will set the tempo for defect reduction through continuous improvement and drive accountability across multiple business units in order to deliver large-scale, high-visibility/high-impact projects. You will lead by example to be just as passionate about operational performance and predictability as you will be about all other aspects of customer experience.

The successful candidate will be able to:
- Effectively manage customer expectations and resolve conflicts that balance client and company needs.
- Develop processes to effectively maintain and disseminate project information to stakeholders.
- Be successful in a delivery-focused environment and determine the right processes to make the team successful. This opportunity requires excellent technical, problem-solving, and communication skills. The candidate is not just a policy maker/spokesperson but drives to get things done.
- Possess superior analytical abilities and judgment. Use quantitative and qualitative data to prioritize and influence, show creativity, experimentation and innovation, and drive projects with urgency in this fast-paced environment.
- Partner with key stakeholders to develop the vision and strategy for customer experience on our platforms. Influence product roadmaps based on this strategy along with your teams.
- Support the scalable growth of the company by developing and enabling the success of the Operations leadership team.
- Serve as a role model for Amazon Leadership Principles inside and outside the organization.
- Actively seek to implement and distribute best practices across the operation.

BASIC QUALIFICATIONS
- 3+ years of experience analyzing and interpreting data with Redshift, Oracle, NoSQL, etc.
- Experience with data visualization using Tableau, Quicksight, or similar tools
- Experience with data modeling, warehousing and building ETL pipelines
- Experience writing complex SQL queries
- Experience in statistical analysis packages such as R, SAS and Matlab
- Experience using SQL to pull data from a database or data warehouse and scripting experience (Python) to process data for modeling

PREFERRED QUALIFICATIONS
- Experience with AWS solutions such as EC2, DynamoDB, S3, and Redshift
- Experience in data mining, ETL, etc. and using databases in a business environment with large-scale, complex datasets

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

3.0 - 5.0 years

0 Lacs

India

On-site

DESCRIPTION
Would you like to work on one of the world's largest transactional distributed systems? How about working with customers and peers from the entire range of Amazon's business on cool new features? Whether you're passionate about building highly scalable and reliable systems or a software developer who likes to solve business problems, Selling Partner Services (SPS) is the place for you.

Our team is responsible for the Case Management System. We are looking for data engineers who thrive on complex problems and solve for operating complex, mission-critical systems under high loads. Our systems manage case resolution with hundreds of millions of requests and respond to millions of service requests. We have great data engineering and science opportunities. We aim to provide customizable and LLM-based solutions to our clients. Do you think you are up for this challenge? Or would you like to learn more and stretch your skills and career?

The successful candidate is expected to contribute to all parts of the data engineering and deployment lifecycle, including design, development, documentation, testing and maintenance. They must possess good verbal and written communication skills, be self-driven and deliver high quality results in a fast paced environment. You will thrive in our collaborative environment, working alongside accomplished engineers who value teamwork and technical excellence. We're looking for experienced technical leaders.

Key job responsibilities
1. Design/implement automation and manage our massive data infrastructure to scale for the analytics needs of case management.
2. Build solutions to achieve BAA (Best At Amazon) standards for system efficiency, IMR efficiency, data availability, consistency & compliance.
3. Enable efficient data exploration and experimentation on large datasets on our data platform, and implement data access control mechanisms for stand-alone datasets.
4. Design and implement scalable and cost-effective data infrastructure to enable non-IN (emerging marketplaces and WW) use cases on our data platform.
5. Interface with other technology teams to extract, transform, and load data from a wide variety of data sources using SQL, Amazon and AWS big data technologies.
6. Must possess strong verbal and written communication skills, be self-driven, and deliver high quality results in a fast-paced environment.
7. Drive operational excellence strongly within the team and build automation and mechanisms to reduce operations.
8. Enjoy working closely with your peers in a group of very smart and talented engineers.

BASIC QUALIFICATIONS
- 3+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience in at least one modern scripting or programming language, such as Python, Java, Scala, or NodeJS
- Experience as a data engineer or related specialty (e.g., software engineer, business intelligence engineer, data scientist) with a track record of manipulating, processing, and extracting value from large datasets

PREFERRED QUALIFICATIONS
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, FireHose, Lambda, and IAM roles and permissions
- Experience building large-scale, high-throughput, 24x7 data systems
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience providing technical leadership and mentoring other engineers on best practices for data engineering

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Posted 2 weeks ago

Apply

6.0 - 9.0 years

15 - 25 Lacs

New Delhi, Gurugram, Delhi / NCR

Hybrid

Key Responsibilities:
- Design, develop, and maintain highly scalable ETL pipelines for telecom data processing.
- Implement data ingestion, transformation, and integration workflows using Python and PySpark (a minimal example follows this listing).
- Work with AWS cloud services (S3, Glue, EMR, Redshift, Athena, Lambda, etc.) to manage and optimize big data solutions.
- Collaborate with product teams, data scientists, and business stakeholders to deliver data-driven solutions for network optimization, customer analytics, and business insights.
- Ensure data quality, reliability, and governance in all data systems.
- Optimize data pipelines for scalability, performance, and cost-efficiency in a telecom-scale environment.
- Monitor and troubleshoot data workflows, ensuring minimal downtime and high availability.

Required Skills & Qualifications:
- 6 to 8 years of proven experience as a Data Engineer.
- Strong hands-on expertise in Python and PySpark.
- Advanced knowledge of the AWS data ecosystem (S3, Glue, EMR, Lambda, Redshift, Athena).
- Solid experience in designing and maintaining ETL pipelines and big data workflows.
- Strong knowledge of data warehousing concepts, big data frameworks, and distributed data processing.
- Proficiency in SQL and experience with performance tuning.
- Familiarity with CI/CD processes and version control (Git).
- Excellent problem-solving, communication, and collaboration skills.

Preferred candidate profile: Notice period of 1 month or less, or currently serving notice.
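(Illustrative, not part of the listing.) As a rough sketch of the Python/PySpark and AWS work described above, the job below reads raw usage events from S3, aggregates them by day and subscriber, and writes partitioned Parquet for downstream Athena/Redshift queries. Bucket names, paths, and column names are hypothetical; a real pipeline would add schema enforcement, data-quality checks, and Glue or Airflow orchestration around this core.

```python
# Minimal illustrative sketch (hypothetical paths and columns): a daily
# PySpark batch that aggregates raw telecom usage events into a curated,
# partitioned Parquet dataset.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_usage_aggregate").getOrCreate()

# Raw events landed by an upstream ingestion job (path is an assumption).
raw = spark.read.json("s3://example-raw-bucket/usage_events/2024-06-01/")

daily_usage = (
    raw.filter(F.col("duration_sec") > 0)                  # drop empty records
       .withColumn("event_date", F.to_date("event_ts"))
       .groupBy("event_date", "subscriber_id", "cell_id")
       .agg(
           F.sum("duration_sec").alias("total_duration_sec"),
           F.sum("data_mb").alias("total_data_mb"),
           F.count("*").alias("event_count"),
       )
)

# Partition by date so downstream engines (Athena/Redshift Spectrum) can prune.
(daily_usage
    .repartition("event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-curated-bucket/daily_usage/"))
```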

Posted 2 weeks ago

Apply

8.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Architecture professional with 8-15 years of experience, your primary responsibility will be to design and implement data-centric solutions on Google Cloud Platform (GCP). You will utilize various GCP tools such as BigQuery, Google Cloud Storage, Cloud SQL, Memorystore, Dataflow, Dataproc, Artifact Registry, Cloud Build, Cloud Run, Vertex AI, Pub/Sub, and GCP APIs to create efficient and scalable solutions.

Your role will involve building ETL pipelines to ingest data from diverse sources into our system and developing data processing pipelines using programming languages like Java and Python for data extraction, transformation, and loading (ETL). You will be responsible for creating and maintaining data models to ensure efficient storage, retrieval, and analysis of large datasets. Additionally, you will deploy and manage both SQL and NoSQL databases like Bigtable, Firestore, or Cloud SQL based on project requirements. Your expertise will be crucial in optimizing data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure.

You will implement version control and CI/CD practices for data engineering workflows to ensure reliable and efficient deployments. You will leverage GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures. Troubleshooting and resolving issues related to data processing, storage, and retrieval will be part of your daily tasks. Addressing code quality issues throughout the development lifecycle using tools like SonarQube, Checkmarx, Fossa, and Cycode will also be essential. Implementing security measures and data governance policies to maintain the integrity and confidentiality of data will be a critical aspect of your role.

Collaboration with stakeholders to gather and define data requirements aligned with business objectives is key to success. You will develop and maintain documentation for data engineering processes to facilitate knowledge transfer and system maintenance, and participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems. Furthermore, providing mentorship and guidance to junior team members to foster a collaborative and knowledge-sharing environment will be an integral part of your role as a Data Architecture professional.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

You will be a skilled Celonis Consultant at OFI Services, based in Bengaluru. Your expertise in Process Mining, Automation, and Robotic Process Automation (RPA) will be crucial in driving process optimization and digital transformation for clients. As a part of our team, you will be responsible for implementing Celonis solutions to enhance operational efficiency. Collaborating closely with clients, you will identify process inefficiencies, create dashboards, and deliver actionable insights.

Your key responsibilities will include developing and implementing Celonis Process Mining solutions to uncover inefficiencies and improvement opportunities. You will also be optimizing ETL pipelines for data accuracy and integrity while handling data from enterprise systems like SAP, Salesforce, and ServiceNow. Collaborating with various teams, you will translate business requirements into technical solutions within Celonis. Additionally, you will be responsible for performance tuning by optimizing SQL queries and troubleshooting Celonis data pipelines for enhanced reliability.

In this role, you can expect to work in a dynamic and international environment that prioritizes innovation and impact. You will have the opportunity to engage with cutting-edge technologies and global clients. Furthermore, we offer career development and training opportunities in automation, AI, and consulting. Competitive compensation and flexible working arrangements are part of the benefits package associated with this role.

Posted 2 weeks ago

Apply

2.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an AI Architect at Novartis Healthcare Private Limited, you will be responsible for designing and implementing cutting-edge Generative AI (GenAI) solutions specifically tailored for the pharmaceutical and life sciences industry. Your role will involve defining AI strategies, selecting and fine-tuning generative AI models, and ensuring seamless integration with enterprise systems to enhance various aspects such as drug discovery, clinical trials, regulatory compliance, and patient-centric solutions. Collaboration with data scientists, engineers, and business stakeholders will be essential to develop scalable, compliant, and high-performance AI applications leveraging large language models (LLMs), multimodal AI, and AI-driven automation.

Your key responsibilities will include defining and implementing a generative AI architecture and roadmap aligned with business goals in pharma and life sciences, architecting scalable GenAI solutions for various applications in the industry, working on the development, fine-tuning, and optimization of large language models (LLMs), designing GenAI solutions leveraging cloud platforms or on-premise infrastructure while ensuring data security and regulatory compliance, implementing best practices for GenAI model deployment, monitoring, and lifecycle management within GxP-compliant environments, ensuring compliance with regulatory standards and responsible AI principles, driving efficiency in generative AI models, collaborating with cross-functional teams, staying updated with the latest advancements in GenAI, and incorporating emerging technologies into the company's AI strategy.

To be successful in this role, you are required to have a Bachelor's or Master's degree in Computer Science, AI, Data Science, Bioinformatics, or a related field, along with 8+ years of experience in AI/ML development with at least 2 years in an AI Architect or GenAI Architect role in pharma, biotech, or life sciences. Technical expertise in Generative AI, large language models (LLMs), multimodal AI, deep learning, AI/ML frameworks, data engineering, ETL pipelines, programming languages, cloud AI services, MLOps, DevOps, regulatory & ethical AI, and problem-solving skills are essential. Excellent communication and leadership skills are also required to articulate GenAI strategies and solutions to technical and non-technical stakeholders.

Preferred qualifications include experience in GenAI applications for medical writing, automated clinical trial protocols, drug discovery, and regulatory intelligence, knowledge of AI explainability, retrieval-augmented generation (RAG), knowledge graphs, and synthetic data generation in life sciences, AI/ML certifications, understanding of biomedical ontologies, semantic AI models, and federated learning, exposure to fine-tuning LLM models, and familiarity with specific AI models like BioGPT, PubMedBERT, and custom SLMs.

Novartis is committed to fostering an inclusive work environment and building diverse teams that are representative of the patients and communities the company serves. If you require accommodation during the recruitment process due to a medical condition or disability, you can reach out to [email protected] with your request and contact information. Join Novartis in creating a brighter future together and explore the opportunities within the Novartis Network for suitable career options.

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Delhi

On-site

The Senior Analyst, Data & Marketing Analytics, will play a crucial role in establishing the foundation of Bain's marketing analytics ecosystem. This position presents an enriching opportunity for a seasoned analyst to expand their scope of responsibilities and contribute to strategic outcomes. Your primary responsibilities will involve actively participating in designing data infrastructure, generating insights, and facilitating scalable reporting, all while collaborating closely with marketing, digital, and technology stakeholders. From constructing data pipelines and dashboards to overseeing agile projects and leading critical discussions, this role offers you the chance to influence how analytics drives strategic marketing at Bain. You will excel in an agile, fast-paced setting and engage in close collaboration with stakeholders across the marketing and analytics ecosystem.

Responsibilities

Data Analytics & Insight Generation (30%)
- Analyze marketing, digital, and campaign data to identify patterns and provide actionable insights.
- Support performance evaluation, experimentation, and strategic decision-making throughout the marketing funnel.
- Transform business inquiries into structured analyses and data-driven narratives.

Data Infrastructure & Engineering (30%)
- Develop and sustain scalable data pipelines and workflows utilizing SQL, Python, and Databricks.
- Construct and enhance a marketing data lake by integrating APIs and data from various platforms and tools.
- Operate within cloud environments (Azure, AWS) to uphold analytics-ready data at scale.

Project & Delivery Ownership (25%)
- Act as a project lead or scrum owner for analytics initiatives by organizing sprints, overseeing delivery, and fostering alignment.
- Utilize tools like JIRA to manage tasks in an agile environment and ensure prompt execution.
- Coordinate with cross-functional teams to synchronize priorities and execute roadmap initiatives.

Visualization & Platform Enablement (15%)
- Develop impactful dashboards and data products using Tableau, with an emphasis on usability, scalability, and performance.
- Facilitate stakeholder self-service through well-organized data architecture and visualization best practices.
- Explore new tools and capabilities, including GenAI for supported analytics.

Experience
- 5+ years of experience in data analytics, digital analytics, or data engineering, ideally in a marketing or commercial context.
- Proficient in SQL, Python, and tools like Databricks, Azure, or AWS.
- Demonstrated ability in constructing and managing data lakes, ETL pipelines, and API integrations.
- Proficiency in Tableau; familiarity with Tableau Prep is advantageous.
- Knowledge of Google Analytics (GA4), GTM, and social media analytics platforms.
- Experience working in agile teams, utilizing JIRA for sprint planning and delivery.
- Exposure to predictive analytics, modeling, and GenAI applications is beneficial.
- Strong communication and presentation skills, capable of leading significant meetings and delivering clear insights to senior stakeholders.
- Excellent organizational and project management abilities; adept at handling competing priorities.
- Meticulous attention to detail, a sense of ownership, and a collaborative, results-oriented attitude.

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

Maharashtra

On-site

You will be joining our dynamic Engineering team as a skilled Sr. Production Support Engineer at Yubi. Your primary responsibility will involve taking ownership of debugging day-to-day issues, identifying root causes, improving broken processes, and ensuring the smooth operation of our systems. By closely collaborating with cross-functional teams, you will analyze, debug, and enhance system performance, contributing to a more efficient and reliable infrastructure.

Your key responsibilities will include incident debugging and resolution, data analysis and query writing, scripting and automation, process improvement, and collaboration. You will investigate and resolve daily production issues, perform root cause analysis, and implement solutions to prevent recurring issues. Additionally, you will write and optimize custom queries for various data systems, analyze system and application logs, develop and maintain custom Python scripts, and create automated solutions to address inefficiencies.

To excel in this role, you should have at least 3-5 years of hands-on experience in debugging, Python scripting, and production support in a technical environment. You must be proficient in Python scripting for automation with Pandas, writing and optimizing queries for databases like MySQL, Postgres, MongoDB, and Redshift, and have familiarity with ETL pipelines, APIs, or data integration tools.

Ideal candidates will possess exceptional analytical and troubleshooting skills, the ability to identify inefficiencies and implement practical solutions for system reliability and workflows, and excellent verbal and written communication skills for effective cross-functional collaboration and documentation. Exposure to tools like Airflow, Pandas, or NumPy, familiarity with production monitoring tools like New Relic or Datadog, experience with cloud platforms such as AWS, GCP, or Azure, and basic knowledge of CI/CD pipelines will be considered advantageous.

This position is based in Mumbai - BKC. Join us at Yubi, where transparency, collaboration, and the power of possibility drive our journey towards global corporate markets with a holistic product suite designed to unleash your potential.

Posted 2 weeks ago

Apply

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

Lead Data Scientist - Healthcare Domain Specialist

As a key leader in the data science team at RT Global Infosolutions Pvt Ltd, you will play a pivotal role in defining strategy, leading projects, and collaborating with healthcare professionals, engineers, and product teams to deploy scalable AI solutions. Your primary responsibilities will include designing, developing, and optimizing predictive models for elderly fall risk assessment using advanced machine learning and deep learning techniques. You will work with healthcare-specific data to uncover patterns and actionable insights, ensuring the accuracy, reliability, and ethical use of models in predicting fall risks. Collaboration with clinicians, healthcare providers, and cross-functional teams to align AI solutions with clinical workflows and patient care strategies will be essential. You will also develop robust ETL pipelines, continuously evaluate model performance, ensure compliance with healthcare data regulations, stay current with the latest research in healthcare AI, and guide the team in technical problem-solving and day-to-day task management. Presenting insights, models, and business impact assessments to senior leadership and healthcare stakeholders will also be part of your role.

**Key Responsibilities**
- Design, develop, and optimize predictive models for elderly fall risk assessment using advanced machine learning (ML) and deep learning techniques.
- Work with healthcare-specific data to uncover patterns and actionable insights.
- Leverage healthcare domain knowledge to ensure accuracy, reliability, and ethical use of models in predicting fall risks.
- Collaborate with clinicians, healthcare providers, and cross-functional teams to align AI solutions with clinical workflows and patient care strategies.
- Develop robust ETL pipelines to preprocess and integrate healthcare data from multiple sources, ensuring data quality and compliance.
- Continuously evaluate model performance and refine algorithms to achieve high accuracy and generalizability.
- Ensure compliance with healthcare data regulations such as HIPAA and GDPR, and implement best practices for data privacy and security.
- Stay updated with the latest research in healthcare AI, predictive analytics, and elderly care solutions, integrating new techniques as applicable.
- Guide team members in technical and domain-specific problem-solving, manage day-to-day task deliverables, evaluate individuals' performance, and provide coaching.
- Present insights, models, and business impact assessments to senior leadership and healthcare stakeholders.

**Required Skills & Qualifications**
- Master's or PhD in Data Science, Computer Science, Statistics, Bioinformatics, or a related field. A strong academic background in healthcare is preferred.
- 8-11 years of experience in data science, with at least 2 years in the healthcare domain.
- Ability to work in cross-functional teams.
- Ability to publish papers and research findings related to healthcare data science.
- Proficiency in Python, R, or other programming languages used for ML and data analysis.
- Hands-on experience with ML/DL frameworks (e.g., TensorFlow, PyTorch, Scikit-learn).
- Experience with time-series data, wearable/sensor data, or IoT data integration is a plus.
- Strong knowledge of statistics, probability, and feature engineering.
- Familiarity with cloud platforms (AWS, Azure, GCP) and tools for scalable ML pipelines.
- Understanding of geriatric healthcare challenges, fall risks, and predictive care strategies.
- Familiarity with Electronic Health Records (EHR), wearable devices, and sensor data.
- Knowledge of healthcare data compliance (e.g., HIPAA, GDPR).
- Strong analytical and problem-solving abilities.
- Excellent communication skills to present findings to non-technical stakeholders.
- A collaborative mindset to work with interdisciplinary teams.

**Preferred Qualifications**
- Knowledge of biomechanics or human movement analysis.
- Experience with explainable AI (XAI) and interpretable ML models.

Join us at RT Global Infosolutions Pvt Ltd to work on cutting-edge healthcare AI solutions that positively impact elderly lives. We offer a competitive salary and benefits package, a flexible work environment, opportunities for professional growth and leadership, and a collaborative and inclusive culture that values innovation and teamwork.
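As a hedged illustration of the modelling work this role describes, the sketch below trains a simple fall-risk classifier on synthetic data; the feature names, thresholds, and label are hypothetical stand-ins for the EHR, wearable, and sensor features a real project would use, not the company's actual model.

```python
# Minimal fall-risk classification sketch (hypothetical features, synthetic data).
# Real projects would draw these features from EHR, wearable, and sensor sources.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 1_000
df = pd.DataFrame({
    "age": rng.integers(65, 95, n),                # years
    "gait_speed_mps": rng.normal(0.9, 0.2, n),     # metres per second
    "prior_falls_12m": rng.poisson(0.4, n),        # falls in the last 12 months
    "medication_count": rng.integers(0, 12, n),    # polypharmacy proxy
    "balance_score": rng.normal(50, 10, n),        # Berg-like balance scale
})
# Synthetic label: risk rises with age and prior falls, drops with faster gait.
risk = 0.04 * (df["age"] - 65) + 0.8 * df["prior_falls_12m"] - 1.5 * df["gait_speed_mps"]
df["fell_within_6m"] = (risk + rng.normal(0, 1, n) > 0.5).astype(int)

X, y = df.drop(columns="fell_within_6m"), df["fell_within_6m"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0
)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"held-out ROC-AUC: {auc:.3f}")
```

In practice, model evaluation for this use case would go beyond a single ROC-AUC figure to include calibration, subgroup performance, and clinical review, in line with the compliance and ethics responsibilities listed above.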

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

ahmedabad, gujarat

On-site

The ideal candidate should have a minimum of 3 years of experience in Data Engineering, with proven hands-on experience building ETL pipelines and owning them end to end. Deep expertise in AWS resources such as EC2, Athena, Lambda, and Step Functions is critical to the role, and proficiency in MySQL is non-negotiable. Experience with Docker, including setup, deployment, and troubleshooting, is also required. Experience with Airflow or any modern orchestration tool, PySpark, the Python ecosystem, SQLAlchemy, DuckDB, PyArrow, Pandas, NumPy, and DLT (Data Load Tool) would be considered advantageous. The successful candidate should be a proactive builder, capable of working independently while communicating effectively, and should thrive in fast-paced startup environments, prioritizing ownership and impact over just writing code. Please include the code word "Red Panda" in your application to indicate that you have carefully reviewed this section. In this role, you will architect, build, and optimize robust data pipelines and workflows, take ownership of configuring, optimizing, and troubleshooting AWS resources, and collaborate with product and engineering teams to deliver quick business impact. The emphasis will be on automating and scaling data processes to eliminate manual work, laying the foundation for informed business decisions. Only serious and relevant applicants will be considered for this position.
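As a rough sketch of the AWS-centric workflow described above (not this company's actual pipeline), the Lambda handler below starts an Athena query and waits for it to finish; the database, table, bucket, and event fields are hypothetical, and in a real setup Step Functions would typically own the polling loop.

```python
# Sketch of an AWS Lambda handler that kicks off an Athena query for a daily
# ETL step. Bucket, database, and table names below are hypothetical.
import time
import boto3

athena = boto3.client("athena")

QUERY = """
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM raw_events.orders
    WHERE order_date = DATE '{ds}'
    GROUP BY order_date
"""

def handler(event, context):
    ds = event.get("ds", "2024-01-01")  # partition date passed in by the scheduler
    execution = athena.start_query_execution(
        QueryString=QUERY.format(ds=ds),
        QueryExecutionContext={"Database": "analytics"},
        ResultConfiguration={"OutputLocation": "s3://example-etl-bucket/athena-results/"},
    )
    query_id = execution["QueryExecutionId"]

    # Poll until the query finishes; a Step Functions wait state would normally do this.
    while True:
        status = athena.get_query_execution(QueryExecutionId=query_id)
        state = status["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query {query_id} ended in state {state}")
    return {"query_execution_id": query_id, "state": state}
```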

Posted 2 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

navi mumbai, maharashtra

On-site

As an Analyst, you will play a crucial role in supporting data-driven decision-making within the organization. Your responsibilities will include designing and maintaining dashboards to visualize key metrics, conducting data analysis to uncover trends and insights for business decisions, and collaborating with cross-functional teams to understand data needs and deliver solutions. Additionally, you will work closely with the product, sales, and marketing teams to automate repetitive workflows and data processes, as well as develop and manage ETL pipelines for seamless data extraction, transformation, and loading. To excel in this role, you should hold a Bachelor's or Master's degree in computer science, data science, or a related field, along with proven experience in data analysis. You should have strong proficiency in developing reports and dashboards using tools such as Metabase and Redash, the ability to write advanced Mongo queries, and solid Excel skills. Analytical thinking, a solid understanding of data mining techniques, and strong communication and collaboration skills are essential for success in this position. Experience with Python for data processing and pipeline building, using libraries such as Pandas, NumPy, and Matplotlib, is desirable. By joining our team, you will have the opportunity to work on impactful, data-driven projects in a collaborative work environment with growth potential.
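For illustration only, the snippet below shows one way an analyst might pull a daily-signups metric out of MongoDB into pandas for a Metabase/Redash-style dashboard; the connection string, database, collection, and field names are hypothetical.

```python
# Sketch: pull a daily-signups metric out of MongoDB into pandas for a dashboard.
# The connection string, database, collection, and field names are hypothetical.
import pandas as pd
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")
users = client["app_db"]["users"]

pipeline = [
    {"$match": {"created_at": {"$gte": pd.Timestamp("2024-01-01").to_pydatetime()}}},
    {"$group": {
        "_id": {"$dateToString": {"format": "%Y-%m-%d", "date": "$created_at"}},
        "signups": {"$sum": 1},
    }},
    {"$sort": {"_id": 1}},
]

daily = pd.DataFrame(users.aggregate(pipeline)).rename(columns={"_id": "date"})
print(daily.tail())
```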

Posted 2 weeks ago

Apply

8.0 - 13.0 years

30 - 40 Lacs

pune

Hybrid

Name of Position: Senior Backend Developer – JAVA, ETL & Microservices
Company Name: Dataceria Software Solutions Pvt. Ltd.
Job Type: Permanent
Experience: 8+ Years
Location: Pune (Hybrid – 2-3 days in office)
Work Timings: USA & UK time zone aligned

About the Role
Dataceria is seeking highly skilled Senior Backend Developers (Java, Microservices, ETL) to join a strategic modernization project for a global client. This role involves working on complex enterprise applications in a hybrid work model, collaborating with architects, leads, and other senior developers to deliver scalable and robust backend solutions.

Key Responsibilities
- Design, develop, and maintain backend services using Java 8+, Spring Boot, and Hibernate/JPA.
- Build, deploy, and manage microservices in Kubernetes (AKS or equivalent).
- Design and implement ETL pipelines using Apache Airflow, Spring Batch, or Apache Camel.
- Work with Snowflake for pipeline creation, DB deployment, and query optimization.
- Integrate messaging systems using Kafka or equivalent enterprise messaging tools.
- Collaborate with cloud infrastructure teams to deploy and maintain services on Azure Cloud (AWS/GCP acceptable if willing to cross-skill).
- Develop and manage RESTful APIs for microservices communication.
- Contribute to CI/CD pipeline setup, deployment automation, and Git-based version control.
- Collaborate in Agile Scrum teams, participating in sprint planning, reviews, and retrospectives.
- Troubleshoot, debug, and resolve production issues efficiently.

What We're Looking For
Must-Have Skills:
- 8+ years in software development with a strong backend coding foundation (Java 8+, J2EE, Spring Boot, Hibernate/JPA).
- Expertise in microservices architecture, REST APIs, and JSON.
- Hands-on experience with ETL tools – Apache Airflow, Spring Batch, or Apache Camel.
- Strong SQL knowledge (MS-SQL, PostgreSQL) plus Snowflake database expertise.
- Proficiency with Azure Cloud, Kubernetes (AKS), and Docker.
- Messaging systems: Kafka (or equivalent).
- Experience in CI/CD setup and troubleshooting (preferably Azure DevOps).
- Strong leadership, problem-solving, and communication skills.
- Ability to work in fast-paced, global environments with flexible hours when required.

Preferred Skills:
- Exposure to UI technologies (ReactJS, JavaScript, HTML5, CSS3).
- Experience in Financial/Banking domains.
- Familiarity with Maven, Gradle, Git, Terraform (IaC).
- DB performance tuning and Infrastructure-as-Code.
- Tools knowledge: Control-M, Dynatrace, ServiceNow.
- Strong Unix/Linux scripting and command-line expertise.

Role Scope & Expectations
- Work across legacy and modernization projects within microservices-based architectures.
- Assigned to scrum teams in Pune, with potential module rotations.
- Contribute actively to architecture discussions and solution design.
- High priority on Java + Microservices + Cloud (Azure/Kubernetes), with Snowflake & ETL expertise as must-haves.
- Work closely with global teams (APAC/EMEA/US); calls are often scheduled in India evenings.

Candidate Fit – Priorities
- Strong backend developer with proven microservices and cloud-native deployment experience.
- Snowflake & ETL expertise is mandatory – minimal learning curve acceptable.
- Deal-breakers: weak backend fundamentals or lack of cloud-native deployment knowledge.
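Although the role itself is Java-centric, Airflow DAGs are authored in Python, so a minimal sketch of the kind of scheduled Snowflake load described above might look like the following. It assumes the apache-airflow-providers-snowflake package and a configured "snowflake_default" connection; the table, stage, and schedule are hypothetical.

```python
# Illustrative Airflow DAG for a daily extract -> Snowflake load. Assumes the
# apache-airflow-providers-snowflake package and a "snowflake_default" connection;
# the table, stage, and file layout below are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.snowflake.operators.snowflake import SnowflakeOperator

with DAG(
    dag_id="daily_orders_to_snowflake",
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",   # run at 02:00 every day
    catchup=False,
) as dag:
    load_orders = SnowflakeOperator(
        task_id="copy_orders_into_snowflake",
        snowflake_conn_id="snowflake_default",
        sql="""
            COPY INTO analytics.orders
            FROM @analytics.etl_stage/orders/{{ ds }}/
            FILE_FORMAT = (TYPE = 'PARQUET')
            MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE;
        """,
    )
```

A comparable step could be expressed with Spring Batch or Apache Camel in Java; the DAG form is shown only because Airflow is one of the ETL tools named in the posting.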

Posted 2 weeks ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

bengaluru

Work from Office

We're currently looking for:
We are looking for a detail-oriented Data Analyst to join our team. In this role, you will be responsible for optimizing and improving our ETL pipelines, generating new datasets, creating product metrics, and collaborating across teams to solve complex problems. The role requires a keen understanding of system behavior, strong technical skills, and the ability to engage with cross-functional teams to drive product improvements.

Responsibilities:
Continuous Improvement and Optimization of ETL Pipelines:
- Work on the continuous improvement of ETL pipelines to ensure reliability, consistency, and performance.
- Elaborate new data entities for the reporting development environment.
- Apply strong technical skills in data engineering, using SQL to design, implement, and optimize data pipelines.
Generating New Data for Cases (Understanding System Behavior):
- Gain in-depth knowledge of system behavior and processes.
- Generate data to support case analysis, ensuring accuracy and relevance.
- Show a willingness to dive into areas outside of immediate responsibility to understand all aspects of the system.
Product Metrics Elaboration and Consistency Validation:
- Create a comprehensive set of product metrics in alignment with product requirements.
- Validate the consistency and accuracy of metrics to ensure they meet established standards.
Conduct Scheduled Releases:
- Manage and execute scheduled product releases with a high level of discipline and attention to detail.
Root Cause Analysis and Requirements Creation:
- Investigate issues, identify their root causes, and collaborate with relevant teams to resolve them.
- Demonstrate a strong desire to understand the essence of the problem and provide meaningful insights.
Manage Tasks and Cross-Team Communication:
- Collaborate effectively with cross-functional teams to manage and resolve product-related tasks.
- Engage stakeholders to drive alignment and ensure that issues are addressed in a timely manner.
- Set tasks, take accurate notes, and track progress, ensuring proper communication flow.

Required Skills & Qualifications:
- Educational background: Bachelor's degree in Statistics, Data Science, Computer Science, or a related field.
- 6+ years of experience in data engineering, with a focus on developing and optimizing data pipelines using SQL.
- Proven experience with data visualization tools such as Tableau, Power BI, or Looker.
- Ability to understand system behavior in depth and generate data to support case analysis.
- Strong attention to detail with the ability to structure and validate product metrics.
- Ability to conduct scheduled releases with discipline.
- Strong problem-solving skills, with a desire to get to the root cause of issues and provide clear solutions.
- Excellent cross-team communication skills, including task management, stakeholder engagement, and effective note-taking.
- Familiarity with agile processes and cross-team collaboration.

What we offer: Mediclaim benefits, paid holidays, casual/sick leave, privilege leave, bereavement leave, maternity & paternity leave, wellness programs & coaching, employee referral bonus, professional development allowances, and night shift allowances.
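As a hedged illustration of the metric consistency validation described in this posting, the sketch below compares a reporting table against a recount from raw events; the connection string, schemas, and column names are hypothetical and stand in for whatever warehouse and metric the team actually uses.

```python
# Sketch of a consistency check between a reporting table and its source events;
# connection string, schemas, and column names are hypothetical.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("postgresql+psycopg2://analyst:secret@localhost/analytics")

reported = pd.read_sql(
    "SELECT metric_date, active_users FROM reporting.daily_active_users", engine
)
raw = pd.read_sql(
    """
    SELECT CAST(event_time AS DATE) AS metric_date,
           COUNT(DISTINCT user_id) AS active_users
    FROM events.app_events
    GROUP BY 1
    """,
    engine,
)

check = reported.merge(raw, on="metric_date", suffixes=("_reported", "_raw"))
check["abs_diff"] = (check["active_users_reported"] - check["active_users_raw"]).abs()
bad_days = check[check["abs_diff"] > 0]
print(f"{len(bad_days)} day(s) where the reported metric drifts from the raw count")
```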

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Ready to build the future with AI? At Genpact, we don't just keep up with technology - we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Senior Manager, Pharma Commercial Analytics, Life Sciences & Healthcare. In this role, the candidate should have hands-on experience working on commercial projects across pharma / life sciences, including claims, EMR, and EHR data, as well as strategies/roadmaps, while ensuring timely delivery to meet business needs.

Responsibilities
- Experience across commercial analytics functions such as patient analytics and claims, EMR, and EHR data analytics.
- Understand the incoming datasets (claims, EHR, lab, registry, etc.) and identify the available patient-identifiable fields.
- Map source data fields to the required Datavant tokenization fields.
- Set up and maintain the Datavant tokenization tool/environment (on-prem or cloud).
- Apply the correct tokenization profiles (e.g., Profile 1, Profile 2) based on use case and compliance requirements.
- Handle multiple datasets and synchronize tokenization for cross-dataset linkage.
- Troubleshoot low match performance and improve it via data cleansing or profile adjustments.
- Develop ETL pipelines or scripts to automate tokenization workflows.
- Create reusable mapping templates for profiles.
- Track and report on processing time, match rates, and error rates.
- Experience with pharma data sets such as Xponent, PlanTrak, NPA, DDD, LAAD, Symphony claims, specialty data assets, Datavant, etc.
- Experience in requirement gathering, scoping, solutioning, and project management, and in executing multiple projects in parallel.
- Liaise with Datavant support for technical issues and feature updates.
- Able to manage a large team and act as liaison to the onshore team.

Qualifications we seek in you!
Minimum Qualifications
- Bachelor's degree in Technology or Pharmacy.

Preferred Qualifications/Skills
- Pharma domain knowledge. Experience with different data assets and with sales reporting/commercial analytics teams in healthcare/pharma/life sciences would be a plus.
- Hands-on experience in SQL, Databricks, Snowflake, R, and Python.
- Overall, the candidate should bring a problem-solving, macro-level research and analytics approach and be good with numbers.
- Good Excel/PowerPoint skills.
- Good project management and problem-solving skills.
- Effective communication and interpersonal skills.
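By way of illustration only, the sketch below shows the kind of pre-tokenization field mapping and cross-dataset match-rate reporting this role covers. It does not use the Datavant tooling or SDK itself; the file names, column names, and mapping are hypothetical, and in production the linkage column would be a privacy-preserving token generated by Datavant, not a plain-text key.

```python
# Illustrative pre-tokenization prep and cross-dataset match-rate report.
# This does NOT use the Datavant SDK; file and column names are hypothetical,
# and real tokens would come from the Datavant tooling itself.
import pandas as pd

FIELD_MAP = {  # source column -> standardized identity field expected downstream
    "pat_first_nm": "first_name",
    "pat_last_nm": "last_name",
    "birth_dt": "dob",
    "zip_cd": "zip3",
}

def prepare(path: str) -> pd.DataFrame:
    df = pd.read_csv(path).rename(columns=FIELD_MAP)
    df["first_name"] = df["first_name"].str.strip().str.upper()
    df["last_name"] = df["last_name"].str.strip().str.upper()
    df["dob"] = pd.to_datetime(df["dob"], errors="coerce").dt.strftime("%Y-%m-%d")
    df["zip3"] = df["zip3"].astype(str).str[:3]
    # Stand-in key; in production this column would hold a privacy-preserving token.
    df["link_key"] = df["first_name"] + "|" + df["last_name"] + "|" + df["dob"]
    return df

claims = prepare("claims_extract.csv")
ehr = prepare("ehr_extract.csv")
matched = claims["link_key"].isin(set(ehr["link_key"])).mean()
print(f"claims rows linking to EHR: {matched:.1%}")
```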
Why join Genpact
- Lead AI-first transformation - Build and scale AI solutions that redefine industries.
- Make an impact - Drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - Gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best - Learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI - Work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture - Our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 3 weeks ago

Apply