
6639 Databricks Jobs - Page 9

JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Description: Senior Data Developer I
Location: Gurugram, India
Employment Type: Full-Time
Experience Level: Mid to Senior-Level
Department: Data & Analytics / IT

Job Summary
We are seeking an experienced Data Developer with expertise in Microsoft Fabric, Azure Synapse Analytics, Databricks, and strong SQL development skills. The ideal candidate will work on end-to-end data solutions supporting analytics initiatives across clinical, regulatory, and commercial domains in the Life Sciences industry. Familiarity with Azure DevOps and relevant certifications such as DP-700 and Databricks Data Engineer Associate/Professional are preferred. Power BI knowledge is highly preferable to support integrated analytics and reporting.

Key Responsibilities
- Design, develop, and maintain scalable and secure data pipelines using Microsoft Fabric, Azure Synapse Analytics, and Azure Databricks to support critical business processes.
- Develop curated datasets for clinical, regulatory, and commercial analytics using SQL and PySpark (see the sketch after this listing).
- Create and support dashboards and reports using Power BI (highly preferred).
- Collaborate with cross-functional stakeholders to understand data needs and translate them into technical solutions.
- Work closely with ERP teams such as Salesforce.com and SAP S/4HANA to integrate and transform business-critical data into analytic-ready formats.
- Partner with Data Scientists to enable advanced analytics and machine learning initiatives by providing clean, reliable, and well-structured data.
- Ensure data quality, lineage, and documentation in accordance with GxP, 21 CFR Part 11, and industry best practices.
- Use Azure DevOps to manage code repositories, track tasks, and support agile delivery processes.
- Monitor, troubleshoot, and optimize data workflows for reliability and performance.
- Contribute to the design of scalable, compliant data models and architecture.

Required Qualifications
- Bachelor's or Master's degree in Computer Science.
- 5+ years of experience in data development or data engineering roles.
- Hands-on experience with: Microsoft Fabric (Lakehouse, Pipelines, Dataflows); Azure Synapse Analytics (Dedicated/Serverless SQL Pools, Pipelines); Azure Data Factory and Apache Spark; Azure Databricks (Notebooks, Delta Lake, Unity Catalog); SQL (complex queries, optimization, transformation logic).
- Familiarity with Azure DevOps (Repos, Pipelines, Boards).
- Understanding of data governance, security, and compliance in the Life Sciences domain.

Certifications (Preferred)
- Microsoft Certified: DP-700 – Fabric Analytics Engineer Associate
- Databricks Certified Data Engineer Associate or Professional

Preferred Skills
- Strong knowledge of Power BI (highly preferred)
- Familiarity with HIPAA, GxP, and 21 CFR Part 11 compliance
- Experience working with ERP data from Salesforce.com and SAP S/4HANA
- Exposure to clinical trial, regulatory submission, or quality management data
- Good understanding of AI and ML concepts
- Experience working with APIs
- Excellent communication skills and the ability to collaborate across global teams

Location - Gurugram
Mode - Hybrid
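To illustrate the kind of curated-dataset work this posting describes, here is a minimal PySpark sketch of a Databricks-style pipeline. The table and column names (raw_clinical.visits, curated.visit_summary, visit_date, patient_id) are hypothetical placeholders, not from the posting.

```python
# Minimal sketch of a curated-dataset pipeline on Databricks (assumed setup).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curated-clinical").getOrCreate()

raw = spark.read.table("raw_clinical.visits")  # hypothetical source table

curated = (
    raw.filter(F.col("visit_date").isNotNull())          # basic quality gate
       .withColumn("visit_year", F.year("visit_date"))   # derived attribute
       .groupBy("patient_id", "visit_year")
       .agg(F.count("*").alias("visit_count"))
)

# Writing as Delta keeps the dataset versioned and auditable, which supports
# the lineage and documentation requirements the posting mentions.
curated.write.format("delta").mode("overwrite").saveAsTable("curated.visit_summary")
```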

Posted 2 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Hello! You've landed on this page, which means you're interested in working with us. Let's take a sneak peek at what it's like to work at Innovaccer.

Analytics at Innovaccer
Our analytics team is dedicated to weaving analytics and data science magic across our products. They are the owners and custodians of the intelligence behind our products. With their expertise and innovative approach, they play a crucial role in building various analytical models (including descriptive, predictive, and prescriptive) to help our end-users make smart decisions. Their focus on continuous improvement and cutting-edge methodologies ensures that they're always creating market-leading solutions that propel our products to new heights of success.

About The Role
Data is the foundation of our innovation. We are seeking a Manager, Data Science with expertise in NLP and Generative AI to lead the development of cutting-edge AI-driven solutions in healthcare. This role requires a deep understanding of healthcare data and the ability to design and implement advanced language models that extract insights, automate workflows, and enhance clinical decision-making. We're looking for a visionary leader who can define and build the next generation of AI-driven tools, leveraging LLMs, deep learning, and predictive analytics to personalize care based on patients' clinical and behavioral history. If you're passionate about pushing the boundaries of AI in healthcare, we'd love to hear from you!

A Day in the Life
- Team Leadership & Development: Build, mentor, and manage a team of data scientists and machine learning engineers. Foster a culture of collaboration, innovation, and technical excellence.
- Roadmap Execution: Define and execute the quarterly AI/ML roadmap, setting clear goals, priorities, and deliverables for the team.
- Work with business leaders and customers to understand their pain points and build large-scale solutions for them.
- Define technical architecture to productize Innovaccer's machine-learning algorithms and take them to market through partnerships with different organizations.
- Work with our data platform and applications teams to help them successfully integrate data science capabilities and algorithms into their products and workflows.
- Project & Stakeholder Management: Work closely with cross-functional teams, including product managers, engineers, and business leaders, to align AI/ML initiatives with company objectives.

What You Need
- Master's in Computer Science, Computer Engineering, or another relevant field (PhD preferred)
- 8+ years of experience in Data Science (healthcare experience is a plus)
- Strong experience with deep learning techniques for building NLP/computer vision models as well as state-of-the-art GenAI pipelines. Demonstrable experience deploying deep learning models in production at scale with iterative improvements; requires hands-on expertise with at least one deep learning framework such as PyTorch or TensorFlow.
- Strong hands-on experience building GenAI applications: LLM-based workflows along with optimization techniques (see the sketch after this listing); knowledge of implementing agentic workflows is a plus
- Keen interest in research; stays updated with key advancements in AI and ML in the industry. Patents/publications in any area of AI/ML are a great add-on
- Hands-on experience with at least one ML platform among Databricks, Azure ML, SageMaker
- Strong written and spoken communication skills

We offer competitive benefits to set you up for success in and outside of work.

Here's What We Offer
- Generous Leave Benefits: Enjoy generous leave benefits of up to 40 days.
- Parental Leave: Experience one of the industry's best parental leave policies to spend time with your new addition.
- Sabbatical Leave Policy: Want to focus on skill development, pursue an academic career, or just take a break? We've got you covered.
- Health Insurance: We offer health benefits and insurance to you and your family for medically related expenses related to illness, disease, or injury.
- Pet-Friendly Office*: Spend more time with your treasured friends, even when you're away from home. Bring your furry friends with you to the office and let your colleagues become their friends, too. *Noida office only
- Creche Facility for Children*: Say goodbye to worries and hello to a convenient and reliable creche facility that puts your child's well-being first. *India offices

Where and How We Work
Our Noida office is situated in a posh techspace, equipped with various amenities to support our work environment. Here, we follow a five-day work schedule, allowing us to efficiently carry out our tasks and collaborate effectively within our team.

Innovaccer is an equal-opportunity employer. We celebrate diversity, and we are committed to fostering an inclusive and diverse workplace where all employees, regardless of race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability, age, marital status, or veteran status, feel valued and empowered.

Disclaimer: Innovaccer does not charge fees or require payment from individuals or agencies for securing employment with us. We do not guarantee job spots or engage in any financial transactions related to employment. If you encounter any posts or requests asking for payment or personal information, we strongly advise you to report them immediately to our HR department at px@innovaccer.com. Additionally, please exercise caution and verify the authenticity of any requests before disclosing personal and confidential information, including bank account details.
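As a concrete illustration of the LLM-based workflows this role leads, here is a minimal sketch of an extraction step such a team might prototype. The model name and the clinical note are illustrative placeholders; a real healthcare workflow would add PHI safeguards, evaluation, and human review.

```python
# Minimal sketch of an LLM-based summarization step (assumed setup).
from transformers import pipeline

summarizer = pipeline("summarization", model="t5-small")  # placeholder model

note = (
    "Patient reports intermittent chest pain over two weeks. "
    "ECG unremarkable. Advised stress test and lipid panel follow-up."
)

summary = summarizer(note, max_length=32, min_length=8, do_sample=False)
print(summary[0]["summary_text"])
```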

Posted 2 days ago

Apply

6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Job Description
As a Sr Advanced Data Analyst, you will play a crucial role in providing expertise in delivering data-driven insights and analytics to optimize operational efficiency and improve decision-making within the organization. You will work with cross-functional teams and guide them to address business demand and develop thoughtful solutions. Your ability to translate complex data into actionable insights will be key in ensuring data integrity and accuracy at Advanced Materials. You will report directly to our Director of GenAI, Data, and Analytics, and you'll work out of our Bangalore location on a hybrid work schedule. In this role, you will impact the organization by leveraging your advanced data analysis skills to provide valuable insights and recommendations that drive business growth and improve operational efficiency.

Key Responsibilities
- Develop and implement data analytics strategies to drive continuous improvement.
- Build a data-driven culture, providing users with easy ways to access and consume data and enabling business users to understand our team's capabilities to enhance collaboration.
- Collaborate with stakeholders to assess current capabilities and needs, identify areas of opportunity, and propose analytics solutions that align with core strategy and operations.
- Ensure data integrity and accuracy through data analysis and validation.
- Translate complex data into actionable insights to facilitate decision-making.
- Utilize analytical and technical skills to translate and analyze business needs into requirements, leading the design and implementation of complex analytics and governance solutions.
- Design, develop, and deploy information products, supporting visualization and data accessibility in a user-friendly, customer-centric manner.
- Design and support data models optimized for analytical tools such as Tableau and Power BI, and create BI reports.
- Perform complex data analysis and generate insights from various data sources to support decision-making.
- Train users in extracting, interpreting, and applying insights from information products.
- Contribute to the development and execution of the organization's data strategy.

Qualifications

YOU MUST HAVE
- 6+ years of relevant experience in data visualization (Power BI as the main BI tool; Tableau, Looker, and Qlik are nice to have), data analysis, or related technical activities.
- Basic data normalization and data modeling techniques.
- Prototyping abilities in Excel, PowerPoint, or any other tool to gather and align requirements.
- A user-experience mindset, driving the design of visualizations and infographics to distill complex information.
- Intermediate to advanced Power BI DAX experience.
- Expertise in scripting and querying languages such as Python, SQL, and others for data consumption, manipulation, and advanced analytics.
- Proficiency in creating visualizations using Python or similar languages.
- Understanding of descriptive statistics and proficiency in hypothesis testing (see the sketch after this listing).
- Understanding of master and transactional data in SAP, SFDC (Salesforce), and associated IT technologies.
- Experience with data warehouse tools such as Snowflake, Databricks, or equivalent.
- Knowledge of Agile development methodology.
- Excellent communication skills (verbal, written, and presentation).
- Passion for data and its potential to drive business impact.
- Ability to work independently, free from direct oversight.

WE VALUE
- Bachelor's degree in a relevant field (e.g., Data Science, Analytics, Engineering)
- Strong leadership skills and the ability to effectively influence and coach others
- Proven track record of driving data-driven decision-making and delivering measurable business results
- Experience in advanced data analysis techniques (e.g., machine learning, predictive modeling)
- Experience with data governance and data quality initiatives
- Excellent communication, presentation, problem-solving, and interpersonal skills
- PL/SQL, T-SQL, NoSQL programming, and database objects
- Understanding of analytics trends such as predictive models and AI/ML
- High proficiency in Microsoft Excel; Microsoft Word and PowerPoint as an extra
- Strong business acumen with a data-mining mindset
- Experience with large-scale data analytics and governance initiatives
- Ability to collaborate and influence across different levels
- Excellent communication skills combined with the ability to navigate a highly matrixed organization
- Critical thinking and understanding of business processes, technology, systems, and tools
- Demonstrated initiative and resourcefulness; a self-starter able to work independently
- Demonstrated record of on-time and on-budget project and/or program management execution
- Proactive and detail-oriented, with high regard for quality and consistency of communication, and able to present concise plans and ideas at the executive level in a diverse work environment
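For the descriptive-statistics-plus-hypothesis-testing skill the posting asks for, here is a minimal sketch comparing two synthetic regional sales samples. All data is generated; the region names and numbers are placeholders.

```python
# Minimal sketch: descriptive statistics followed by a two-sample t-test.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
region_a = rng.normal(loc=105, scale=12, size=200)  # synthetic sales figures
region_b = rng.normal(loc=100, scale=12, size=200)

# Descriptive statistics first...
print(f"A: mean={region_a.mean():.1f}, std={region_a.std(ddof=1):.1f}")
print(f"B: mean={region_b.mean():.1f}, std={region_b.std(ddof=1):.1f}")

# ...then Welch's t-test (no equal-variance assumption).
t_stat, p_value = stats.ttest_ind(region_a, region_b, equal_var=False)
print(f"t={t_stat:.2f}, p={p_value:.4f}")  # small p suggests the means differ
```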

Posted 2 days ago

Apply

10.0 years

0 Lacs

India

Remote

This is a contract role from August 2025 to March 2026, fully remote in India.

Required Skills & Experience
- 10+ years overall Data Science experience
- 2+ years as a Lead Data Scientist communicating with stakeholders, participating in roadmap and design reviews, and guiding the team
- Proven experience delivering successful IBP demand planning forecasting models for enterprise-level organizations, i.e. end-to-end testing, data validation and data quality, model adoption, etc.
- Expertise in Python and PySpark
- Experience with various algorithms including, but not limited to: forecasting, econometrics, and curve-fitting methods; demand forecasting is most important (see the sketch after this listing)
- 3+ years in a development role focused on ML model implementation, experimentation, and related software engineering
- Experience working in the Azure ecosystem: ADF, ADLS, Databricks, Blob, etc.
- ADO board experience
- Strong SQL background
- Expertise in CI/CD
- Excellent problem-solving and analytical skills; strong communication and teamwork abilities; ability to work in a fast-paced and dynamic environment
- Excellent written and verbal communication
- Ability to work in a fast-paced, global environment
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field

Nice to Have Skills & Experience
- Industry experience in CPG, F&B, or FMCG

Job Description
A Fortune 50 client is looking for a Lead Data Scientist Engineer to join their team in support of their Integrated Business Planning (IBP) program for their European sector. As the Lead Data Scientist, you will be working on this client's large IBP program, helping them change their data hierarchy by decreasing their 13k demand forecasting models to 7-8k in an effort to create fewer but more efficient models. This role requires someone who understands IBP demand planning forecasting and can work with the Forecasting Analyst and business stakeholders to create the roadmap and guide the delivery of mapping the old existing data to the new data models. You will be hands-on in leading the efforts in end-to-end data testing, model adoption, data validation, and data quality checks, understanding the machine learning engine (model selection logic), guiding the team to delivery, and providing mentorship to other team members. We are looking for someone who can make business recommendations and create solutions for problems that arise. An ideal candidate will have a passion for solving problems and working with business stakeholders, and will be comfortable working in a smaller team within a global organization. This role allows candidates to be remote in India; however, you must be available to work either 10am-7pm IST or 11am-8pm IST.
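In the spirit of the demand-forecasting emphasis above, here is a minimal sketch using Holt-Winters exponential smoothing on a synthetic weekly demand series. Real IBP models would add hierarchy reconciliation, validation splits, and adoption metrics; everything here is illustrative.

```python
# Minimal demand-forecasting sketch (synthetic data, assumed setup).
import numpy as np
import pandas as pd
from statsmodels.tsa.holtwinters import ExponentialSmoothing

# Three years of weekly demand with trend plus yearly seasonality.
idx = pd.date_range("2022-01-02", periods=156, freq="W")
rng = np.random.default_rng(0)
demand = (500 + np.arange(156) * 0.8
          + 50 * np.sin(np.arange(156) * 2 * np.pi / 52)
          + rng.normal(0, 15, 156))
series = pd.Series(demand, index=idx)

model = ExponentialSmoothing(series, trend="add", seasonal="add",
                             seasonal_periods=52).fit()
forecast = model.forecast(13)  # next quarter, week by week
print(forecast.head())
```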

Posted 2 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what's next. Let's define tomorrow, together.

Description
United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Job Overview and Responsibilities
United Airlines is seeking talented people to join the Data Engineering Operations team. Key responsibilities include configuring and managing infrastructure, implementing continuous integration/continuous deployment (CI/CD) pipelines, and optimizing system performance. You will work to improve efficiency, enhance scalability, and ensure the reliability of systems through monitoring and proactive measures. Collaboration, scripting, and proficiency in tools for version control and automation are critical skills for success in this role. We are seeking creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights. Individuals who have a natural curiosity and desire to solve problems are encouraged to apply.

- Translate product strategy and requirements into suitable, maintainable and scalable solution designs according to existing architecture guardrails
- Collaborate with development and operations teams to understand project requirements and design effective DevOps solutions
- Implement and maintain CI/CD pipelines for automated software builds, testing, and deployment
- Manage and optimize cloud-based infrastructure to ensure scalability, security, and performance
- Implement and maintain monitoring and alerting systems for proactive issue resolution
- Work closely with cross-functional teams to troubleshoot and resolve infrastructure-related issues
- Automate repetitive tasks and processes to improve efficiency and reduce manual intervention

Key Responsibilities
- Design, deploy, and maintain cloud infrastructure on AWS.
- Set up and manage Kubernetes clusters for container orchestration.
- Design, implement, and manage scalable, secure, and highly available AWS infrastructure using Terraform.
- Develop and manage Infrastructure as Code (IaC) modules and reusable components.
- Collaborate with developers, architects, and other DevOps engineers to design cloud-native applications and deployment strategies.
- Manage and optimize CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, or similar.
- Manage and optimize the Databricks platform.
- Monitor infrastructure health and performance using AWS CloudWatch, Prometheus, Grafana, etc. (see the sketch after this listing).
- Ensure cloud security best practices, including IAM policies, VPC configurations, data encryption, and secrets management.
- Create and manage networking infrastructure such as VPCs, subnets, security groups, route tables, NAT gateways, etc.
- Handle deployment and configuration of services such as EC2, RDS, Glue, S3, ECS/EKS, Lambda, API Gateway, Kinesis, MWAA, DynamoDB, CloudFront, Route 53, SQS, SNS, Athena, ELB/ALB.
- Maintain logging, alerting, and monitoring systems to ensure reliability and performance.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd, a wholly owned subsidiary of United Airlines Inc.

Qualifications

What's needed to succeed (Minimum Qualifications):
- Bachelor's degree in Computer Science, Engineering, or a related field
- 5+ years of IT experience as a DevOps Engineer or in a similar role
- Experience with AWS infrastructure design, implementation, and support
- Proficiency in scripting languages (e.g., Bash, Python) and configuration management tools
- Experience with database systems like PostgreSQL, Redshift, MySQL
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
- Master's in Computer Science or a related STEM field
- Strong experience with continuous integration & delivery using Agile methodologies
- DevOps experience in the transportation/airline industry
- Knowledge of security best practices in a DevOps environment
- Experience with logging and monitoring tools (e.g., Dynatrace, Datadog)
- Strong problem-solving and communication skills
- Experience with Harness tools
- Experience with microservices architecture and serverless applications
- Knowledge of database technologies (PostgreSQL, Redshift, MySQL)
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer)
- Databricks Platform certifications
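For the proactive-monitoring responsibility referenced above, here is a minimal boto3 sketch that creates a CloudWatch alarm on EC2 CPU utilization. The instance ID, region, threshold, and SNS topic ARN are hypothetical placeholders.

```python
# Minimal sketch of proactive monitoring with boto3 (assumed resources).
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="ap-south-1")

cloudwatch.put_metric_alarm(
    AlarmName="high-cpu-data-ops",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                 # evaluate 5-minute averages
    EvaluationPeriods=3,        # three consecutive breaches before alarming
    Threshold=80.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:ap-south-1:123456789012:ops-alerts"],
)
```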

Posted 2 days ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary:
This role is accountable for running day-to-day operations of the Data Platform in Azure / AWS Databricks. The role involves designing and implementing data ingestion pipelines from multiple sources using Azure Databricks, ensuring seamless and efficient pipeline executions, and adhering to security, regulatory, and audit control guidelines.

Key Responsibilities:
● Design and implement data ingestion pipelines from multiple sources using Azure Databricks.
● Ensure data pipelines run smoothly and efficiently with minimal downtime.
● Develop scalable and reusable frameworks for ingesting large and complex datasets.
● Integrate end-to-end data pipelines, ensuring quality and consistency from source systems to target repositories.
● Work with event-based and streaming technologies to ingest and process data in real time (see the sketch after this listing).
● Collaborate with other project team members to deliver additional components such as API interfaces and search functionalities.
● Evaluate the performance and applicability of various tools against customer requirements and provide recommendations.
● Provide technical advice to the team and assist in issue resolution, leveraging strong Cloud and Databricks knowledge.
● Provide on-call, after-hours, and weekend support as needed to maintain platform stability.
● Fulfil service requests related to the Data Analytics platform efficiently.
● Lead and drive optimisation and continuous improvement initiatives within the team.
● Conduct technical reviews of changes as part of release management, acting as a gatekeeper for production deployments.
● Adhere to data security standards and implement required controls within the platform.
● Lead the design, development, and deployment of advanced data pipelines and analytical workflows on the Databricks Lakehouse platform.
● Collaborate with data scientists, engineers, and business stakeholders to build and scale end-to-end data solutions.
● Own architectural decisions to ensure alignment with data governance, security, and compliance requirements.
● Mentor and guide a team of data engineers, providing technical leadership and supporting career development.
● Implement CI/CD practices for data engineering pipelines using tools like Azure DevOps, GitHub Actions, or Jenkins.

Qualifications and Experience:
● Bachelor's degree in IT, Computer Science, Software Engineering, Business Analytics, or equivalent.
● Minimum of 7+ years of experience in the data analytics field.
● Proven experience with Azure/AWS Databricks in building and optimising data pipelines, architectures, and datasets.
● Strong expertise in Scala or Python, PySpark, and SQL for data engineering tasks.
● Ability to troubleshoot and optimize complex queries on the Spark platform.
● Knowledge of structured and unstructured data design, modelling, access, and storage techniques.
● Experience designing and deploying data applications on cloud platforms such as Azure or AWS.
● Hands-on experience in performance tuning and optimising code running in Databricks environments.
● Strong analytical and problem-solving skills, particularly within Big Data environments.
● Experience with Big Data management tools and technologies including Cloudera, Python, Hive, Scala, Data Warehouse, Data Lake, AWS, Azure.

Technical and Professional Skills:

Must Have:
● Excellent communication skills with the ability to interact directly with customers.
● Azure/AWS Databricks.
● Python / Scala / Spark / PySpark.
● Strong SQL and RDBMS expertise.
● HIVE / HBase / Impala / Parquet.
● Sqoop, Kafka, Flume.
● Airflow.
● Jenkins or Bamboo.
● GitHub or Bitbucket.
● Nexus.

Good to Have:
● Relevant accredited certifications for Azure, AWS, Cloud Engineering, and/or Databricks.
● Knowledge of Delta Live Tables (DLT).
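For the real-time ingestion pattern named above, here is a minimal Spark Structured Streaming sketch that reads from Kafka and writes to a Delta table. The broker address, topic, and storage paths are hypothetical placeholders.

```python
# Minimal sketch of Kafka-to-Delta streaming ingestion (assumed setup).
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")   # placeholder
         .option("subscribe", "events-topic")                # placeholder
         .option("startingOffsets", "latest")
         .load()
         .select(col("key").cast("string"),
                 col("value").cast("string"),
                 col("timestamp"))
)

(events.writeStream.format("delta")
       .option("checkpointLocation", "/mnt/checkpoints/events")  # recovery state
       .outputMode("append")
       .start("/mnt/delta/events"))
```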

Posted 2 days ago

Apply

5.0 years

0 Lacs

India

On-site

Requirements
- Bachelor's or Master's degree in Data Science/AI with work experience, or a PhD in a relevant area
- 5+ years of experience in data science and machine learning use cases, especially in business areas such as Sales, Marketing, and Customer Success (see the sketch after this listing)
- Hands-on experience operating in Jupyter notebooks, Databricks, Snowflake, or AWS SageMaker (at least one of them) is a must
- Strong experience in writing, analyzing, and troubleshooting SQL
- Independent thinkers and doers, not order takers
- Experience operationalizing data science models in production environments and CI/CD is a plus
- Excellent written and verbal communication and interpersonal skills; able to effectively collaborate with technical and business partners
- Should be able to work in Agile methodology, develop stories, and attend stand-ups
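To make the business-facing ML use cases concrete, here is a minimal sketch of a customer-churn propensity model of the kind a Customer Success team might use, trained on synthetic data. The feature names and label rule are illustrative placeholders.

```python
# Minimal churn-propensity sketch on synthetic data (assumed features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
X = np.column_stack([
    rng.integers(0, 24, n),   # months_active
    rng.poisson(3, n),        # support_tickets
    rng.normal(50, 20, n),    # monthly_spend
])
y = ((X[:, 1] > 4) | (X[:, 0] < 3)).astype(int)  # synthetic churn rule
flip = rng.random(n) < 0.1                       # add label noise
y = np.where(flip, 1 - y, y)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=7)
model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print(f"AUC: {roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]):.3f}")
```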

Posted 2 days ago

Apply

0.0 - 2.0 years

3 - 10 Lacs

Niranjanpur, Indore, Madhya Pradesh

Remote

Job Title - Sr. Data Engineer
Experience - 2+ Years
Location - Indore (onsite)
Industry - IT
Job Type - Full-time

Roles and Responsibilities
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch after this listing).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge
1. Core Skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including:
○ Entity-Relationship (ER) Diagrams
○ Dimensional Modeling
○ Data Normalization
● Familiarity with ETL processes and tools like:
○ Azure Data Factory (ADF)
○ SSIS (SQL Server Integration Services)
2. Cloud Expertise:
● AWS Services: Glue, Redshift, Lambda, EKS, RDS, Athena
● Azure Services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
● Snowflake
3. Big Data and Workflow Automation:
● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
● Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements
● Education:
○ Bachelor's degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience:
○ Freshers with a strong understanding, internships, and relevant academic projects are welcome.
○ 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other Skills:
○ Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders.
○ Ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
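For the Airflow orchestration pattern referenced above, here is a minimal DAG sketch with a daily extract-then-transform dependency. The DAG id and task bodies are illustrative placeholders.

```python
# Minimal Airflow sketch: a daily DAG with an extract -> transform dependency.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("pulling raw data from source systems")   # placeholder logic


def transform():
    print("validating and loading into the warehouse schema")  # placeholder


with DAG(
    dag_id="daily_ingestion",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)

    t_extract >> t_transform  # transform runs only after extract succeeds
```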

Posted 2 days ago

Apply

4.5 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Roles & Responsibilities

Key Responsibilities:
- Develop robust, scalable Python-based applications aligned with company requirements.
- Integrate and implement Generative AI models into business applications.
- Design, build, and maintain data pipelines and data engineering solutions on Azure.
- Collaborate closely with cross-functional teams (data scientists, product managers, data engineers, and cloud architects) to define, design, and deploy innovative AI and data solutions.
- Build, test, and optimize AI pipelines, ensuring seamless integration with Azure-based data systems.
- Continuously research and evaluate new AI and Azure data technologies and trends to enhance system capabilities.
- Participate actively in code reviews, troubleshooting, debugging, and documentation.
- Ensure high standards of code quality, performance, security, and reliability.

Required Skills
- Advanced proficiency in Python programming, including libraries and frameworks like Django, Flask, and FastAPI (see the sketch after this listing).
- Experience in Generative AI technologies (e.g., GPT models, LangChain, Hugging Face).
- Solid expertise in Azure Data Engineering tools such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage.
- Familiarity with AI/ML libraries such as TensorFlow, PyTorch, or the OpenAI API.
- Experience with RESTful APIs, microservices architecture, and web application development.
- Strong understanding of databases (SQL, NoSQL) and ETL processes.
- Good knowledge of containerization and orchestration technologies like Docker and Kubernetes.
- Strong problem-solving, analytical, and debugging skills.

Preferred Qualifications
- Bachelor's/Master's degree in Computer Science, Engineering, or related fields.
- Prior experience developing AI-enabled products or implementing AI into applications.
- Azure certifications (AZ-204, DP-203, AI-102) or equivalent.
- Exposure to DevOps practices and CI/CD pipelines, especially in Azure DevOps.

Soft Skills
- Strong communication and teamwork skills.
- Ability to work independently and proactively.
- Passion for continuous learning and professional growth.

Location: Gurgaon, Noida, Pune, Bengaluru, Kochi
Experience: 4.5-6 Years
Skills: Primary Skill: Data Engineering; Sub Skill(s): Data Engineering; Additional Skill(s): Python, Azure Data Factory, Python-Django

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
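As a small illustration of the Python-plus-GenAI stack this role works in, here is a minimal FastAPI sketch that wraps a generative call behind a REST endpoint. The generate() body is a stub; a real service would call an actual model client (OpenAI API, Hugging Face, etc.).

```python
# Minimal FastAPI sketch of a GenAI-backed endpoint (stubbed model call).
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()


class Prompt(BaseModel):
    text: str
    max_tokens: int = 64


@app.post("/generate")
async def generate(prompt: Prompt) -> dict:
    # Placeholder for a real model call (e.g., an LLM client SDK).
    completion = f"[stubbed completion for: {prompt.text[:40]}]"
    return {"completion": completion, "max_tokens": prompt.max_tokens}

# Run with: uvicorn app:app --reload  (assuming this file is app.py)
```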

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

Position Overview:
We are seeking a highly skilled Full Stack Developer to join our dynamic team within Global Trading at Client.

Required Skills and Qualifications:

Technical Proficiency:
- Expert front-end (React framework) and back-end (Python) experience (see the sketch after this listing).
- Proficient in front-end technologies such as HTML and CSS; strong back-end development skills in Python or similar languages.
- Proficient Git and CI/CD experience.
- Develop and maintain web applications using modern frameworks and technologies.
- Help maintain code quality, organization, and automation.
- Experience with relational database management systems.
- Familiarity with cloud services (AWS, Azure, or Google Cloud; primarily Azure).
- Understanding of market data, trading systems, and financial instruments related to oil and gas.

Preferred Qualifications:
- Certifications in relevant technologies or methodologies.
- Proven experience in building, operating, and supporting robust and performant databases and data pipelines.
- Experience with Databricks and Snowflake.
- Solid understanding of web performance optimization, security, and best practices.
- Experience supporting Power BI dashboards.
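For the back-end half of the stack named above, here is a minimal Flask sketch serving synthetic market quotes over REST. The in-memory quotes dict and benchmark symbols stand in for a real relational database or market-data feed.

```python
# Minimal back-end REST sketch with Flask (placeholder data).
from flask import Flask, abort, jsonify

app = Flask(__name__)

QUOTES = {"WTI": 78.42, "BRENT": 82.10}  # placeholder oil benchmarks


@app.route("/quotes/<symbol>")
def get_quote(symbol: str):
    price = QUOTES.get(symbol.upper())
    if price is None:
        abort(404, description=f"unknown symbol: {symbol}")
    return jsonify({"symbol": symbol.upper(), "price": price})


if __name__ == "__main__":
    app.run(debug=True)  # dev server only; use a WSGI server in production
```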

Posted 2 days ago

Apply

13.0 years

0 Lacs

Puducherry

On-site

Job Title: Data Engineer

Job Description:
We are looking for a highly experienced Data Engineer with over 13 years of expertise in data platforms, database migrations, and enterprise-level projects. The ideal candidate will have a proven track record in handling large-scale data engineering solutions and should possess strong communication skills to work effectively with clients and internal teams.

Key Responsibilities:
● Design, develop, and optimize scalable data pipelines and architectures
● Lead data migration projects across platforms and systems
● Work closely with stakeholders to gather requirements and deliver enterprise-grade data solutions
● Manage large datasets ensuring data quality, security, and performance
● Provide technical mentorship and guidance to junior engineers

Required Skills:
✔ Strong command of databases, SQL, and data warehousing concepts
✔ Hands-on experience with Snowflake or Databricks (either is mandatory)
✔ Proven track record in enterprise data projects and client interaction
✔ Expertise in ETL design, data modeling, and large-scale data processing
✔ Excellent communication skills

Job Type: Contractual / Temporary
Work Location: In person

Posted 2 days ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Lead Data Engineer QA
Rank – Manager
Location – Bengaluru/Chennai/Kerala/Kolkata

Objectives and Purpose
The Lead Data Engineer QA will be responsible for testing business intelligence and data warehouse solutions, both on-premises and in cloud platforms. We are seeking an innovative and talented individual who can create test plans, protocols, and procedures for new software. In addition, you will be supporting the build of large-scale data architectures that provide information to downstream systems and business users.

Your Key Responsibilities
- Design and execute manual and automated test cases, including validating alignment with ELT data integrity and compliance (see the sketch after this listing).
- Support QA test case design, including identifying opportunities for test automation and developing scripts for automated processes as needed.
- Follow quality standards, conduct continuous monitoring and improvement, and manage test cases, test data, and defect processes using a risk-based approach as needed.
- Ensure all software releases meet regulatory standards, including requirements for validation, documentation, and traceability, with particular emphasis on data privacy and adherence to infrastructure security best practices.
- Proactively foster strong partnerships across teams and stakeholders to ensure alignment with quality requirements and address any challenges.
- Implement observability within testing processes to proactively identify, track, and resolve quality issues, contributing to sustained high-quality performance.
- Establish a methodology to test the effectiveness of BI and DWH projects, ELT reports, integration, and manual and automated functionality.
- Work closely with the product team to monitor data quality, integrity, and security throughout the product lifecycle, implementing data quality checks to ensure accuracy, completeness, and consistency.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data environment.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Warehousing, or a related field
- 10+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Understanding of the project and test lifecycle, including exposure to CMMI and process improvement frameworks
- Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling, and developing and optimizing ETL pipelines
- Proven track record of designing and implementing complex data solutions
- Understanding of business intelligence concepts, ETL processing, dashboards, and analytics
- Testing experience in Data Quality, ETL, OLAP, or Reports
- Knowledge of data transformation projects, including database design concepts and white-box testing
- Experience in cloud-based data solutions – AWS/Azure
- Demonstrated understanding and experience using:
  ○ Cloud-based data solutions (AWS, IICS, Databricks)
  ○ GxP and regulatory and risk compliance
  ○ Cloud AWS infrastructure testing
  ○ Python data processing
  ○ SQL scripting
  ○ Test processes (e.g., ELT testing, SDLC)
  ○ Power BI/Tableau
  ○ Scripting (e.g., Perl and shell)
  ○ Data engineering programming languages (i.e., Python)
  ○ Distributed data technologies (e.g., PySpark)
  ○ Test management and defect management tools (e.g., HP ALM)
  ○ Cloud platform deployment and tools (e.g., Kubernetes)
  ○ DevOps and continuous integration
  ○ Databricks/ETL
- Understanding of database architecture and administration
- Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals
- Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
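As an illustration of the automated data-quality checks this role designs, here is a minimal pytest sketch over a pandas DataFrame. In practice the frame would be loaded from the warehouse under test; here it is synthetic, and the column names are placeholders.

```python
# Minimal ELT data-quality test sketch with pytest (synthetic data).
import pandas as pd
import pytest


@pytest.fixture
def orders():
    return pd.DataFrame({
        "order_id": [1, 2, 3],
        "amount": [120.0, 75.5, 310.25],
        "status": ["shipped", "pending", "shipped"],
    })


def test_no_null_keys(orders):
    assert orders["order_id"].notna().all()        # completeness


def test_unique_keys(orders):
    assert orders["order_id"].is_unique            # integrity


def test_amounts_positive(orders):
    assert (orders["amount"] > 0).all()            # validity


def test_status_domain(orders):
    assert set(orders["status"]) <= {"pending", "shipped", "cancelled"}  # consistency
```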

Posted 2 days ago

Apply

10.0 years

0 Lacs

Telangana

On-site

About Chubb
Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com.

About Chubb India
At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow. With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning.

About the Role
Reporting to the VP COG ECM Enterprise Forms Portfolio Delivery Manager, this role will be responsible for managing and supporting the implementation of a new document solution for identified applications within the CCM landscape in APAC. OpenText xPression and Duck Creek have been the corporate document generation tools of choice within Chubb, but xPression is going end of life and will be unsupported from 2025. A new Customer Communications Management (CCM) platform – Quadient Inspire – has been selected by a global working group to replace xPression, including migration of existing forms/templates from xPression where applicable. Apart from migrating from xPression, there are multiple existing applications to be replaced with Quadient Inspire. The role is based in Hyderabad, India, with some travel to other Chubb offices. Although there are no direct line management responsibilities within this role, the successful applicant will be responsible for task management of Business Analysts and an onshore/offshore development team. The role will require the ability to manage multiple project/enhancement streams with a variety of levels of technical/functional scope and across a number of different technologies.

In this role, you will:
- Lead the design and development of comprehensive data engineering frameworks and patterns.
- Establish engineering design standards and guidelines for the creation, usage, and maintenance of data across COG (Chubb Overseas General).
- Drive innovation and build highly scalable real-time data pipelines and data platforms to support business needs.
- Act as mentor and lead for a data engineering organization that is business-focused, proactive, and resilient.
- Promote data governance and master/reference data management as a strategic discipline.
- Implement strategies to monitor the effectiveness of data management.
- Be an engineering leader, coach data engineers, and be an active member of the data leadership team.
- Evaluate emerging data technologies and determine their business benefits and impact on the future-state data platform.
- Develop and promote a strong data management framework, emphasizing data quality, governance, and compliance with regulatory requirements.
- Collaborate with Data Modelers to create data models (conceptual, logical, and physical).
- Architect metadata management processes to ensure data lineage, data definitions, and ownership are well documented and understood.
- Collaborate closely with business leaders, IT teams, and external partners to understand data requirements and ensure alignment with strategic goals.
- Act as a primary point of contact for data engineering discussions and inquiries from various stakeholders.
- Lead the implementation of data architectures on cloud platforms (AWS, Azure, Google Cloud) to improve efficiency and scalability.

Qualifications
- Bachelor's degree in Computer Science, Information Systems, Data Engineering, or a related field; Master's degree preferred.
- Minimum of 10 years' experience in data architecture or data engineering roles, with a significant focus in P&C insurance domains preferred.
- Proven track record of successful implementation of data architecture within large-scale transformation programs or projects.
- Comprehensive knowledge of data modelling techniques and methodologies, including data normalization and denormalization practices.
- Hands-on expertise across a wide variety of database (Azure SQL, MongoDB, Cosmos), data transformation (Informatica IICS, Databricks), change data capture, and data streaming (Apache Kafka, Apache Flink) technologies.
- Proven expertise with data warehousing concepts, ETL processes, and data integration tools (e.g., Informatica, Databricks, Talend, Apache NiFi).
- Experience with cloud-based data architectures and platforms (e.g., AWS Redshift, Google BigQuery, Snowflake, Azure SQL Database).
- Expertise in ensuring data security patterns (e.g., tokenization, encryption, obfuscation); see the sketch after this listing.
- Knowledge of insurance policy operations, regulations, and compliance frameworks specific to Consumer lines.
- Familiarity with Agile methodologies and experience working in Agile project environments.
- Understanding of advanced analytics, AI, and machine learning concepts as they pertain to data architecture.

Why Chubb?
Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results.
- Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence.
- A Great Place to Work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026.
- Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. We constantly seek new and innovative ways to excel at work and deliver outstanding results.
- Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter.
- Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment.

Employee Benefits
Our company offers a comprehensive benefits package designed to support our employees' health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include:
- Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits, and car lease that help employees optimally plan their finances.
- Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs, and access to global learning programs.
- Health and welfare benefits: We care about our employees' well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits.

Application Process
Our recruitment process is designed to be transparent and inclusive.
Step 1: Submit your application via the Chubb Careers Portal.
Step 2: Engage with our recruitment team for an initial discussion.
Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable).
Step 4: Final interaction with Chubb leadership.

Join Us
With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India's journey.
Apply Now: Chubb External Careers
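Here is a minimal sketch of one data-security pattern the qualifications mention: deterministic pseudonymization (tokenization) of a policyholder ID with HMAC-SHA256. The secret key and field names are hypothetical; a production system would use a vault-managed key or a dedicated tokenization service.

```python
# Minimal tokenization/obfuscation sketch (placeholder key and fields).
import hashlib
import hmac

SECRET_KEY = b"replace-with-vault-managed-key"  # placeholder, never hard-code


def tokenize(value: str) -> str:
    """Return a stable, non-reversible token for a sensitive value."""
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()


record = {"policy_id": "POL-9912345", "holder_name": "A. Example"}
safe_record = {
    "policy_id": tokenize(record["policy_id"]),   # tokenized join key
    "holder_name": "***",                         # obfuscated free text
}
print(safe_record)
```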

Posted 2 days ago

Apply

10.0 years

0 Lacs

Telangana

On-site

About Chubb Chubb is a world leader in insurance. With operations in 54 countries and territories, Chubb provides commercial and personal property and casualty insurance, personal accident and supplemental health insurance, reinsurance and life insurance to a diverse group of clients. The company is defined by its extensive product and service offerings, broad distribution capabilities, exceptional financial strength and local operations globally. Parent company Chubb Limited is listed on the New York Stock Exchange (NYSE: CB) and is a component of the S&P 500 index. Chubb employs approximately 40,000 people worldwide. Additional information can be found at: www.chubb.com. About Chubb India At Chubb India, we are on an exciting journey of digital transformation driven by a commitment to engineering excellence and analytics. We are proud to share that we have been officially certified as a Great Place to Work® for the third consecutive year, a reflection of the culture at Chubb where we believe in fostering an environment where everyone can thrive, innovate, and grow With a team of over 2500 talented professionals, we encourage a start-up mindset that promotes collaboration, diverse perspectives, and a solution-driven attitude. We are dedicated to building expertise in engineering, analytics, and automation, empowering our teams to excel in a dynamic digital landscape. We offer an environment where you will be part of an organization that is dedicated to solving real-world challenges in the insurance industry. Together, we will work to shape the future through innovation and continuous learning. Reporting to the VP COG ECM enterprise Forms Portfolio Delivery Manager, this role will be responsible for managing and supporting Implementation of a new Document solution for identified applications with the CCM landscape, in APAC. OpenText xPression and Duckcreek has been the corporate document generation tool of choice within Chubb. But xPression going end of life and be unsupported from 2025. A new Customer Communications Management (CCM) platform – Quadient Inspire - has been selected to replace xPression by a global working group and implementation of this new tool (including migration of existing forms/templates from xPression where applicable). Apart from migrating from xPression, there are multiple existing applications to be replaced with Quadient Inspire The role is based in Hyderabad/India with some travel to other Chubb offices. Although there are no direct line management responsibilities within this role, the successful applicant will be responsible for task management of Business Analysts and an Onshore/Offshore development team. The role will require the ability to manage multiple project/enhancement streams with a variety of levels of technical/functional scope and across a number of different technologies. In this role, you will: Lead the design and development of comprehensive data engineering frameworks and patterns. Establish engineering design standards and guidelines for the creation, usage, and maintenance of data across COG (Chubb overseas general) Derive innovation and build highly scalable real-time data pipelines and data platforms to support the business needs. Act as mentor and lead for the data engineering organization that is business-focused, proactive, and resilient. Promote data governance and master/reference data management as a strategic discipline. Implement strategies to monitor the effectiveness of data management. 
Be an engineering leader and coach data engineers and be an active member of the data leadership team. Evaluate emerging data technologies and determine their business benefits and impact on the future-state data platform. Develop and promote a strong data management framework, emphasizing data quality, governance, and compliance with regulatory requirements Collaborate with Data Modelers to create data models (conceptual, logical, and physical) Architect meta-data management processes to ensure data lineage, data definitions, and ownership are well-documented and understood Collaborate closely with business leaders, IT teams, and external partners to understand data requirements and ensure alignment with strategic goals Act as a primary point of contact for data engineering discussions and inquiries from various stakeholders Lead the implementation of data architectures on cloud platforms (AWS, Azure, Google Cloud) to improve efficiency and scalability Qualifications Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field; Master’s degree preferred Minimum of 10 years’ experience in data architecture or data engineering roles, with a significant focus in P&C insurance domains preferred. Proven track record of successful implementation of data architecture within large-scale transformation programs or projects Comprehensive knowledge of data modelling techniques and methodologies, including data normalization and denormalization practices Hands on expertise across a wide variety of database (Azure SQL, MongoDB, Cosmos), data transformation (Informatica IICS, Databricks), change data capture and data streaming (Apache Kafka, Apache Flink) technologies Proven Expertise with data warehousing concepts, ETL processes, and data integration tools (e.g., Informatica, Databricks, Talend, Apache Nifi) Experience with cloud-based data architectures and platforms (e.g., AWS Redshift, Google BigQuery, Snowflake, Azure SQL Database) Expertise in ensuring data security patterns (e.g. tokenization, encryption, obfuscation) Knowledge of insurance policy operations, regulations, and compliance frameworks specific to Consumer lines Familiarity with Agile methodologies and experience working in Agile project environments Understanding of advanced analytics, AI, and machine learning concepts as they pertain to data architecture Why Chubb? Join Chubb to be part of a leading global insurance company! Our constant focus on employee experience along with a start-up-like culture empowers you to achieve impactful results. Industry leader: Chubb is a world leader in the insurance industry, powered by underwriting and engineering excellence A Great Place to work: Chubb India has been recognized as a Great Place to Work® for the years 2023-2024, 2024-2025 and 2025-2026 Laser focus on excellence: At Chubb we pride ourselves on our culture of greatness where excellence is a mindset and a way of being. 
We constantly seek new and innovative ways to excel at work and deliver outstanding results. Start-up culture: Embracing the spirit of a start-up, our focus on speed and agility enables us to respond swiftly to market requirements, while a culture of ownership empowers employees to drive results that matter. Growth and success: As we continue to grow, we are steadfast in our commitment to provide our employees with the best work experience, enabling them to advance their careers in a conducive environment. Employee Benefits Our company offers a comprehensive benefits package designed to support our employees’ health, well-being, and professional growth. Employees enjoy flexible work options, generous paid time off, and robust health coverage, including treatment for dental and vision-related requirements. We invest in the future of our employees through continuous learning opportunities and career advancement programs, while fostering a supportive and inclusive work environment. Our benefits include: Savings and investment plans: We provide specialized benefits like Corporate NPS (National Pension Scheme), Employee Stock Purchase Plan (ESPP), Long-Term Incentive Plan (LTIP), retiral benefits, and car lease that help employees optimally plan their finances. Upskilling and career growth opportunities: With a focus on continuous learning, we offer customized programs that support upskilling, like education reimbursement programs, certification programs, and access to global learning programs. Health and welfare benefits: We care about our employees’ well-being in and out of work and have benefits like an Employee Assistance Program (EAP), yearly free health campaigns, and comprehensive insurance benefits. Application Process Our recruitment process is designed to be transparent and inclusive. Step 1: Submit your application via the Chubb Careers Portal. Step 2: Engage with our recruitment team for an initial discussion. Step 3: Participate in HackerRank assessments/technical/functional interviews and assessments (if applicable). Step 4: Final interaction with Chubb leadership. Join Us With you, Chubb is better. Whether you are solving challenges on a global stage or creating innovative solutions for local markets, your contributions will help shape the future. If you value integrity, innovation, and inclusion, and are ready to make a difference, we invite you to be part of Chubb India’s journey. Apply Now: Chubb External Careers
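For a sense of the change-data-capture and streaming pattern this role calls for, the sketch below shows a minimal Kafka-to-Delta ingestion pipeline in PySpark. The broker address, topic, schema, and storage paths are all illustrative placeholders, and a Spark environment with the Kafka and Delta connectors (such as a Databricks cluster) is assumed.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("policy-cdc-ingest").getOrCreate()

# Hypothetical schema for a policy-change event
schema = StructType([
    StructField("policy_id", StringType()),
    StructField("change_type", StringType()),
    StructField("changed_at", TimestampType()),
])

# Read the CDC stream from Kafka (broker and topic names are placeholders)
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "policy-changes")
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Land the stream in a Delta table; the checkpoint enables exactly-once delivery
(
    events.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/checkpoints/policy_changes")
    .start("/mnt/delta/policy_changes")
)
```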

Posted 2 days ago

Apply

0 years

2 - 3 Lacs

Hyderābād

On-site

Job Description The Software Engineer is required to work on the Sequence Repository product for small and large molecules in the bioinformatics area. This is aligned strategically with a multi-year need. The Opportunity Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers. Responsibilities Design, develop, and maintain data ingestion workflows using DIWF or similar frameworks. Build and optimize ETL/ELT pipelines for data consolidation, normalization, indexing, and cleaning. Develop and manage data models and transformations using DBT. Work with large-scale data storage solutions including AWS S3 and data lakes. Implement and maintain SQL-based data warehouses and bioinformatics databases such as BioSQL. Develop and maintain RESTful APIs using Flask, including asynchronous API endpoints for scalable data access. Collaborate with HPC teams to integrate high-performance computing resources for compute-intensive bioinformatics tasks. Utilize Databricks and Apache Spark for distributed data processing and analytics. Ensure best practices in code quality, version control, testing, and deployment. Collaborate with cross-functional teams including data scientists, bioinformaticians, and cloud engineers. Requirements: Minimum bachelor's degree in Computer Science, Bioinformatics, Data Science, or a related STEM (science, technology, engineering, and mathematics) field. Proven experience in data engineering, including data ingestion, transformation, and pipeline development. Strong proficiency in Python, including experience with Biopython for bioinformatics workflows. Experience with Databricks and Apache Spark for big data processing. Hands-on experience with AWS services including EC2, S3, and EBS. Familiarity with High-Performance Computing (HPC) environments. Expertise in SQL and experience working with bioinformatics databases such as BioSQL. Experience with DBT for data modeling and transformation.
Strong skills in developing RESTful APIs using Flask, including asynchronous API design. Knowledge of data lake architectures and best practices. Familiarity with data consolidation, normalization, indexing, and cleaning techniques. Experience with version control systems (e.g., Git) and CI/CD pipelines. Excellent problem-solving skills and ability to work in a collaborative team environment. Who we are We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world. What we look for Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today. #HYDIT2025 Search Firm Representatives Please Read Carefully Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails. Employee Status: Regular. Flexible Work Arrangements: Hybrid. Relocation, VISA Sponsorship, Travel Requirements, Shift, Valid Driving License, and Hazardous Material(s): not specified. Required Skills: Data Engineering, Data Visualization, Design Applications, Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Designs, Systems Integration, Testing. Preferred Skills: not specified. Job Posting End Date: 08/29/2025. A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date. Requisition ID: R357001
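The asynchronous Flask requirement above might look like the following minimal sketch: a non-blocking endpoint for sequence lookup. The route, helper, and data-store call are invented for illustration, and Flask 2.x with the async extra (pip install "flask[async]") is assumed.

```python
import asyncio
from flask import Flask, jsonify

app = Flask(__name__)

async def fetch_sequence(seq_id: str) -> dict:
    # Placeholder for a non-blocking lookup against a sequence store (e.g., BioSQL)
    await asyncio.sleep(0)
    return {"id": seq_id, "alphabet": "protein"}

@app.route("/sequences/<seq_id>")
async def get_sequence(seq_id: str):
    # Flask 2.x runs async views on an event loop via asgiref
    record = await fetch_sequence(seq_id)
    return jsonify(record)

if __name__ == "__main__":
    app.run(debug=True)
```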

Posted 2 days ago

Apply

5.0 - 9.0 years

7 - 8 Lacs

Hyderābād

On-site

Join Amgen’s Mission of Serving Patients At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career. [Data Engineer] What you will do Let’s do this. Let’s change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes. Roles & Responsibilities: - Design, develop, and maintain data solutions for data generation, collection, and processing - Be a key team member that assists in design and development of the data pipeline - Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems - Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions - Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks - Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs - Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency - Implement data security and privacy measures to protect sensitive data - Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions - Collaborate and communicate effectively with product teams - Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions - Identify and resolve complex data-related challenges - Adhere to best practices for coding, testing, and designing reusable code/components - Explore new tools and technologies that will help to improve ETL platform performance - Participate in sprint planning meetings and provide estimations on technical implementation What we expect of you We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications and Experience: Master's or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience Functional Skills: Must-Have Skills: Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), workflow orchestration, and performance tuning on big data processing. Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools. Excellent problem-solving skills and the ability to work with large, complex datasets. Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA). Good-to-Have Skills: Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development. Strong understanding of data modeling, data warehousing, and data integration concepts. Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms. Soft Skills: Excellent critical-thinking and problem-solving skills. Strong communication and collaboration skills. Demonstrated awareness of how to function in a team setting. Demonstrated presentation skills. What you can expect of us As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
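As one illustration of the ETL-with-data-quality responsibilities listed above, the following PySpark sketch applies a simple transformation and fails fast when a required key is null. The paths and column names are hypothetical, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("etl-quality-gate").getOrCreate()

raw = spark.read.parquet("/mnt/raw/orders")  # placeholder input path

# Transform: derive a date column and deduplicate on the business key
curated = (
    raw.withColumn("order_date", F.to_date("order_ts"))
    .dropDuplicates(["order_id"])
)

# Quality gate: fail the job rather than publish rows with a null key
null_keys = curated.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    raise ValueError(f"Data quality check failed: {null_keys} rows with null order_id")

curated.write.mode("overwrite").parquet("/mnt/curated/orders")  # placeholder output
```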

Posted 2 days ago

Apply

15.0 years

0 Lacs

Hyderābād

On-site

Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Microsoft Azure Databricks Good-to-have skills: NA Minimum 3 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Continuously evaluate and improve data processes to optimize performance. Professional & Technical Skills: - Must-Have Skills: Proficiency in Microsoft Azure Databricks. - Good-to-Have Skills: Experience with data warehousing solutions. - Strong understanding of data modeling and database design principles. - Familiarity with data integration tools and techniques. - Experience in developing and maintaining data pipelines using various ETL tools. Additional Information: - The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks. - This position is based at our Pune office. - A 15 years full time education is required.
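The Azure Databricks proficiency this posting asks for typically includes Delta Lake upserts. Below is a minimal sketch of a Delta MERGE; the table paths and join key are assumptions for illustration, not details from the posting.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable  # bundled with the Databricks runtime

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/staging/customers")  # placeholder source

target = DeltaTable.forPath(spark, "/mnt/delta/customers")  # placeholder target

# Upsert: update rows that match on the key, insert the rest
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```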

Posted 2 days ago

Apply

8.0 years

0 Lacs

Delhi

Remote

Role: Data Engineer Experience: 8+ years Remote Skills: ADF, Azure Databricks, PySpark Budget: 1.1 LPM Note: Aadhaar, PAN, education documents, previous companies' experience letters, and a LinkedIn profile are needed for the BGV after the first technical round to proceed further. Job Type: Full-time Pay: ₹50,000.00 - ₹110,000.00 per month Schedule: Day shift Work Location: In person

Posted 2 days ago

Apply

15.0 years

0 Lacs

Bhubaneshwar

On-site

Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Microsoft Azure Databricks Good-to-have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Continuously evaluate and improve data processes to optimize performance. Professional & Technical Skills: - Must-Have Skills: Proficiency in Microsoft Azure Databricks. - Good-to-Have Skills: Experience with data warehousing solutions. - Strong understanding of data modeling and database design principles. - Familiarity with data integration tools and techniques. - Experience in developing and maintaining data pipelines using various ETL tools. Additional Information: - The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks. - This position is based at our Pune office. - A 15 years full time education is required.

Posted 2 days ago

Apply

15.0 years

0 Lacs

Indore

On-site

Project Role: Data Engineer Project Role Description: Design, develop and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform and load) processes to migrate and deploy data across systems. Must-have skills: Microsoft Azure Databricks Good-to-have skills: NA Minimum 5 year(s) of experience is required Educational Qualification: 15 years full time education Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems, contributing to the overall efficiency and reliability of data management within the organization. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Continuously evaluate and improve data processes to optimize performance. Professional & Technical Skills: - Must-Have Skills: Proficiency in Microsoft Azure Databricks. - Good-to-Have Skills: Experience with data warehousing solutions. - Strong understanding of data modeling and database design principles. - Familiarity with data integration tools and techniques. - Experience in developing and maintaining data pipelines using various ETL tools. Additional Information: - The candidate should have a minimum of 5 years of experience in Microsoft Azure Databricks. - This position is based at our Pune office. - A 15 years full time education is required.

Posted 2 days ago

Apply

12.0 years

0 Lacs

Noida

Remote

Principal Software Engineering Manager – Data Engineering Noida, Uttar Pradesh, India Date posted Jul 30, 2025 Job number 1851293 Work site Up to 50% work from home Travel 0-25% Role type People Manager Profession Software Engineering Discipline Software Engineering Employment type Full-Time Overview Microsoft is a company where passionate innovators come to collaborate, envision what can be, and take their careers to levels they cannot achieve anywhere else. This is a world of more possibilities, more innovation, more openness in a cloud-enabled world. The Business & Industry Copilots group is a rapidly growing organization that is responsible for the Microsoft Dynamics 365 suite of products, Power Apps, Power Automate, Dataverse, AI Builder, Microsoft Industry Solution, and more. Microsoft is considered one of the leaders in Software as a Service in the world of business applications, and this organization is at the heart of how business applications are designed and delivered. This is an exciting time to join our Customer Experience (CXP) group and work on something highly strategic to Microsoft. The goal of CXP Engineering is to build the next generation of our applications running on Dynamics 365, AI, Copilot, and several other Microsoft cloud services to drive AI transformation across Marketing, Sales, Services and Support organizations within Microsoft. We innovate quickly and collaborate closely with our partners and customers in an agile, high-energy environment. Leveraging the scalability and value from Azure & Power Platform, we ensure our solutions are robust and efficient. Our organization’s implementation acts as a reference architecture for large companies and helps drive product capabilities. If the opportunity to collaborate with a diverse engineering team on enabling end-to-end business scenarios using cutting-edge technologies, and to solve challenging problems for large-scale 24x7 business SaaS applications, excites you, please come and talk to us! We are hiring a passionate Principal SW Engineering Manager to lead a team of highly motivated and talented software developers building highly scalable data platforms and delivering services and experiences for empowering Microsoft’s customer, seller and partner ecosystem to be successful. This is a unique opportunity to use your leadership skills and experience in building core technologies that will directly affect the future of Microsoft on the cloud. In this position, you will be part of a fun-loving, diverse team that seeks challenges, loves learning and values teamwork. You will collaborate with team members and partners to build high-quality and innovative data platforms with full-stack data solutions using the latest technologies in a dynamic and agile environment, and have opportunities to anticipate future technical needs of the team and provide technical leadership to keep raising the bar for our competition. We use industry-standard technology: C#, JavaScript/TypeScript, HTML5, ETL/ELT, data warehousing, and/or Business Intelligence development. Qualifications Basic Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. 12+ years of experience building high-scale enterprise Business Intelligence and data engineering solutions. 3+ years of management experience leading a high-performance engineering team. Proficient in designing and developing distributed systems on cloud platforms.
Must be able to plan work and work to a plan, adapting as necessary in a rapidly evolving environment. Experience using a variety of data stores, including data ETL/ELT, warehouses, RDBMS, in-memory caches, and document databases. Experience using ML, anomaly detection, predictive analysis, and exploratory data analysis. A strong understanding of the value of data, data exploration, and the benefits of a data-driven organizational culture. Strong communication skills and proficiency with executive communications. Demonstrated ability to effectively lead and operate in a cross-functional global organization. Preferred Qualifications: Prior experience as an engineering site leader is a strong plus. Proven success in recruiting and scaling engineering organizations effectively. Demonstrated ability to provide technical leadership to teams, with experience managing large-scale data engineering projects. Hands-on experience working with large data sets using tools such as SQL, Databricks, PySpark SQL, Synapse, Azure Data Factory, or similar technologies. Expertise in one or more of the following areas: AI and Machine Learning. Experience with Business Intelligence or data visualization tools, particularly Power BI, is highly beneficial. #BICJobs Responsibilities As a leader of the engineering team, you will be responsible for the following: Build and lead a world-class data engineering team. Be passionate about technology and obsessed with customer needs. Champion data-driven decisions for feature identification, prioritization, and delivery. Manage multiple projects, including timelines, customer interaction, feature tradeoffs, etc. Deliver on an ambitious product and services roadmap, including building new services on top of the vast amounts of data collected by our batch and near-real-time data engines. Design and architect internet-scale and reliable services. Leverage machine learning (ML) model knowledge to select appropriate solutions for business objectives. Communicate effectively and build relationships with our partner teams and stakeholders. Help shape our long-term architecture and technology choices across the full client and services stack. Understand the talent needs of the team and help recruit new talent. Mentor and grow other engineers to bring in efficiency and better productivity. Experiment with and recommend new technologies that simplify or improve the tech stack. Work to help build an inclusive working environment. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work. - Industry-leading healthcare - Educational resources - Discounts on products and services - Savings and investments - Maternity and paternity leave - Generous time away - Giving programs - Opportunities to network and connect Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
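For the PySpark SQL style of large-dataset work mentioned in the preferred qualifications, a minimal sketch might look like the following; the table, columns, and output path are invented for illustration.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("telemetry-rollup").getOrCreate()

# Register a hypothetical events dataset as a temporary view
spark.read.parquet("/mnt/telemetry/events").createOrReplaceTempView("events")

# Roll up daily activity with plain SQL over the view
daily = spark.sql("""
    SELECT event_date,
           COUNT(*)                AS event_count,
           COUNT(DISTINCT user_id) AS active_users
    FROM events
    GROUP BY event_date
""")

daily.write.mode("overwrite").format("delta").save("/mnt/gold/daily_usage")
```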

Posted 2 days ago

Apply

10.0 years

0 Lacs

India

On-site

Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education. Data Engineer Locations: Kochi/Chennai/Coimbatore/Mumbai/Pune/Hyderabad Job Overview: We are seeking a highly skilled and experienced Senior Data Engineer to join our growing data team. The ideal candidate will have deep expertise in Azure Databricks and Python, and experience building scalable data pipelines. Familiarity with Data Fabric architectures is a plus. You’ll work closely with data scientists, analysts, and business stakeholders to deliver robust data solutions that drive insights and innovation. Key Responsibilities Design, build, and maintain large-scale, distributed data pipelines using Azure Databricks (PySpark) and Azure Data Factory. Develop and optimize data workflows and ETL processes in Azure Cloud environments. Write clean, maintainable, and efficient code in Python for data engineering tasks. Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions. Monitor and troubleshoot data pipelines for performance and reliability issues. Implement data quality checks and validations, and ensure data lineage and governance. Contribute to the design and implementation of a Data Fabric architecture (desirable). Required Qualifications Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. 5–10 years of experience in data engineering or related roles. Expertise in Azure Databricks, Delta Lake, and Spark. Strong proficiency in Python, especially in a data processing context. Experience with Azure Data Lake, Azure Data Factory, and related Azure services. Hands-on experience in building data ingestion and transformation pipelines. Familiarity with CI/CD pipelines and version control systems (e.g., Git). Good to Have Experience with or understanding of Data Fabric concepts (e.g., data virtualization, unified data access, metadata-driven architectures). Knowledge of modern data warehousing and lakehouse principles. Exposure to tools like Apache Airflow, dbt, or similar. Experience working in agile/scrum environments. DP-500 and DP-600 certifications. What We Offer Competitive salary and performance-based bonuses. Flexible work arrangements. Opportunities for continuous learning and career growth. A collaborative, inclusive, and innovative work culture. www.orioninc.com Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law. Candidate Privacy Policy Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, “Orion,” “we” or “us”) are committed to protecting your privacy.
This Candidate Privacy Policy (orioninc.com) (“Notice”) explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this Notice and our general Privacy Policy.
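For the Apache Airflow exposure listed under good-to-have, a minimal DAG sketch is shown below. The DAG id and task callables are hypothetical, and Airflow 2.4+ is assumed for the schedule parameter.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling from source")  # placeholder extract step

def transform():
    print("cleaning and validating")  # placeholder transform step

with DAG(
    dag_id="daily_ingest",            # hypothetical DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+ parameter name
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```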

Posted 2 days ago

Apply

7.0 - 10.0 years

2 - 9 Lacs

Noida

On-site

Posted On: 30 Jul 2025 Location: Noida, UP, India Company: Iris Software Why Join Us? Are you inspired to grow your career at one of India’s Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It’s happening right here at Iris Software. About Iris Software At Iris Software, our vision is to be our client’s most trusted technology partner, and the first choice for the industry’s top professionals to realize their full potential. With over 4,300 associates across India, the U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation. Working at Iris Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about “Being Your Best” – as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We’re a place where everyone can discover and be their best version. Job Description We need a Sr. Databricks Developer with 7 to 10 years of experience. Core skills: Databricks – Level: Advanced; SQL (MS SQL Server) – joins, SQL optimization, basic knowledge of stored procedures and functions; PySpark – Level: Advanced; Azure Delta Lake; Python – Basic. Mandatory Competencies Perks and Benefits for Irisians At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
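As a concrete example of the SQL and PySpark optimization skills this role emphasizes, the sketch below shows a common Spark join optimization: broadcasting a small dimension table to avoid a shuffle. Table and column names are invented for illustration.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimization").getOrCreate()

facts = spark.read.table("sales_facts")   # assumed large fact table
dims = spark.read.table("product_dim")    # assumed small dimension table

# Broadcast join: ship the small table to every executor to avoid a shuffle
joined = facts.join(broadcast(dims), on="product_id", how="inner")

joined.groupBy("category").count().show()
```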

Posted 2 days ago

Apply

0 years

0 Lacs

India

On-site

Description: Senior Full Stack Developer Position Overview: We are seeking a highly skilled Full Stack Developer to join our dynamic team. The ideal candidate will possess a robust understanding of both front-end and back-end development, with a strong emphasis on creating and maintaining scalable, high-performance applications. This role requires a professional who can seamlessly integrate into our team, contributing to the development of innovative solutions that drive our trading operations. To be eligible for this role, you must be able to demonstrate: • Strong communication and interpersonal skills • Ability to collaborate effectively with internal and external customers • Innovative and analytical thinking • Ability to manage workload under time pressure and changing priorities • Adaptability and willingness to learn new technologies and methodologies Required Skills and Qualifications: • Technical Proficiency: • Expert front-end (React framework) and back-end (Python) experience • Proficient in front-end technologies such as HTML and CSS, with strong back-end development skills in Python or similar languages • Proficient with Git and CI/CD • Develop and maintain web applications using modern frameworks and technologies • Help maintain code quality, organization, and automation • Experience with relational database management systems • Familiarity with cloud services (AWS, Azure, or Google Cloud – primarily Azure) • Industry Knowledge: • Experience in the oil and gas industry, particularly within trading operations, is highly desirable • Understanding of market data, trading systems, and financial instruments related to oil and gas Preferred Qualifications: • Certifications in relevant technologies or methodologies • Proven experience in building, operating, and supporting robust and performant databases and data pipelines • Experience with Databricks and Snowflake • Solid understanding of web performance optimization, security, and best practices

Posted 2 days ago

Apply

0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Mandatory: Proficiency in Python with experience in Databricks (PySpark) Good to Have: Hands-on experience with Apache Airflow. Working knowledge of PostgreSQL and MongoDB. Basic experience with cloud technologies like Azure, AWS, and Google Cloud.
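A minimal sketch tying the mandatory and good-to-have items together is shown below: PySpark reading from PostgreSQL over JDBC. Connection details are placeholders, and a cluster with the PostgreSQL JDBC driver available is assumed.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read a PostgreSQL table over JDBC; all connection details are placeholders
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:postgresql://db-host:5432/analytics")
    .option("dbtable", "public.orders")
    .option("user", "reader")
    .option("password", "change-me")
    .load()
)

orders.show(5)
```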

Posted 2 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
