6.0 - 9.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Open Location - Indore, Noida, Gurgaon, Bangalore, Hyderabad, Pune

Job Description
6-9 years of experience in data engineering, ETL/ELT processes, data warehousing, and data lake implementation with AWS or Azure services.
Hands-on experience designing and implementing solutions: creating and deploying jobs, orchestrating jobs/pipelines, and configuring infrastructure.
Expertise in designing and implementing PySpark and Spark SQL based solutions.
Design and implement data warehouses using Amazon Redshift, ensuring optimal performance and cost efficiency.
Good understanding of security, compliance, and governance standards.

Roles & Responsibilities
Design and implement robust, scalable data pipelines using AWS or Azure services.
Drive architectural decisions for data solutions on AWS, ensuring scalability, security, and cost-effectiveness.
Develop and deploy ETL/ELT processes from various data sources using Glue/Azure Data Factory, Lambda/Azure Functions, Step Functions/Azure Logic Apps/MWAA, S3, and Lake Formation.
Strong proficiency in PySpark, SQL, and Python.
Proficiency in SQL for data querying and manipulation.
Experience with data modelling, ETL processes, and data warehousing concepts.
Create and maintain documentation for data pipelines and processes, following best practices.
Knowledge of Spark optimization techniques, monitoring, and automation is a plus.
Participate in code reviews and ensure adherence to coding standards and best practices.
Understanding of data governance, compliance, and security best practices.
Strong problem-solving and troubleshooting skills.
Excellent communication and collaboration skills, with an understanding of stakeholder mapping.

Mandatory Skills - AWS or Azure Cloud, Python programming, SQL, Spark SQL, Hive, Spark optimization techniques, and PySpark.
Share resume at sonali.mangore@impetus.com with details (CTC, Expected CTC, Notice Period).
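Since the role centers on PySpark and Spark SQL pipelines, a minimal sketch of the kind of batch transform involved may help; the bucket paths, table name, and columns here are hypothetical.

from pyspark.sql import SparkSession

# Minimal batch ETL sketch: read raw orders, aggregate with Spark SQL,
# write partitioned Parquet. Paths and column names are illustrative.
spark = SparkSession.builder.appName("orders-daily-etl").getOrCreate()

raw = spark.read.parquet("s3://example-bucket/raw/orders/")  # hypothetical path
raw.createOrReplaceTempView("orders")

daily = spark.sql("""
    SELECT order_date,
           region,
           COUNT(*)    AS order_count,
           SUM(amount) AS total_amount
    FROM orders
    WHERE amount IS NOT NULL
    GROUP BY order_date, region
""")

# Partitioning by date lets downstream engines (e.g., Athena, Redshift Spectrum) prune files.
(daily.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3://example-bucket/curated/orders_daily/"))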
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are hiring for one of the IT product-based companies.

Job Title: Senior Data Engineer
Experience: 5+ years
Location: Gurgaon/Pune
Work Mode: Hybrid
Skills: Azure and Databricks
Programming Languages: Python, PowerShell; .NET/Java are a plus

What you will do
Participate in the design and development of highly performant, scalable, large-scale Data and Analytics products
Participate in requirements grooming, analysis, and design discussions with fellow developers, architects, and product analysts
Participate in product planning by providing estimates on user stories
Participate in daily standup meetings and proactively provide status on tasks
Develop high-quality code according to business and technical requirements as defined in user stories
Write unit tests that will improve the quality of your code
Review code for defects and validate implementation details against user stories
Work with quality assurance analysts who build test cases that validate your work
Demo your solutions to product owners and other stakeholders
Work with other Data and Analytics development teams to maintain consistency across the products by following standards and best software development practices
Provide third-tier support for our product suite

What you will bring
3+ years of Data Engineering and Analytics experience
2+ years of Azure and Databricks (or Apache Spark, Hadoop, and Hive) working experience
Knowledge and application of the following technical skills: T-SQL/PL-SQL, PySpark, Azure Data Factory, Databricks (or Apache Spark, Hadoop, and Hive), and Power BI or equivalent Business Intelligence tools
Understanding of dimensional modeling and Data Warehouse concepts
Programming skills such as Python, PowerShell, .NET/Java are a plus
Git repository experience and thorough understanding of branching and merging strategies
2 years' experience developing in an Agile Software Development Life Cycle and Scrum methodology
Strong planning and time management skills
Advanced problem-solving skills and a data-driven mindset
Excellent written and oral communication skills
Team player who fosters an environment of shared success, is passionate about always learning and improving, self-motivated, open-minded, and creative

What we would like to see
Bachelor's degree in computer science or related field
Healthcare knowledge is a plus
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Responsibilities:
✅ Build and optimize scalable data pipelines using Python, PySpark, and SQL.
✅ Design and develop on the AWS stack (S3, Glue, EMR, Athena, Redshift, Lambda).
✅ Leverage Databricks for data engineering workflows and orchestration.
✅ Implement ETL/ELT processes with strong data modeling (Star/Snowflake schemas).
✅ Work on job orchestration using Airflow, Databricks Jobs, or AWS Step Functions.
✅ Collaborate with agile, cross-functional teams to deliver reliable data solutions.
✅ Troubleshoot and optimize large-scale distributed data environments.

Must-Have:
✅ 4–6+ years in Data Engineering.
✅ Hands-on experience in Python, SQL, PySpark, and AWS services.
✅ Solid Databricks expertise.
✅ Experience with DevOps tools: Git, Jenkins, GitHub Actions.
✅ Understanding of data lake/lakehouse/warehouse architectures.

Good to Have:
✅ AWS/Databricks certifications.
✅ Experience with data observability tools (Monte Carlo, Datadog).
✅ Exposure to regulated domains like Healthcare or Finance.
✅ Familiarity with streaming tools (Kafka, Kinesis, Spark Streaming).
✅ Knowledge of modern data concepts (Data Mesh, Data Fabric).
✅ Experience with visualization tools: Power BI, Tableau, QuickSight.
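For the orchestration requirement, a minimal Airflow DAG sketch of a daily extract-then-transform sequence; the DAG id and task callables are hypothetical, and Airflow 2.4+ is assumed for the `schedule` parameter.

from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # Placeholder: pull source data (e.g., from an API or S3); implementation assumed.
    print("extracting")

def transform():
    # Placeholder: trigger the PySpark/Databricks transform step.
    print("transforming")

with DAG(
    dag_id="daily_orders_pipeline",   # hypothetical DAG id
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task  # extract must finish before transform runs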
Posted 1 week ago
6.0 years
0 Lacs
Delhi, India
On-site
Job Summary:
We are looking for a Tech Lead – Data Engineering with 6+ years of hands-on experience in designing and building robust data pipelines and architectures on the Azure cloud platform. The ideal candidate should have strong technical expertise in Azure Data Factory (ADF), Synapse Analytics, and Databricks, with solid coding skills in PySpark and SQL. Experience with Data Mesh architecture and Microsoft Fabric is highly preferred. You will play a key role in end-to-end solutioning, leading data engineering teams, and delivering scalable, high-performance data solutions.

Key Responsibilities:
· Lead and mentor a team of data engineers across projects and ensure high-quality delivery.
· Design, build, and optimize large-scale data pipelines and data integration workflows using ADF and Synapse Analytics.
· Architect and implement scalable data solutions on Azure cloud, including Databricks and Microsoft Fabric.
· Write efficient and maintainable code using PySpark and SQL for data transformations and processing.
· Collaborate with data architects, analysts, and business stakeholders to define data strategies and requirements.
· Implement and advocate for Data Mesh principles within the organization.
· Provide architectural guidance and perform solutioning for new and existing data projects on Azure.
· Ensure data quality, governance, and security best practices are followed.
· Stay updated with evolving Azure services and data technologies.

Required Skills & Experience:
· 6+ years of professional experience in data engineering and solution architecture.
· Expertise in Azure Data Factory (ADF) and Azure Synapse Analytics.
· Strong hands-on experience with Databricks, PySpark, and advanced SQL.
· Good knowledge of Microsoft Fabric and its use cases.
· Deep understanding of Azure cloud services related to data storage, processing, and integration.
· Familiarity with Data Mesh architecture and distributed data product ownership.
· Strong problem-solving and debugging skills.
· Excellent communication and stakeholder management abilities.

Good to Have:
· Experience with CI/CD pipelines for data solutions.
· Knowledge of data security and compliance practices on Azure.
· Certification in Azure Data Engineering or Solution Architecture.
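As an illustration of the PySpark/SQL transformation work on Databricks, a minimal incremental-upsert sketch using a Delta Lake MERGE; the table names, mount path, and join key are hypothetical, and Delta Lake is assumed as the storage layer.

from pyspark.sql import SparkSession

# Incremental upsert sketch on Databricks: merge a staged batch into a Delta table.
# Table names and join keys are illustrative.
spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/landing/customers_batch/")  # hypothetical mount path
updates.createOrReplaceTempView("customer_updates")

spark.sql("""
    MERGE INTO silver.customers AS target
    USING customer_updates AS source
    ON target.customer_id = source.customer_id
    WHEN MATCHED THEN UPDATE SET *
    WHEN NOT MATCHED THEN INSERT *
""")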
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are seeking a skilled and experienced Technical Lead – Data Observability to spearhead the development and enhancement of a large-scale data observability platform on AWS. This platform plays a mission-critical role in delivering real-time monitoring, reporting, and actionable insights across the client’s data ecosystem. The ideal candidate will have strong technical acumen in AWS data services, proven leadership experience in engineering teams, and a passion for building scalable, high-performance cloud-based data pipelines. You will work closely with the Programme Technical Lead / Architect to define the platform vision, technical priorities, and key success metrics.

Key Responsibilities:
Lead the design, development, and deployment of features for the data observability platform.
Mentor and guide junior engineers, fostering a culture of technical excellence and collaboration.
Collaborate with architects to align on roadmap, architecture, and KPIs for platform evolution.
Ensure code quality, performance, and scalability across data engineering solutions.
Conduct and participate in code reviews, architecture design discussions, and sprint planning.
Support operational readiness including performance tuning, alerting, and incident response.

Must-Have Skills & Experience (Non-Negotiable):
5+ years of hands-on experience in Data Engineering or Software Engineering.
3+ years in a technical lead/squad lead capacity.
Expertise in AWS Data Services: AWS Glue, AWS EMR, Amazon Kinesis, AWS Lambda, Amazon Athena, Amazon S3.
Strong programming skills in PySpark, Python, and SQL.
Proven experience in building and maintaining scalable, production-grade data pipelines on cloud platforms.

💡 Preferred / Nice-to-Have Skills:
Familiarity with Data Observability tools (e.g., Monte Carlo, Databand, Bigeye) is a plus.
Understanding of DevOps/CI-CD practices using Git, Jenkins, etc.
Knowledge of data quality frameworks, metadata management, and data lineage concepts.
Exposure to agile methodologies and tools like Jira, Confluence.
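One building block of such an observability platform is publishing pipeline health signals as metrics. Below is a minimal sketch that computes a row count in PySpark and emits it as a custom CloudWatch metric via boto3; the namespace, dimension, and dataset path are hypothetical choices.

import boto3
from pyspark.sql import SparkSession

# Observability sketch: compute a basic health signal (row count of today's load)
# and publish it as a custom CloudWatch metric. Namespace/dimensions are illustrative.
spark = SparkSession.builder.getOrCreate()
cloudwatch = boto3.client("cloudwatch")

row_count = spark.read.parquet("s3://example-bucket/curated/orders_daily/").count()

cloudwatch.put_metric_data(
    Namespace="DataObservability",  # hypothetical namespace
    MetricData=[{
        "MetricName": "RowCount",
        "Dimensions": [{"Name": "Dataset", "Value": "orders_daily"}],
        "Value": float(row_count),
        "Unit": "Count",
    }],
)
# A CloudWatch alarm on this metric can then flag empty or anomalous loads.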
Posted 1 week ago
6.0 years
0 Lacs
India
On-site
Responsibilities
Design and develop data pipelines and ETL processes.
Collaborate with data scientists and analysts to understand data needs.
Maintain and optimize data warehousing solutions.
Ensure data quality and integrity throughout the data lifecycle.
Develop and implement data validation and cleansing routines.
Work with large datasets from various sources.
Automate repetitive data tasks and processes.
Monitor data systems and troubleshoot issues as they arise.

Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience as a Data Engineer or similar role (minimum 6+ years’ experience as a Data Engineer).
Strong proficiency in Python and PySpark.
Excellent problem-solving abilities.
Strong communication skills to collaborate with team members and stakeholders.

Individual Contributor Technical Skills Required
Expert: Python, PySpark, and SQL/Snowflake
Data warehousing, Data pipeline design – Advanced Level
Data Quality, Data validation, Data cleansing – Advanced Level
Intermediate/Basic: Microsoft Fabric, ADF, Databricks, Master Data Management/Data Governance, Data Mesh, Data Lake/Lakehouse Architecture
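To illustrate the kind of validation and cleansing routine described, a minimal PySpark sketch that standardizes fields, drops duplicates, and quarantines invalid rows; the paths, columns, and validity rule are hypothetical.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

# Cleansing sketch: trim/normalize strings, dedupe, and split rows failing a
# simple validity rule into a quarantine path. Columns and rules are illustrative.
spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://example-bucket/raw/customers/")

cleaned = (df
    .withColumn("email", F.trim(F.lower(F.col("email"))))
    .dropDuplicates(["customer_id"]))

valid = cleaned.filter(F.col("email").rlike(r"^[^@\s]+@[^@\s]+\.[^@\s]+$"))
invalid = cleaned.subtract(valid)  # rows failing validation go to quarantine

valid.write.mode("overwrite").parquet("s3://example-bucket/clean/customers/")
invalid.write.mode("overwrite").parquet("s3://example-bucket/quarantine/customers/")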
Posted 1 week ago
4.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Hiring: Senior Data Engineers (Python + PySpark + GIS)
📍 Location: Hyderabad (5 Days Onsite)
Primary Skills: Strong in Python programming, PySpark queries, GIS (3 roles)
Secondary Skills: Palantir

Responsibilities
• Develop and enhance data-processing, orchestration, monitoring, and more by leveraging popular open-source software, AWS, and GitLab automation.
• Collaborate with product and technology teams to design and validate the capabilities of the data platform.
• Identify, design, and implement process improvements: automating manual processes, optimizing for usability, re-designing for greater scalability.
• Provide technical support and usage guidance to the users of our platform’s services.
• Drive the creation and refinement of metrics, monitoring, and alerting mechanisms to give us the visibility we need into our production services.

Qualifications
• Experience building and optimizing data pipelines in a distributed environment
• Experience supporting and working with cross-functional teams
• Proficiency working in a Linux environment
• 4+ years of advanced working knowledge of SQL, Python, and PySpark
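Given the Python + PySpark + GIS combination, here is a minimal sketch of a common geospatial pattern: a point-in-polygon check with Shapely applied as a PySpark UDF. The polygon coordinates and column names are hypothetical, and Shapely is assumed to be installed on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import BooleanType
from shapely.geometry import Point, Polygon

# GIS sketch: flag records whose coordinates fall inside a region of interest.
region = Polygon([(78.2, 17.2), (78.7, 17.2), (78.7, 17.6), (78.2, 17.6)])  # illustrative bounds

def in_region(lon, lat):
    if lon is None or lat is None:
        return False
    return region.contains(Point(lon, lat))

in_region_udf = F.udf(in_region, BooleanType())

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("s3://example-bucket/raw/events/")  # hypothetical path
flagged = df.withColumn("in_region", in_region_udf(F.col("longitude"), F.col("latitude")))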
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Role description
Job Title: Digital Technologist (DevOps)
Department: Information Technology
Location: Lower Parel, Mumbai

Who we are?
Axis Asset Management Company Ltd (Axis AMC), founded in 2009, is one of India’s largest and fastest-growing mutual funds. We proudly serve over 1.3 crore customers across 100+ cities with utmost humility. Our success is built on three founding principles:
• Long Term Wealth Creation
• Customer-Centric Approach
• Sustainable Relationships
Our investment philosophy emphasizes risk management and encourages partners and investors to move from transactional investing to fulfilling critical life goals. We offer a diverse range of investment solutions to help customers achieve financial independence and a happier tomorrow.

What will you Do?
As a DevOps Lead, you will play a pivotal role in driving the automation, scalability, and reliability of our development and deployment processes.

Key Responsibilities:
1. CI/CD Pipeline Development: Design, implement, and maintain robust CI/CD workflows using Jenkins, Azure Repos, Docker, and PySpark. Ensure seamless integration with AWS services such as Airflow and EKS.
2. Cloud & Infrastructure Management: Architect and manage scalable, fault-tolerant, and cost-effective cloud solutions using AWS services including EC2, RDS, EKS, DynamoDB, Secrets Manager, Control Tower, Transit Gateway, and VPC.
3. Security & Compliance: Implement security best practices across the DevOps lifecycle. Utilize tools like SonarQube, Checkmarx, Trivy, and AWS Inspector to ensure secure application deployments. Manage IAM roles, policies, and service control policies (SCPs).
4. Containerization & Orchestration: Lead container lifecycle management using Docker, Amazon ECS, EKS, and AWS Fargate. Implement orchestration strategies including blue-green deployments, Ingress controllers, and ArgoCD.
5. Frontend & Backend CI/CD: Build and manage CI/CD pipelines for frontend applications (Node.js, Angular, React) and backend microservices (Spring Boot) using tools like Maven and Nexus/Azure Artifacts.
6. Infrastructure as Code (IaC): Develop and maintain infrastructure using Terraform or AWS CloudFormation to support repeatable and scalable deployments.
7. Scripting & Automation: Write and maintain automation scripts in Python, Groovy, and Shell/Bash for deployment, monitoring, and system management tasks.
8. Version Control & Artifact Management: Manage source code and artifacts using Git, Azure Repos, Nexus, and Azure Artifacts.
9. Disaster Recovery & High Availability: Design and implement disaster recovery strategies, multi-AZ, and multi-region architectures to ensure business continuity.
10. Collaboration & Leadership: Work closely with development, QA, and operations teams to streamline workflows and mentor junior team members in DevOps best practices.
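As a small automation example in the spirit of items 3 and 7 above, a hedged Python sketch that fetches a database credential from AWS Secrets Manager inside a deployment script, so pipelines never hard-code secrets; the secret name and JSON key are hypothetical.

import json
import boto3

# Automation sketch: retrieve a credential from AWS Secrets Manager at deploy time.
# The secret name and its JSON structure are illustrative.
def get_db_password(secret_name: str = "prod/app/db") -> str:
    client = boto3.client("secretsmanager")
    response = client.get_secret_value(SecretId=secret_name)
    secret = json.loads(response["SecretString"])
    return secret["password"]

if __name__ == "__main__":
    # Typically invoked from a Jenkins stage or a Shell/Bash wrapper during deployment.
    password = get_db_password()
    print("retrieved credential of length", len(password))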
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Navi Mumbai, Maharashtra, India
On-site
About Holcim
Holcim is the leading partner for sustainable construction, creating value across the built environment from infrastructure and industry to buildings. We offer high-value end-to-end Building Materials and Building Solutions - from foundations and flooring to roofing and walling - powered by premium brands including ECOPlanet, ECOPact and ECOCycle®. More than 45,000 talented Holcim employees in 45 attractive markets - across Europe, Latin America and Asia, Middle East & Africa - are driven by our purpose to build progress for people and the planet, with sustainability and innovation at the core of everything we do.

About The Role
The Data Engineer will play an important role in enabling the business for data-driven operations and decision making in an Agile and product-centric IT environment.

Education / Qualification
BE / B.Tech from IIT or Tier I / II colleges
Certification in Cloud Platforms: AWS or GCP

Experience
Total experience of 4-8 years
Hands-on experience in Python coding is a must
Experience in data engineering which includes laudatory account
Hands-on experience in Big Data cloud platforms like AWS (Redshift, Glue, Lambda), Data Lakes, Data Warehouses, data integration, and data pipelines
Experience in SQL and writing code on the Spark engine using Python/PySpark
Experience in data pipeline and workflow management tools (such as Azkaban, Luigi, Airflow, etc.)

Key Personal Attributes
Business focused, Customer & Service minded
Strong Consultative and Management skills
Good Communication and Interpersonal skills
Posted 1 week ago
6.0 years
0 Lacs
India
On-site
Sr. Python Data Engineer

Responsibilities
Design and develop data pipelines and ETL processes.
Collaborate with data scientists and analysts to understand data needs.
Maintain and optimize data warehousing solutions.
Ensure data quality and integrity throughout the data lifecycle.
Develop and implement data validation and cleansing routines.
Work with large datasets from various sources.
Automate repetitive data tasks and processes.
Monitor data systems and troubleshoot issues as they arise.

Qualifications
Bachelor’s degree in Computer Science, Information Technology, or a related field.
Proven experience as a Data Engineer or similar role (minimum 6+ years’ experience as a Data Engineer).
Strong proficiency in Python and PySpark.
Excellent problem-solving abilities.
Strong communication skills to collaborate with team members and stakeholders.

Individual Contributor Technical Skills Required
Expert: Python, PySpark, and SQL/Snowflake
Data warehousing, Data pipeline design – Advanced Level
Data Quality, Data validation, Data cleansing – Advanced Level
Intermediate/Basic: Microsoft Fabric, ADF, Databricks, Master Data Management/Data Governance, Data Mesh, Data Lake/Lakehouse Architecture
Posted 1 week ago
5.0 years
0 Lacs
Greater Chennai Area
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: PySpark
Good to have skills: Apache Spark
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. You will be responsible for ensuring that the applications are developed according to the specified requirements and are aligned with the business goals. Your typical day will involve collaborating with the team to understand the application requirements, designing and developing the applications using PySpark, and configuring the applications to meet the business process needs. You will also be responsible for testing and debugging the applications to ensure their functionality and performance.

Roles & Responsibilities:
- Expected to be an SME; collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Design and build applications using PySpark.
- Configure applications to meet business process requirements.
- Collaborate with the team to understand application requirements.
- Test and debug applications to ensure functionality and performance.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in PySpark.
- Good To Have Skills: Experience with Apache Spark.
- Strong understanding of statistical analysis and machine learning algorithms.
- Experience with data visualization tools such as Tableau or Power BI.
- Hands-on experience implementing various machine learning algorithms such as linear regression, logistic regression, decision trees, and clustering algorithms.
- Solid grasp of data munging techniques, including data cleaning, transformation, and normalization to ensure data quality and integrity.

Additional Information:
- The candidate should have a minimum of 5 years of experience in PySpark.
- This position is based at our Chennai office.
- A 15 years full time education is required.
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Project Role: Application Developer
Project Role Description: Design, build and configure applications to meet business process and application requirements.
Must have skills: Databricks Unified Data Analytics Platform
Good to have skills: NA
Minimum 3 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary:
As an Application Developer, you will engage in the design, construction, and configuration of applications tailored to fulfill specific business processes and application requirements. Your typical day will involve collaborating with team members to understand project needs, developing innovative solutions, and ensuring that applications are optimized for performance and usability. You will also participate in testing and debugging processes to deliver high-quality applications that meet user expectations and business goals.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Required active participation/contribution in team discussions.
- Contribute to providing solutions to work-related problems.
- Assist in the documentation of application specifications and user guides.
- Collaborate with cross-functional teams to gather requirements and provide technical insights.

Professional & Technical Skills:
- At least 3+ years of overall experience
- Experience in building data solutions using Azure Databricks and PySpark
- Experience in building complex SQL queries; understanding of data warehousing concepts
- Experience with Azure DevOps CI/CD pipelines for automated deployment and release management
- Good to have: Experience with Snowflake data warehousing
- Excellent problem-solving and analytical skills
- Ability to work independently as well as collaboratively in a team environment
- Good to have: Experience building pipelines and Data Flows using Azure Data Factory
- Strong communication and interpersonal skills

Additional Information:
- The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- A 15 years full time education is required.
Posted 1 week ago
6.0 - 10.0 years
16 - 30 Lacs
Amritsar
Remote
Job Title: Senior Azure Data Engineer
Location: Remote
Experience Required: 5+ years

About the Role:
We are seeking a highly skilled Senior Azure Data Engineer to design and develop robust, scalable, and high-performance data pipelines using Azure technologies. The ideal candidate will have strong experience with modern data platforms and tools, including Azure Data Factory, Synapse, Databricks, and Data Lake, as well as expertise in SQL, Python, and CI/CD workflows.

Key Responsibilities:
Design and implement end-to-end data pipelines using Azure Data Factory, Azure Synapse Analytics, Azure Databricks, and Azure Data Lake Storage Gen2.
Ingest and integrate data from various sources such as SQL Server, APIs, blob storage, and on-premise systems, ensuring security and performance.
Develop and manage ETL/ELT workflows and orchestrations in a scalable, optimized manner.
Build and maintain data models, data marts, and data warehouse structures for analytics and reporting.
Write and optimize complex SQL queries, stored procedures, and Python scripts.
Ensure data quality, consistency, and integrity through validation frameworks and best practices.
Support and enhance CI/CD pipelines using Azure DevOps, Git, and ARM/Bicep templates.
Collaborate with data scientists, analysts, and business stakeholders to understand requirements and deliver impactful solutions.
Enforce data governance, security, and compliance policies, including use of Azure Key Vault and access controls.
Mentor junior data engineers, lead design discussions, and conduct code reviews.
Monitor and troubleshoot issues related to performance, cost, and scalability across data systems.

Required Skills & Experience:
6+ years of experience in data engineering or related fields.
3+ years of hands-on experience with Azure cloud services, specifically:
Azure Data Factory (ADF)
Azure Synapse Analytics (Dedicated and Serverless SQL Pools)
Azure Databricks (Spark preferred)
Azure Data Lake Storage Gen2 (ADLS)
Azure SQL / Managed Instance / Cosmos DB
Strong proficiency in SQL, PySpark, and Python.
Solid experience with CI/CD tools: Azure DevOps, Git, ARM/Bicep templates.
Experience with data warehousing, dimensional modeling, and medallion/lakehouse architecture.
In-depth knowledge of data security best practices, including encryption, identity management, and network configurations in Azure.
Expertise in performance tuning, data partitioning, and cost optimization.
Excellent communication, problem-solving, and stakeholder management skills.
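For the source-ingestion requirement (e.g., SQL Server into the lake), a minimal PySpark JDBC read sketch; the host, database, table, and credentials are hypothetical, and the Microsoft SQL Server JDBC driver is assumed to be available on the cluster.

from pyspark.sql import SparkSession

# Ingestion sketch: pull a table from SQL Server over JDBC and land it in the
# data lake as Parquet. Connection details and paths are illustrative.
spark = SparkSession.builder.getOrCreate()

orders = (spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://example-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")            # in practice, fetch from Azure Key Vault
    .option("password", "<from-key-vault>")
    .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
    .load())

(orders.write
    .mode("overwrite")
    .parquet("abfss://raw@examplelake.dfs.core.windows.net/sales/orders/"))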
Posted 1 week ago
5.0 - 6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
We are a technology-led healthcare solutions provider. We are driven by our purpose to enable healthcare organizations to be future-ready. We offer accelerated, global growth opportunities for talent that is bold, industrious and nimble. With Indegene, you gain a unique career experience that celebrates entrepreneurship and is guided by passion, innovation, collaboration and empathy. To explore exciting opportunities at the convergence of healthcare and technology, check out www.careers.indegene.com.

What if we told you that you can move to an exciting role in an entrepreneurial organization without the usual risks associated with it? We understand that you are looking for growth and variety in your career at this point and we would love for you to join us in our journey and grow with us. At Indegene, our roles come with the excitement you require at this stage of your career with the reliability you seek. We hire the best and trust them from day 1 to deliver global impact, handle teams and be responsible for the outcomes while our leaders support and mentor you. We are a profitable, rapidly growing global organization and are scouting for the best talent for this phase of growth. With us, you are at the intersection of two of the most exciting industries of healthcare and technology. We offer global opportunities with fast-track careers while you work with a team that is fueled by purpose. The combination of these will lead to a truly differentiated experience for you. If this excites you, then apply below.

Role: Senior Analyst - Data Science

Description:
We are looking for a results-driven and hands-on Lead Data Scientist / Analyst with 5-6 years of experience to lead analytical solutioning and model development in the pharmaceutical commercial analytics domain. The ideal candidate will play a central role in designing and deploying Decision Engine frameworks, implementing advanced analytics solutions, and mentoring junior team members.

Key Responsibilities
Partner with cross-functional teams and client stakeholders to gather business requirements and translate them into robust ML/analytical solutions.
Design and implement Decision Engine workflows to support Next Best Action (NBA) recommendations in omnichannel engagement strategies.
Analyze large and complex datasets across sources like APLD, sales, CRM, call plans, market share, patient claims, and segmentation data.
Perform ad hoc and deep-dive analyses to address critical business questions across commercial and medical teams.
Develop, validate, and maintain predictive models for use cases such as patient journey analytics, HCP targeting, sales forecasting, risk scoring, and marketing mix modeling.
Implement MLOps pipelines using Dataiku, Git, and AWS services to support scalable and repeatable deployment of analytics models.
Ensure data quality through systematic QC checks, test case creation, and validation frameworks.
Lead and mentor junior analysts and data scientists in coding best practices, feature engineering, model interpretability, and cloud-based workflows.
Stay up to date with industry trends, regulatory compliance, and emerging data science techniques relevant to life sciences analytics.

Must Have
5 years of hands-on experience in pharmaceutical commercial analytics, with exposure to cross-functional brand analytics, omnichannel measurement, and ML modeling.
At least 3 years of experience developing and deploying predictive models and ML pipelines in real-world settings.
Proven experience with data platforms such as Snowflake, Dataiku, AWS, and proficiency in PySpark, Python, and SQL.
Experience with MLOps practices, including version control, model monitoring, and automation.
Strong understanding of pharmaceutical data assets (e.g., APLD, DDD, NBRx, TRx, specialty pharmacy, CRM, digital engagement).
Proficiency in ML algorithms (e.g., XGBoost, Random Forest, SVM, Logistic Regression, Neural Networks, NLP).
Experience in key use cases: Next Best Action, Recommendation Engines, Attribution Models, Segmentation, Marketing ROI, Collaborative Filtering.
Hands-on expertise in building explainable ML models and using tools for model monitoring and retraining.
Familiarity with dashboarding tools like Tableau or PowerBI is a plus.
Strong communication and documentation skills to effectively convey findings to both technical and non-technical audiences.
Ability to work in a dynamic, fast-paced environment and deliver results under tight timelines.

EQUAL OPPORTUNITY
Indegene is proud to be an Equal Employment Employer and is committed to the culture of Inclusion and Diversity. We do not discriminate on the basis of race, religion, sex, colour, age, national origin, pregnancy, sexual orientation, physical ability, or any other characteristics. All employment decisions, from hiring to separation, will be based on business requirements, the candidate's merit and qualifications. We are an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, colour, religion, sex, national origin, gender identity, sexual orientation, disability status, protected veteran status, or any other characteristics.

Locations - Bangalore, KA, IN
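To make the modeling stack concrete, a minimal hedged sketch of a propensity-style classifier with XGBoost and scikit-learn; the features are synthetic stand-ins, not the actual pharma datasets named above.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score
from xgboost import XGBClassifier

# Propensity-model sketch on synthetic data: in practice X would be engineered
# features (e.g., call activity, Rx history) and y a response label.
rng = np.random.default_rng(42)
X = rng.normal(size=(1000, 10))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1)
model.fit(X_train, y_train)

probs = model.predict_proba(X_test)[:, 1]
print("holdout AUC:", round(roc_auc_score(y_test, probs), 3))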
Posted 1 week ago
8.0 - 13.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Job Description
What you will do
As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member that assists in design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Doctorate degree / Master's degree / Bachelor's degree and 8 to 13 years of Computer Science, IT or related field experience.

Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
Experienced with software engineering best practices, including but not limited to version control (Git, Subversion, etc.), CI/CD (Jenkins, Maven, etc.), automated unit testing, and DevOps
Proficient in SQL and Python for extracting, transforming, and analyzing complex datasets from relational data stores
Proficient in Python with strong experience in ETL tools such as Apache Spark and various data processing packages, supporting scalable data workflows and machine learning pipeline development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms
Knowledge of data visualization and analytics tools like Spotfire, Power BI

Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, SageMaker, cloud data platforms
Strong knowledge of Oracle / SQL Server, Stored Procedures, PL/SQL; knowledge of the Linux OS
Experience in implementing Retrieval-Augmented Generation (RAG) pipelines, integrating retrieval mechanisms with language models
Skilled in developing machine learning models using Python, with hands-on experience in deep learning frameworks including PyTorch and TensorFlow
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of vector databases, including implementation and optimization
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
Databricks Certification preferred
AWS Data Engineer/Architect

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
40.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
[Role Name: IS Architecture]
Job Posting Title: Data Architect
Workday Job Profile: Principal IS Architect
Department Name: Digital, Technology & Innovation
Role GCF: 06A

About Amgen
Amgen harnesses the best of biology and technology to fight the world’s toughest diseases, and make people’s lives easier, fuller and longer. We discover, develop, manufacture and deliver innovative medicines to help millions of patients. Amgen helped establish the biotechnology industry more than 40 years ago and remains on the cutting-edge of innovation, using technology and human genetic data to push beyond what’s known today.

About The Role
Role Description:
The role is responsible for developing and maintaining the data architecture of the Enterprise Data Fabric. Data architecture includes the activities required for data flow design, data modeling, physical data design, and query performance optimization. The Data Architect is a senior-level position responsible for developing business information models by studying the business, our data, and the industry. This role involves creating data models to realize a connected data ecosystem that empowers consumers. The Data Architect drives cross-functional data interoperability, enables efficient decision-making, and supports AI usage of foundational data. This role will manage a team of Data Modelers.

Roles & Responsibilities:
Provide oversight to data modeling team members
Develop and maintain conceptual, logical, and physical data models to support business needs
Establish and enforce data standards, governance policies, and best practices
Design and manage metadata structures to enhance information retrieval and usability
Maintain comprehensive documentation of the architecture, including principles, standards, and models
Evaluate and recommend technologies and tools that best fit the solution requirements
Evaluate emerging technologies and assess their potential impact
Drive continuous improvement in the architecture by identifying opportunities for innovation and efficiency

Basic Qualifications and Experience: [GCF Level 6A]
Doctorate degree and 8 years of experience in Computer Science, IT or related field OR
Master’s degree with 12 - 15 years of experience in Computer Science, IT or related field OR
Bachelor’s degree with 14 - 17 years of experience in Computer Science, IT or related field

Functional Skills:
Must-Have Skills:
Data Modeling: Expert in creating conceptual, logical, and physical data models to represent information structures. Ability to interview and communicate with business subject matter experts to develop data models that are useful for their analysis needs.
Metadata Management: Knowledge of metadata standards, taxonomies, and ontologies to ensure data consistency and quality.
Information Governance: Familiarity with policies and procedures for managing information assets, including security, privacy, and compliance.
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), and performance tuning on big data processing

Good-to-Have Skills:
Experience with Graph technologies such as Stardog, AllegroGraph, MarkLogic

Professional Certifications
Certifications in Databricks are desired

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated awareness of presentation skills

Shift Information:
This position requires you to work a later shift and may be assigned a second or third shift schedule. Candidates must be willing and able to work during evening or night shifts, as required based on business requirements.

EQUAL OPPORTUNITY STATEMENT
Amgen is an Equal Opportunity employer and will consider you without regard to your race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, or disability status. We will ensure that individuals with disabilities are provided with reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
The Sr Associate Software Engineer is responsible for designing, developing, and maintaining software applications and solutions that meet business needs, and for ensuring the availability and performance of critical systems and applications. This role involves working closely with product managers, designers, data engineers, and other engineers to create high-quality, scalable software solutions, automating operations, monitoring system health, and responding to incidents to minimize downtime.

Roles & Responsibilities:
Possess strong rapid prototyping skills and quickly translate concepts into working code
Contribute to both front-end and back-end development using cloud technology
Develop innovative solutions using generative AI technologies
Ensure code quality and adherence to best practices
Create and maintain documentation on software architecture, design, deployment, disaster recovery, and operations
Identify and resolve technical challenges effectively
Stay updated with the latest trends and advancements
Work closely with the product team, business team, and other stakeholders
Design, develop, and implement applications and modules, including custom reports, interfaces, and enhancements
Analyze and understand the functional and technical requirements of applications, solutions, and systems and translate them into software architecture and design specifications
Develop and execute unit tests, integration tests, and other testing strategies to ensure the quality of the software
Identify and resolve software bugs and performance issues
Work closely with cross-functional teams, including product management, design, and QA, to deliver high-quality software on time
Customize modules to meet specific business requirements
Work on integrating with other systems and platforms to ensure seamless data flow and functionality
Provide ongoing support and maintenance for applications, ensuring that they operate smoothly and efficiently

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT or related field experience.

Must-Have Skills:
Proficiency in Python/PySpark development, Flask/FastAPI, C#, ASP.NET, PostgreSQL, Oracle, Databricks, DevOps tools, CI/CD, and data ingestion. Candidates should be able to write clean, efficient, and maintainable code.
Knowledge of HTML, CSS, and JavaScript, along with popular front-end frameworks like React or Angular, is required to build interactive and responsive web applications
In-depth knowledge of data engineering concepts, ETL processes, and data architecture principles
Strong understanding of cloud computing principles, particularly within the AWS ecosystem
Strong understanding of software development methodologies, including Agile and Scrum
Experience with version control systems like Git
Hands-on experience with various cloud services; understanding of the pros and cons of various cloud services within well-architected cloud design principles
Strong problem-solving and analytical skills; ability to learn quickly; excellent communication and interpersonal skills
Experience with API integration, serverless, and microservices architecture
Experience with SQL/NoSQL databases and vector databases for large language models

Preferred Qualifications:
Strong understanding of cloud platforms (e.g., AWS, GCP, Azure) and containerization technologies (e.g., Docker, Kubernetes)
Experience with monitoring and logging tools (e.g., Prometheus, Grafana, Splunk)
Experience with data processing tools like Spark, or similar
Experience with SAP integration technologies

Soft Skills:
Excellent analytical and troubleshooting skills
Strong verbal and written communication skills
Ability to work effectively with global, virtual teams
High degree of initiative and self-motivation
Ability to manage multiple priorities successfully
Team-oriented, with a focus on achieving team goals
Strong presentation and public speaking skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
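Since the stack lists Flask/FastAPI for back-end services, a minimal FastAPI sketch of a health check plus one typed endpoint; the route names and payload model are hypothetical, and any real service would back the endpoint with an actual data store.

from fastapi import FastAPI
from pydantic import BaseModel

# Minimal FastAPI service sketch: a health check and one typed endpoint.
app = FastAPI(title="example-data-service")

class Dataset(BaseModel):
    name: str
    row_count: int

@app.get("/health")
def health() -> dict:
    return {"status": "ok"}

@app.get("/datasets/{name}", response_model=Dataset)
def get_dataset(name: str) -> Dataset:
    # In a real service this would query PostgreSQL/Databricks; stubbed here.
    return Dataset(name=name, row_count=0)

# Run locally with: uvicorn app:app --reload  (assuming this file is app.py)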
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas –Oncology, Inflammation, General Medicine, and Rare Disease– we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science based. If you have a passion for challenges and the opportunities that lay within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

What You Will Do
As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member that assists in design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines to meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help to improve ETL platform performance
Participate in sprint planning meetings and provide estimations on technical implementation

What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications:
Master's degree / Bachelor's degree and 5 to 9 years of Computer Science, IT or related field experience.

Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, SparkSQL), Snowflake, workflow orchestration, and performance tuning on big data processing
Proficiency in data analysis tools (e.g., SQL)
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms

Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, SageMaker, cloud data platforms
Strong knowledge of Oracle / SQL Server, Stored Procedures, PL/SQL; knowledge of the Linux OS
Knowledge of data visualization and analytics tools like Spotfire, Power BI
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)

Professional Certifications:
Databricks Certification preferred
AWS Data Engineer/Architect

Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills

What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join Amgen’s Mission of Serving Patients
At Amgen, if you feel like you’re part of something bigger, it’s because you are. Our shared mission—to serve patients living with serious illnesses—drives all that we do. Since 1980, we’ve helped pioneer the world of biotech in our fight against the world’s toughest diseases. With our focus on four therapeutic areas – Oncology, Inflammation, General Medicine, and Rare Disease – we reach millions of patients each year. As a member of the Amgen team, you’ll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you’ll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.
What You Will Do
As a Data Engineer, you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.
Design, develop, and maintain data solutions for data generation, collection, and processing
Be a key team member assisting in the design and development of the data pipeline
Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems
Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions
Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency
Implement data security and privacy measures to protect sensitive data
Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions
Collaborate and communicate effectively with product teams
Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions
Identify and resolve complex data-related challenges
Adhere to best practices for coding, testing, and designing reusable code/components
Explore new tools and technologies that will help improve ETL platform performance
Participate in sprint planning meetings and provide estimates on technical implementation
What We Expect Of You
We are all different, yet we all use our unique contributions to serve patients.
Basic Qualifications:
Master’s degree / Bachelor’s degree and 5 to 9 years of Computer Science, IT, or related field experience.
Must-Have Skills:
Hands-on experience with big data technologies and platforms, such as Databricks, Apache Spark (PySpark, Spark SQL), Snowflake, workflow orchestration, and performance tuning on big data processing (an illustrative sketch follows this posting)
Proficiency in data analysis tools (e.g., SQL)
Proficient in SQL for extracting, transforming, and analyzing complex datasets from relational data stores
Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development
Strong understanding of data modeling, data warehousing, and data integration concepts
Proven ability to optimize query performance on big data platforms
Preferred Qualifications:
Experience with software engineering best practices, including but not limited to version control, infrastructure-as-code, CI/CD, and automated testing
Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms
Strong knowledge of Oracle / SQL Server, stored procedures, and PL/SQL; knowledge of the Linux OS
Knowledge of data visualization and analytics tools such as Spotfire and Power BI
Strong understanding of data governance frameworks, tools, and best practices
Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA)
Professional Certifications:
Databricks certification preferred
AWS Data Engineer/Architect
Soft Skills:
Excellent critical-thinking and problem-solving skills
Strong communication and collaboration skills
Demonstrated awareness of how to function in a team setting
Demonstrated presentation skills
What You Can Expect Of Us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we’ll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards.
Apply now and make a lasting impact with the Amgen team. careers.amgen.com
As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed, and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
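For illustration, here is a minimal PySpark sketch of the kind of ETL work this posting describes: read raw data, standardize it, and write partitioned output. All paths and column names are hypothetical placeholders, not part of the posting.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, clean it, write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-example").getOrCreate()

# Extract: read raw data from a landing zone (hypothetical path)
raw = spark.read.option("header", True).csv("/data/landing/orders.csv")

# Transform: standardize types, drop bad rows, derive a partition column
clean = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropna(subset=["order_id", "order_ts"])
       .withColumn("order_date", F.to_date("order_ts"))
)

# Load: partition by date so downstream queries can prune files efficiently
(clean.repartition("order_date")
      .write.mode("overwrite")
      .partitionBy("order_date")
      .parquet("/data/curated/orders"))
```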
Posted 1 week ago
8.0 - 12.0 years
15 - 25 Lacs
Gurugram, Delhi / NCR
Work from Office
Skills Requirements:
Must have - Python, Pytest, SQL, ETL Automation, AWS, Data warehousing (an illustrative test sketch follows this posting)
Good to have - Java, Selenium, API Automation, Rest Assured, Postman
Proficient in automation testing tools such as Selenium or Appium
Knowledge of scripting languages such as Python or JavaScript
Experience with test automation frameworks and best practices
Familiarity with Agile testing methodologies
Must have excellent communication skills and be able to communicate complex technical information to non-technical stakeholders in a clear and concise manner.
Must understand the company's long-term vision and align with it.
Should be open to new ideas and be willing to learn and develop new skills.
Should also be able to work well under pressure and manage multiple tasks and priorities.
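As a hedged illustration of the ETL automation this posting asks for, here is a minimal pytest sketch that checks source/target row-count parity and key uniqueness. It uses an in-memory SQLite database as a stand-in for a real warehouse connection; all table and column names are hypothetical.

```python
# Minimal pytest sketch for ETL validation checks.
import sqlite3  # stand-in for a real warehouse connection in this sketch

import pytest


@pytest.fixture
def conn():
    c = sqlite3.connect(":memory:")
    c.executescript(
        """
        CREATE TABLE src (id INTEGER, amount REAL);
        CREATE TABLE tgt (id INTEGER, amount REAL);
        INSERT INTO src VALUES (1, 10.0), (2, 20.0);
        INSERT INTO tgt VALUES (1, 10.0), (2, 20.0);
        """
    )
    yield c
    c.close()


def test_row_count_parity(conn):
    # The load should move every source row to the target, no more, no less
    src = conn.execute("SELECT COUNT(*) FROM src").fetchone()[0]
    tgt = conn.execute("SELECT COUNT(*) FROM tgt").fetchone()[0]
    assert src == tgt, f"Row count mismatch: source={src}, target={tgt}"


def test_no_duplicate_keys(conn):
    # A correct load must not introduce duplicate business keys
    dupes = conn.execute(
        "SELECT id, COUNT(*) FROM tgt GROUP BY id HAVING COUNT(*) > 1"
    ).fetchall()
    assert not dupes, f"Duplicate keys in target: {dupes}"
```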
Posted 1 week ago
3.0 years
0 Lacs
India
Remote
We are seeking a skilled Sr. Azure Data Engineer with hands-on experience in modern data engineering tools and platforms within the Azure ecosystem. The ideal candidate will have a strong foundation in data integration, transformation, and migration, along with a passion for working on complex data migration projects.
Job Title: Sr. Azure Data Engineer
Location: Remote work
Work Timings: 2:00 PM – 11:00 PM IST
No. of Openings: 2
Please Note: This is a pure Azure-specific role. If your expertise is primarily in AWS or GCP, we kindly request that you do not apply.
Responsibilities:
Lead the migration of large-scale SQL workloads from on-premise environments to Azure, ensuring high data integrity, minimal downtime, and performance optimization (a minimal sketch of one such migration step follows this posting).
Design, develop, and manage end-to-end data pipelines using Azure Data Factory or Synapse Data Factory to orchestrate migration and ETL processes.
Build and administer scalable, secure Azure Data Lakes to store and manage structured and unstructured data during and post-migration.
Utilize Azure Databricks, Synapse Spark Pools, Python, and PySpark for advanced data transformation and processing.
Develop and fine-tune SQL/T-SQL scripts for data extraction, cleansing, transformation, and reporting in Azure SQL Database, SQL Managed Instances, and SQL Server.
Design and maintain ETL solutions using SQL Server Integration Services (SSIS), including reengineering SSIS packages for Azure compatibility.
Collaborate with cloud architects, DBAs, and application teams to assess existing workloads and define the best migration approach.
Continuously monitor and optimize data workflows for performance, reliability, and cost-effectiveness across Azure platforms.
Enforce best practices in data governance, security, and compliance throughout the migration lifecycle.
Required Skills and Qualifications:
3+ years of hands-on experience in data engineering, with a clear focus on SQL workload migration to Azure.
Deep expertise in: Azure Data Factory / Synapse Data Factory, Azure Data Lake, Azure Databricks / Synapse Spark Pools, Python and PySpark, SQL, and SSIS (design, development, and migration to Azure).
Proven track record of delivering complex data migration projects (on-prem to Azure, or cloud-to-cloud).
Experience re-platforming or re-engineering SSIS packages for Azure Data Factory or the Azure-SSIS Integration Runtime.
Microsoft Certified: Azure Data Engineer Associate or similar certification preferred.
Strong problem-solving skills, attention to detail, and ability to work in fast-paced environments.
Excellent communication skills with the ability to collaborate across teams and present migration strategies to stakeholders.
If you believe you are qualified and are looking forward to setting your career on a fast track, apply by submitting a few paragraphs explaining why you believe you are the right person for this role. To know more about Techolution, visit our website: www.techolution.com
About Techolution: Techolution is a next-gen AI consulting firm on track to become one of the most admired brands in the world for "AI done right".
Our purpose is to harness our expertise in novel technologies to deliver more profits for our enterprise clients while helping them deliver a better human experience for the communities they serve. At Techolution, we build custom AI solutions that produce revolutionary outcomes for enterprises worldwide. Specializing in "AI Done Right," we leverage our expertise and proprietary IP to transform operations and help achieve business goals efficiently. We are honored to have recently received the prestigious Inc 500 Best In Business award, a testament to our commitment to excellence. We were also awarded AI Solution Provider of the Year by The AI Summit 2023, were a Platinum sponsor at the Advantage DoD 2024 Symposium, and more! While we are big enough to be trusted by some of the greatest brands in the world, we are small enough to care about delivering meaningful ROI-generating innovation at a guaranteed price for each client that we serve. Our thought leader, Luv Tulsidas, wrote and published a book in collaboration with Forbes, "Failing Fast? Secrets to succeed fast with AI". Refer here for more details on the content - https://www.luvtulsidas.com/
Let's explore further! Uncover our unique AI accelerators with us:
1. Enterprise LLM Studio: Our no-code DIY AI studio for enterprises. Choose an LLM, connect it to your data, and create an expert-level agent in 20 minutes.
2. AppMod.AI: Modernizes ancient tech stacks quickly, achieving over 80% autonomy for major brands!
3. ComputerVision.AI: Offers customizable Computer Vision and Audio AI models, plus DIY tools and a Real-Time Co-Pilot for human-AI collaboration!
4. Robotics and Edge Device Fabrication: Provides comprehensive robotics, hardware fabrication, and AI-integrated edge design services.
5. RLEF AI Platform: Our proven Reinforcement Learning with Expert Feedback (RLEF) approach bridges Lab-Grade AI to Real-World AI.
Some videos you wanna watch!
Computer Vision demo at The AI Summit New York 2023
Life at Techolution
GoogleNext 2023
Ai4 - Artificial Intelligence Conferences 2023
WaWa - Solving Food Wastage
Saving lives - Brooklyn Hospital
Innovation Done Right on Google Cloud
Techolution featured on Worldwide Business with KathyIreland
Techolution presented by ION World's Greatest
Visit us @ www.techolution.com to learn more about our revolutionary core practices and how we enrich the human experience with technology.
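The migration work this posting centers on often boils down to steps like the one sketched below: lifting an on-premise SQL Server table into Azure Data Lake as Delta via PySpark on Databricks. Hostnames, credentials, table names, and paths are hypothetical placeholders, and the JDBC driver is assumed to be available on the cluster.

```python
# Minimal sketch of one SQL-to-Azure migration step using PySpark.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-to-adls").getOrCreate()

# Extract: read an on-prem SQL Server table over JDBC (hypothetical host/table)
src = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://onprem-host:1433;databaseName=sales")
    .option("dbtable", "dbo.orders")
    .option("user", "etl_user")
    .option("password", "<fetch-from-azure-key-vault>")  # never hardcode secrets
    .load()
)

# Load: land the table as Delta in the lake; ADF/Synapse can orchestrate this step
(src.write.format("delta")
    .mode("overwrite")
    .save("abfss://curated@mydatalake.dfs.core.windows.net/sales/orders"))
```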
Posted 1 week ago
6.0 - 7.0 years
15 - 17 Lacs
India
On-site
About The Opportunity
This role is within the fast-paced enterprise technology and data engineering sector, delivering high-impact solutions in cloud computing, big data, and advanced analytics. We design, build, and optimize robust data platforms powering AI, BI, and digital products for leading Fortune 500 clients across industries such as finance, retail, and healthcare. As a Senior Data Engineer, you will play a key role in shaping scalable, production-grade data solutions with modern cloud and data technologies.
Role & Responsibilities
Architect and Develop Data Pipelines: Design and implement end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Data Warehouse & Data Mart Design: Create scalable data warehouses/marts that empower self-service analytics and machine learning workloads.
Database Modeling & Optimization: Translate logical models into efficient physical schemas, ensuring optimal partitioning and performance management.
ETL/ELT Workflow Automation: Build, automate, and monitor robust data ingestion and transformation processes with best practices in reliability and observability.
Performance Tuning: Optimize Spark jobs and SQL queries through careful tuning of configurations, indexing strategies, and resource management (see the sketch after this posting).
Mentorship and Continuous Improvement: Provide production support, mentor team members, and champion best practices in data engineering and DevOps methodology.
Skills & Qualifications
Must-Have
6-7 years of hands-on experience building production-grade data platforms, including at least 3 years with Apache Spark/Databricks.
Expert proficiency in PySpark, Python, and advanced SQL with a record of performance-tuning distributed jobs.
Proven expertise in data modeling, data warehouse/mart design, and managing ETL/ELT pipelines using tools like Airflow or dbt.
Hands-on experience with major cloud platforms such as AWS or Azure, and familiarity with modern lakehouse/data-lake patterns.
Strong analytical, problem-solving, and mentoring skills with a DevOps mindset and commitment to code quality.
Preferred
Experience with AWS analytics services (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Exposure to streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Familiarity with ML feature stores, MLOps workflows, or data governance frameworks.
Relevant certifications (Databricks, AWS, Azure) or active contributions to open source projects.
Location: India | Employment Type: Full-time
Skills: agile methodologies, team leadership, performance tuning, SQL, ELT, Airflow, AWS, data modeling, Apache Spark, PySpark, data, Hadoop, Databricks, Python, dbt, big data technologies, ETL, Azure
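Since this role emphasizes Spark performance tuning, here is a minimal hedged sketch of one common pattern: broadcasting a small dimension table to avoid a shuffle-heavy join. Paths, table names, and columns are hypothetical.

```python
# Minimal Spark tuning sketch: broadcast join plus partitioned write.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("tuning-example").getOrCreate()

facts = spark.read.parquet("/lake/facts/sales")   # large fact table
dims = spark.read.parquet("/lake/dims/stores")    # small dimension table

# Broadcasting ships the small table to every executor, so the large table
# is never shuffled across the network for the join
joined = facts.join(F.broadcast(dims), on="store_id", how="left")

daily = (joined.groupBy("store_region", "sale_date")
               .agg(F.sum("amount").alias("revenue")))

# Partitioning the output lets downstream queries prune by date
daily.write.mode("overwrite").partitionBy("sale_date").parquet("/lake/marts/daily_revenue")
```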
Posted 1 week ago
7.0 years
15 - 17 Lacs
India
Remote
Note: This is a remote role with occasional office visits. Candidates from Mumbai or Pune will be preferred.
About The Company
A fast-growing enterprise technology consultancy operating at the intersection of cloud computing, big-data engineering, and advanced analytics. The team builds high-throughput, real-time data platforms that power AI, BI, and digital products for Fortune 500 clients across finance, retail, and healthcare. By combining Databricks Lakehouse architecture with modern DevOps practices, they unlock insight at petabyte scale while meeting stringent security and performance SLAs.
Role & Responsibilities
Architect end-to-end data pipelines (ingestion → transformation → consumption) using Databricks, Spark, and cloud object storage.
Design scalable data warehouses/marts that enable self-service analytics and ML workloads.
Translate logical data models into physical schemas; own database design, partitioning, and lifecycle management for cost-efficient performance.
Implement, automate, and monitor ETL/ELT workflows, ensuring reliability, observability, and robust error handling (a minimal orchestration sketch follows this posting).
Tune Spark jobs and SQL queries, optimizing cluster configurations and indexing strategies to achieve sub-second response times.
Provide production support and continuous improvement for existing data assets, championing best practices and mentoring peers.
Skills & Qualifications
Must-Have
6–7 years building production-grade data platforms, including 3+ years of hands-on Apache Spark/Databricks experience.
Expert proficiency in PySpark, Python, and advanced SQL, with a track record of performance-tuning distributed jobs.
Demonstrated ability to model data warehouses/marts and orchestrate ETL/ELT pipelines with tools such as Airflow or dbt.
Hands-on with at least one major cloud platform (AWS or Azure) and modern lakehouse/data-lake patterns.
Strong problem-solving skills, a DevOps mindset, and commitment to code quality; comfortable mentoring fellow engineers.
Preferred
Deep familiarity with the AWS analytics stack (Redshift, Glue, S3) or the broader Hadoop ecosystem.
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience building streaming pipelines (Kafka, Kinesis, Delta Live Tables) and real-time analytics solutions.
Exposure to ML feature stores, MLOps workflows, and data-governance/compliance frameworks.
Relevant professional certifications (Databricks, AWS, Azure) or notable open-source contributions.
Benefits & Culture Highlights
Remote-first and flexible hours with 25+ PTO days and comprehensive health cover.
Annual training budget and certification sponsorship (Databricks, AWS, Azure) to fuel continuous learning.
Inclusive, impact-focused culture where engineers shape the technical roadmap and mentor a vibrant data community.
Skills: data modeling, big data technologies, team leadership, AWS, data, SQL, agile methodologies, performance tuning, ELT, Airflow, Apache Spark, PySpark, Hadoop, Databricks, Python, dbt, ETL, Azure
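Both this and the previous posting call for ETL/ELT orchestration with tools such as Airflow. As a hedged illustration, here is a minimal Airflow 2.x DAG chaining extract, transform, and load tasks; the DAG id and task bodies are hypothetical placeholders (Airflow 2.4+ takes a `schedule` argument, older 2.x versions use `schedule_interval`).

```python
# Minimal Airflow sketch: a daily extract -> transform -> load pipeline.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    ...  # pull from source systems


def transform():
    ...  # clean and reshape the extracted data


def load():
    ...  # write to the warehouse/mart


with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```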
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.
Job Description
Job Title: Intermediate Data Developer – Azure ADF and Databricks
Experience Range: 5-7 Years
Location: Chennai, Hybrid
Employment Type: Full-Time
About UPS
UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation.
About UPS Supply Chain Symphony™
The UPS Supply Chain Symphony™ platform is a cloud-based solution that seamlessly integrates key supply chain components, including shipping, warehousing, and inventory management, into a unified platform. This solution empowers businesses by offering enhanced visibility, advanced analytics, and customizable dashboards to streamline global supply chain operations and decision-making.
About The Role
We are seeking an experienced Senior Data Developer to join our data engineering team responsible for building and maintaining complex data solutions using Azure Data Factory (ADF), Azure Databricks, and Cosmos DB. The role involves designing and developing scalable data pipelines, implementing data transformations, and ensuring high data quality and performance. The Senior Data Developer will work closely with data architects, testers, and analysts to deliver robust data solutions that support strategic business initiatives. The ideal candidate should possess deep expertise in big data technologies, data integration, and cloud-native data engineering solutions on Microsoft Azure. This role also involves coaching junior developers, conducting code reviews, and driving strategic improvements in data architecture and design patterns.
Key Responsibilities
Data Solution Design and Development:
Design and develop scalable and high-performance data pipelines using Azure Data Factory (ADF).
Implement data transformations and processing using Azure Databricks.
Develop and maintain NoSQL data models and queries in Cosmos DB.
Optimize data pipelines for performance, scalability, and cost efficiency (a minimal upsert sketch follows this posting).
Data Integration and Architecture:
Integrate structured and unstructured data from diverse data sources.
Collaborate with data architects to design end-to-end data flows and system integrations.
Implement data security, governance, and compliance standards.
Performance Tuning and Optimization:
Monitor and tune data pipelines and processing jobs for performance and cost efficiency.
Optimize data storage and retrieval strategies for Azure SQL and Cosmos DB.
Collaboration and Mentoring:
Collaborate with cross-functional teams including data testers, architects, and business analysts.
Conduct code reviews and provide constructive feedback to improve code quality.
Mentor junior developers, fostering best practices in data engineering and cloud development.
Primary Skills
Data Engineering: Azure Data Factory (ADF), Azure Databricks.
Cloud Platform: Microsoft Azure (Data Lake Storage, Cosmos DB).
Data Modeling: NoSQL data modeling, data warehousing concepts.
Performance Optimization: Data pipeline performance tuning and cost optimization.
Programming Languages: Python, SQL, PySpark.
Secondary Skills
DevOps and CI/CD: Azure DevOps, CI/CD pipeline design and automation.
Security and Compliance: Implementing data security and governance standards.
Agile Methodologies: Experience in Agile/Scrum environments.
Leadership and Mentoring: Strong communication and coaching skills for team collaboration.
Soft Skills
Strong problem-solving abilities and attention to detail.
Excellent communication skills, both verbal and written.
Effective time management and organizational capabilities.
Ability to work independently and within a collaborative team environment.
Strong interpersonal skills to engage with cross-functional teams.
Educational Qualifications
Bachelor's degree in Computer Science, Engineering, Information Technology, or a related field.
Relevant certifications in Azure and Data Engineering, such as:
Microsoft Certified: Azure Data Engineer Associate
Microsoft Certified: Azure Solutions Architect Expert
Databricks Certified Data Engineer Associate or Professional
About The Team
As a Senior Data Developer, you will be working with a dynamic, cross-functional team that includes developers, product managers, and other quality engineers. You will be a key player in the quality assurance process, helping shape testing strategies and ensuring the delivery of high-quality web applications.
Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
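As a hedged illustration of the incremental pipeline work described above, here is a minimal Databricks/PySpark sketch of an upsert into a curated table using the Delta Lake MERGE API. Paths and key columns are hypothetical placeholders.

```python
# Minimal Delta Lake upsert sketch: merge a staging batch into a curated table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("merge-example").getOrCreate()

# New or changed rows landed by an upstream ADF/Databricks job (hypothetical path)
updates = spark.read.format("delta").load("/lake/staging/customers")

target = DeltaTable.forPath(spark, "/lake/curated/customers")

# Upsert: update rows whose key already exists, insert the rest
(target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute())
```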
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at an organization that ranks among the world's 500 largest companies. Envision innovative opportunities, experience our rewarding culture, and work with talented teams that push you to grow every day. We know what it takes to lead UPS into the future: passionate people with a unique combination of skills. If you have the qualities, motivation, autonomy, or leadership to lead teams, there are roles suited to your aspirations and skills, today and tomorrow.
Job Description
Intermediate Azure Developer
We’re the obstacle overcomers, the problem get-arounders. From figuring it out to getting it done… our innovative culture demands “yes and how!” We are UPS. We are the United Problem Solvers.
About Applications Development At UPS Technology
Our technology teams use expertise in applications programming and database technologies to support enterprise infrastructure. They create and support application frameworks and tools. They support deployment of applications and services across a multi-tier environment that processes up to 38 million packages in a single day (4.7 billion annually). This team works closely with our customers to build innovative technologies that are customized to drive business goals and provide the ultimate customer experience. As a member of the applications development family, you will help UPS grow and provide valuable services across the globe.
About This Role
The Intermediate Azure Developer will analyze business requirements, translating those requirements into Azure-specific solutions using the Azure toolsets (out of the box, configuration, customization). He/she should have the following: experience in designing and building solutions using Azure declarative and programmatic approaches; knowledge of integrating Azure with Salesforce, on-premise legacy systems, and other cloud solutions; and experience with integration middleware and enterprise service buses. He/she should also have experience translating design requirements or agile user stories into Azure-specific solutions; consuming or sending messages in XML/JSON format to third parties using SOAP and REST APIs; and expertise in Azure PaaS service SDKs for .NET, .NET Core, and Web API, such as Storage, App Insights, Fluent API, Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Logic Apps, Service Bus & Message Queues, Azure Storage, Key Vaults, Application Insights, and Azure Jobs. He/she collaborates with teams and supports emerging technologies to ensure effective communication and achievement of objectives.
Additional Details
Will be working on a global deployment of Azure Platform Management to 40 countries and corresponding languages, 1,000 locations, and 25,000 users
Develop large-scale distributed software services and solutions using Azure technologies.
Develop best-in-class engineering services that are well-defined, modularized, secure, reliable, configurable, flexible, diagnosable, actively monitored, and reusable
Hands-on with the use of various Azure PaaS service SDKs for .NET, .NET Core, and Web API, such as Storage, App Insights, Fluent API, etc.
Hands-on experience with Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Logic Apps, Service Bus & Message Queues (see the messaging sketch after this posting), Azure Storage, Key Vaults and Application Insights, Azure Jobs, Databricks, Notebooks, PySpark scripting, etc.
Hands-on experience with Azure DevOps building CI/CD, Azure support, code management branching, etc.
Good knowledge of programming and querying SQL Server databases
Experience writing automated test cases and using automated testing frameworks (NUnit, etc.)
Ensure comprehensive test coverage to validate the functionality and performance of developed solutions
Performs tasks within planned durations and established deadlines.
Collaborates with teams to ensure effective communication in supporting the achievement of objectives.
Strong ability to debug and resolve issues/defects
Author technical approach and design documentation
Collaborate with the offshore team on design discussions and development items
Minimum Qualifications
Experience in designing and building solutions using Azure declarative and programmatic approaches.
Experience with integration middleware and enterprise service buses
Experience in consuming or sending messages in XML/JSON format to third parties using SOAP and REST APIs
Hands-on with the use of various Azure PaaS service SDKs for .NET, .NET Core, SQL, and Web API, such as Storage, App Insights, Fluent API, etc.
Preferably 6+ years of development experience
Minimum 4+ years of hands-on development experience with Azure App Services, Azure Serverless, Microservices on Azure, API Management, Event Hub, Function Apps, Web Jobs, Service Bus & Message Queues, Azure Storage, Key Vaults and Application Insights, Azure Jobs, Databricks, Notebooks, PySpark scripting, Runbooks, etc.
Experience with Azure DevOps building CI/CD, Azure support, code management branching, Jenkins, Kubernetes, etc.
Good knowledge of programming and querying SQL Server databases
Experience writing automated test cases and using automated testing frameworks (NUnit, etc.)
Experience with Agile development
Must be detail-oriented.
Self-motivated learner
Ability to collaborate with others.
Excellent written and verbal communication skills
Bachelor's degree and/or Master's degree in Computer Science or a related discipline, or the equivalent in education and work experience
Azure Certifications
Azure Fundamentals (mandatory)
Azure Administrator Associate (desired)
Azure Developer Associate (mandatory)
This position offers an exceptional opportunity to work for a Fortune 50 industry leader. If you are selected, you will join our dynamic technology team in making a difference to our business and customers. Do you think you have what it takes? Prove it! At UPS, ambition knows no time zone.
Basic Qualifications
If required and where permitted by applicable law, employees must be fully vaccinated for COVID-19 by their date of hire/placement to be considered for employment. Fully vaccinated means two weeks after receiving the second shot for Pfizer and Moderna, or two weeks after Johnson & Johnson.
Other Criteria
UPS is an equal opportunity employer. UPS does not discriminate on the basis of race/color/religion/sex/national origin/veteran/disability/age/sexual orientation/gender identity or any other characteristic protected by law.
Contract Type: Permanent
At UPS, equal opportunity, fair treatment, and an inclusive work environment are key values to which we are committed.
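This posting is .NET-centric, but as a language-neutral illustration of one integration surface it lists, here is a minimal Python sketch that sends a message to an Azure Service Bus queue with the azure-servicebus SDK. The connection string and queue name are hypothetical placeholders.

```python
# Minimal Azure Service Bus sketch: send one message to a queue.
from azure.servicebus import ServiceBusClient, ServiceBusMessage

CONN_STR = "<service-bus-connection-string>"  # fetch from Azure Key Vault in practice
QUEUE_NAME = "orders"  # hypothetical queue

with ServiceBusClient.from_connection_string(CONN_STR) as client:
    with client.get_queue_sender(QUEUE_NAME) as sender:
        # Once accepted, the message is durable; consumers process it asynchronously
        sender.send_messages(ServiceBusMessage('{"orderId": 123, "status": "created"}'))
```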
Posted 1 week ago