
6350 Airflow Jobs - Page 48

Set up a job alert
JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Key Responsibilities: · Azure Cloud & Databricks: o Design and build efficient data pipelines using Azure Databricks (PySpark). o Implement business logic for data transformation and enrichment at scale. o Manage and optimize Delta Lake storage solutions. · API Development: o Develop REST APIs using FastAPI to expose processed data. o Deploy APIs on Azure Functions for scalable and serverless data access. · Data Orchestration & ETL: o Develop and manage Airflow DAGs to orchestrate ETL processes. o Ingest and process data from various internal and external sources on a scheduled basis. · Database Management: o Handle data storage and access using PostgreSQL and MongoDB. o Write optimized SQL queries to support downstream applications and analytics. · Collaboration: o Work cross-functionally with teams to deliver reliable, high-performance data solutions. o Follow best practices in code quality, version control, and documentation. Required Skills & Experience: · 5+ years of hands-on experience as a Data Engineer. · Strong experience with Azure Cloud services. · Proficient in Azure Databricks, PySpark, and Delta Lake. · Solid experience with Python and FastAPI for API development. · Experience with Azure Functions for serverless API deployments. · Skilled in managing ETL pipelines using Apache Airflow. · Hands-on experience with PostgreSQL and MongoDB. · Strong SQL skills and experience handling large datasets.
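
As an illustration of the orchestration work described above, here is a minimal Airflow DAG sketch in the spirit of this posting; the DAG id, schedule, and task bodies are hypothetical placeholders, not details taken from the employer.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Placeholder: pull a daily batch from an internal or external source.
    return "raw_batch_path"


def transform(**context):
    # Placeholder: apply business logic; in practice this step might trigger a
    # Databricks/PySpark job rather than transform data in-process.
    pass


with DAG(
    dag_id="daily_ingest_example",          # hypothetical name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                      # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    extract_task >> transform_task
```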

Posted 3 weeks ago

Apply

8.0 - 12.0 years

15 - 27 Lacs

Pune, Bengaluru

Hybrid

Role & responsibilities: Job Description - Snowflake Senior Developer. Experience: 8+ years, Hybrid. Employment Type: Full-time. Job Summary: We are seeking a skilled Snowflake Developer with 8+ years of experience in designing, developing, and optimizing Snowflake data solutions. The ideal candidate will have strong expertise in Snowflake SQL, ETL/ELT pipelines, and cloud data integration. This role involves building scalable data warehouses, implementing efficient data models, and ensuring high-performance data processing in Snowflake. Key Responsibilities: 1. Snowflake Development & Optimization: Design and develop Snowflake databases, schemas, tables, and views following best practices. Write complex SQL queries, stored procedures, and UDFs for data transformation. Optimize query performance using clustering, partitioning, and materialized views. Implement Snowflake features (Time Travel, Zero-Copy Cloning, Streams & Tasks). 2. Data Pipeline Development: Build and maintain ETL/ELT pipelines using Snowflake, Snowpark, Python, or Spark. Integrate Snowflake with cloud storage (S3, Blob) and data ingestion tools (Snowpipe). Develop CDC (Change Data Capture) and real-time data processing solutions. 3. Data Modeling & Warehousing: Design star schema, snowflake schema, and data vault models in Snowflake. Implement data sharing, secure views, and dynamic data masking. Ensure data quality, consistency, and governance across Snowflake environments. 4. Performance Tuning & Troubleshooting: Monitor and optimize Snowflake warehouse performance (scaling, caching, resource usage). Troubleshoot data pipeline failures, latency issues, and query bottlenecks. Work with DevOps teams to automate deployments and CI/CD pipelines. 5. Collaboration & Documentation: Work closely with data analysts, BI teams, and business stakeholders to deliver data solutions. Document data flows, architecture, and technical specifications. Mentor junior developers on Snowflake best practices. Required Skills & Qualifications: 8+ years in database development, data warehousing, or ETL. 4+ years of hands-on Snowflake development experience. Strong SQL or Python skills for data processing. Experience with Snowflake utilities (SnowSQL, Snowsight, Snowpark). Knowledge of cloud platforms (AWS/Azure) and data integration tools (Coalesce, Airflow, DBT). Certifications: SnowPro Core Certification (preferred). Preferred Skills: Familiarity with data governance and metadata management. Familiarity with DBT, Airflow, SSIS & IICS. Knowledge of CI/CD pipelines (Azure DevOps). If interested, kindly share your updated CV at Himanshu.mehra@thehrsolutions.in
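
For readers less familiar with Snowflake development, the sketch below shows a minimal query pattern (including a Time Travel clause) via the Snowflake Python connector; the connection parameters and table name are placeholders, not details from this role.

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="my_account",
    user="my_user",
    password="my_password",
    warehouse="COMPUTE_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    # Time Travel: read a (hypothetical) table as it looked one hour ago.
    cur.execute("SELECT COUNT(*) FROM orders AT(OFFSET => -3600)")
    print(cur.fetchone())
finally:
    conn.close()
```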

Posted 3 weeks ago

Apply

8.0 - 13.0 years

32 - 35 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Lead AWS Data Engineer with team-handling experience. Skills: AWS, Python, SQL, Spark, Airflow, Athena, API integration. Notice period: immediate to 15 days. Location: Bangalore/Hyderabad/Coimbatore & Chennai

Posted 3 weeks ago

Apply

4.0 - 9.0 years

11 - 17 Lacs

Bengaluru

Work from Office

Greetings from TSIT Digital! This is regarding an excellent opportunity with us: if you have that unique and unlimited passion for building world-class enterprise software products that turn into actionable intelligence, then we have the right opportunity for you and your career. This is an opportunity for permanent employment with TSIT Digital. What are we looking for: Data Engineer. Experience: 4+ years (relevant experience 2-5 years). Location: Bangalore. Notice period: Immediate to 15 days. Job Description: Work location: Manyata Tech Park, Bengaluru, Karnataka, India. Work mode: Hybrid model. Client: Lowes. Mandatory Skills: Data Engineer with Scala/Python, SQL, scripting; knowledge of BigQuery, Pyspark, Airflow, Serverless Cloud Native Services, Kafka Streaming. If you are interested, please share your updated CV at kousalya.v@tsit.co.in

Posted 3 weeks ago

Apply

15.0 years

0 Lacs

Pune, Maharashtra, India

On-site

HCL Software (hcl-software.com) delivers software that fulfils the transformative needs of clients around the world. We build award-winning software across AI, Automation, Data & Analytics, Security and Cloud. The HCL Unica+ Marketing Platform enables our customers to deliver precision and high-performance marketing campaigns across multiple channels like Social Media, AdTech Platforms, Mobile Applications, Websites, etc. The Unica+ Marketing Platform is a Data and AI first platform that enables our clients to deliver hyper-personalized offers and messages for customer acquisition, product awareness and retention. We are seeking a Senior Architect Developer with strong Data Science and Machine Learning skills and experience to deliver AI-driven marketing campaigns. Responsibilities Designing and Architecting End-to-End AI/ML Solutions for Marketing: The architect is responsible for designing robust, scalable, and secure AI/ML solutions specifically tailored for marketing challenges. This includes defining data pipelines, selecting appropriate machine learning algorithms and frameworks (e.g., for predictive analytics, customer segmentation, personalization, campaign optimization, sentiment analysis), designing model deployment strategies, and integrating these solutions seamlessly with existing marketing tech stacks and enterprise systems. They must consider the entire lifecycle from data ingestion to model monitoring and retraining. Technical Leadership: The AI/ML architect acts as a technical leader, providing guidance and mentorship to data scientists, ML engineers, and other development teams. They evaluate and select the most suitable AI/ML tools, platforms, and cloud services (AWS, GCP, Azure) for marketing use cases. The architect is also responsible for establishing and promoting best practices for MLOps (Machine Learning Operations), model versioning, continuous integration/continuous deployment (CI/CD) for ML models, and ensuring data quality, ethical AI principles (e.g., bias, fairness), and regulatory compliance (e.g., data privacy laws). Python Programming & Libraries: Proficient in Python with extensive experience using Pandas for data manipulation, NumPy for numerical operations, and Matplotlib/Seaborn for data visualization. Statistical Analysis & Modelling: Strong understanding of statistical concepts, including descriptive statistics, inferential statistics, hypothesis testing, regression analysis, and time series analysis. Data Cleaning & Preprocessing: Expertise in handling messy real-world data, including dealing with missing values, outliers, data normalization/standardization, feature engineering, and data transformation. SQL & Database Management: Ability to query and manage data efficiently from relational databases using SQL, and ideally some familiarity with NoSQL databases. Exploratory Data Analysis (EDA): Skill in visually and numerically exploring datasets to understand their characteristics, identify patterns, anomalies, and relationships. Machine Learning Algorithms: In-depth knowledge and practical experience with a wide range of ML algorithms such as linear models, tree-based models (Random Forests, Gradient Boosting), SVMs, K-means, and dimensionality reduction techniques (PCA). Deep Learning Frameworks: Proficiency with at least one major deep learning framework like TensorFlow or PyTorch. This includes understanding neural network architectures (CNNs, RNNs, Transformers) and their application to various problems.
Model Evaluation & Optimization: Ability to select appropriate evaluation metrics (e.g., precision, recall, F1-score, AUC-ROC, RMSE) for different problem types, diagnose model performance issues (bias-variance trade-off), and apply optimization techniques. Deployment & MLOps Concepts: Deploy machine learning models into production environments, including concepts of API creation, containerization (Docker), version control for models, and monitoring. Qualifications & Skills At least 15+ years of Experience across Data Architecture, Data Science and Machine Learning. Experience in delivering AI/ML models for Marketing Outcomes like Customer Acquisition, Customer Churn, Next Best Product or Offer. This is a mandatory requirement. Experience with Customer Data Platforms (CDP) and Marketing Platforms like Unica, Adobe, SalesForce, Braze, TreasureData, Epsilon, Tealium is mandatory. Experience with AWS SageMaker is advantageous Experience with LangChain, RAG for Generative AI is advantageous. Experience with ETL process and tools like Apache Airflow is advantageous Expertise in Integration tools and frameworks like Postman, Swagger, API Gateways Ability to work well within an agile team environment and apply the related working methods. Excellent communication & interpersonal skills A 4-year degree in Computer Science or IT is a must. Travel: 30% +/- travel required
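
To make the evaluation-metric expectations concrete, here is a small, self-contained scikit-learn sketch computing the metrics named above on synthetic data; it is illustrative only and not part of the posting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score, precision_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced binary-classification data standing in for e.g. a churn dataset.
X, y = make_classification(n_samples=2000, n_features=20, weights=[0.8, 0.2], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
pred = model.predict(X_test)
proba = model.predict_proba(X_test)[:, 1]

print("precision:", precision_score(y_test, pred))
print("recall:   ", recall_score(y_test, pred))
print("F1:       ", f1_score(y_test, pred))
print("ROC AUC:  ", roc_auc_score(y_test, proba))
```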

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

About Us: Birlasoft, a global leader at the forefront of Cloud, AI, and Digital technologies, seamlessly blends domain expertise with enterprise solutions. The company’s consultative and design-thinking approach empowers societies worldwide, enhancing the efficiency and productivity of businesses. As part of the multibillion-dollar diversified CKA Birla Group, Birlasoft with its 12,000+ professionals, is committed to continuing the Group’s 170-year heritage of building sustainable communities. Job Description - Snowflake Tech Lead Experience: 10+ years Location: Mumbai, Pune, Hyderabad Employment Type: Full-time Job Summary We are looking for a Snowflake Tech Lead with 10+ years of experience in data engineering, cloud platforms, and Snowflake implementations. This role involves leading technical teams, designing scalable Snowflake solutions, and optimizing data pipelines for performance and efficiency. The ideal candidate will have deep expertise in Snowflake, ETL/ELT processes, and cloud data architecture. Key Responsibilities 1. Snowflake Development & Optimization Lead Snowflake implementation, including data modeling, warehouse design, and performance tuning. Optimize SQL queries, stored procedures, and UDFs for high efficiency. Implement Snowflake best practices (clustering, partitioning, zero-copy cloning). Manage virtual warehouses, resource monitors, and cost optimization. 2. Data Pipeline & Integration Design and deploy ETL/ELT pipelines using Snowflake, Snowpark, Coalesce. Integrate Snowflake with BI tools (Power BI, Tableau), APIs, and external data sources. Implement real-time and batch data ingestion (CDC, streaming, Snowpipe). 3. Team Leadership & Mentorship Lead a team of data engineers, analysts, and developers in Snowflake projects. Conduct code reviews, performance tuning sessions, and technical training. Collaborate with stakeholders, architects, and business teams to align solutions with requirements. 4. Security & Governance Configure RBAC, data masking, encryption, and row-level security in Snowflake. Ensure compliance with GDPR, HIPAA, or SOC2 standards. Implement data quality checks, monitoring, and alerting. 5. Cloud & DevOps Integration Deploy Snowflake in AWS, Azure Automate CI/CD pipelines for Snowflake using GitHub Actions, Jenkins, or Azure DevOps. Monitor and troubleshoot Snowflake environments using logging tools (Datadog, Splunk). Required Skills & Qualifications 10+ years in data engineering, cloud platforms, or database technologies. 5+ years of hands-on Snowflake development & administration. Strong expertise in SQL, Python for data processing. Experience with Snowflake features (Snowpark, Streams & Tasks, Time Travel). Knowledge of cloud data storage (S3, Blob) and data orchestration (Airflow, DBT). Certifications: Snowflake SnowPro Core/Advanced. Knowledge of DataOps, MLOps, and CI/CD pipelines. Familiarity with DBT, Airflow, SSIS & IICS

Posted 3 weeks ago

Apply

4.0 - 6.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Experience- 4-6Yrs Location ( Mumbai- Thane) Only Immediate joiners Key Responsibilities Database Engineering & Operations Own and manage critical components of the database infrastructure across production and non-production environments. Ensure performance, availability, scalability, and reliability of databases including PostgreSQL, MySQL, and MongoDB Drive implementation of best practices in schema design, indexing, query optimization, and database tuning. Take initiative in root cause analysis and resolution of complex performance and availability issues. Implement and maintain backup, recovery, and disaster recovery procedures; contribute to testing and continuous improvement of these systems. Ensure system health through robust monitoring, alerting, and observability using tools such as Prometheus, Grafana, and CloudWatch. Implement and improve automation for provisioning, scaling, maintenance, and monitoring tasks using scripting (e.g., Python, Bash). Database Security & Compliance Enforce database security best practices, including encryption at-rest and in-transit, IAM/RBAC, and audit logging. Support data governance and compliance efforts related to SOC 2, ISO 27001, or other regulatory standards. Collaborate with the security team on regular vulnerability assessments and hardening initiatives. DevOps & Collaboration Partner with DevOps and Engineering teams to integrate database operations into CI/CD pipelines using tools like Liquibase, Flyway, or custom scripting. Participate in infrastructure-as-code workflows (e.g., Terraform) for consistent and scalable DB provisioning and configuration. Proactively contribute to cross-functional planning, deployments, and system design sessions with engineering and product teams. Required Skills & Experience 4-6 years of production experience managing relational and NoSQL databases in cloud-native environments (AWS, GCP, or Azure). Proficiency in: Relational Databases: PostgreSQL and/or MySQL NoSQL Databases: MongoDB (exposure to Cassandra or DynamoDB is a plus) Deep hands-on experience in performance tuning, query optimization, and troubleshooting live systems. Strong scripting ability (e.g., Python, Bash) for automation of operational tasks. Experience in implementing monitoring and alerting for distributed systems using Grafana, Prometheus, or equivalent cloud-native tools. Understanding of security and compliance principles and how they apply to data systems. Ability to operate with autonomy while collaborating in fast-paced, cross-functional teams. Strong analytical, problem-solving, and communication skills. Nice to Have (Bonus) Experience with Infrastructure as Code tools (Terraform, Pulumi, etc.) for managing database infrastructure. Familiarity with Kafka, Airflow, or other data pipeline tools. Experience working in multi-region or multi-cloud environments with high availability requirements. Exposure to analytics databases (e.g., Druid, ClickHouse, BigQuery, Vertica Db) or search platforms like Elasticsearch. Participation in on-call rotations and contribution to incident response processes.
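
As one concrete example of the performance-tuning work described above, the sketch below lists the slowest statements from PostgreSQL's pg_stat_statements view using psycopg2; it assumes the extension is enabled and that the mean_exec_time column is available (PostgreSQL 13+), and the DSN is a placeholder.

```python
import psycopg2

# Placeholder DSN; real credentials would come from configuration or a vault.
conn = psycopg2.connect("dbname=appdb user=dba host=localhost")

try:
    with conn.cursor() as cur:
        cur.execute(
            """
            SELECT query, calls, mean_exec_time
            FROM pg_stat_statements
            ORDER BY mean_exec_time DESC
            LIMIT 10
            """
        )
        for query, calls, mean_ms in cur.fetchall():
            # Print a compact slow-query report for the top offenders.
            print(f"{mean_ms:10.2f} ms  {calls:8d} calls  {query[:80]}")
finally:
    conn.close()
```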

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Company Description At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future. Job Description This role will be part of a team that develops software that processes data captured every day from over a quarter of a million computer and mobile devices worldwide. The software measures panelists’ activities as they surf the internet via browsers or use mobile apps downloaded from Apple’s and Google’s stores. The Nielsen software meter used to capture this usage data has been optimized to be unobtrusive yet gather many biometric data points that the backend system can use to identify who is using the device, and also detect fraudulent behavior. The Software Engineer is ultimately responsible for delivering technical solutions, from project onboarding through post-launch support, including design, development, and testing, and is expected to coordinate, support, and work with multiple distributed project teams across regions. As a member of the technical staff with our Digital Meter Processing team, you will further develop the backend system that processes massive amounts of data every day, across 3 different AWS regions. Your role will involve designing, implementing, and maintaining robust, scalable solutions that leverage a Java-based system that runs in an AWS environment. You will play a key role in shaping the technical direction of our projects and mentoring other team members. Qualifications Responsibilities System Deployment: Conceive, design and build new features in the existing backend processing pipelines. CI/CD Implementation: Design and implement CI/CD pipelines for automated build, test, and deployment processes. Ensure continuous integration and delivery of features, improvements, and bug fixes. Code Quality and Best Practices: Enforce coding standards, best practices, and design principles. Conduct code reviews and provide constructive feedback to maintain high code quality. Performance Optimization: Identify and address performance bottlenecks in reading, processing, and writing data to the backend data stores. Mentorship and Collaboration: Mentor junior engineers, providing guidance on technical aspects and best practices. Collaborate with cross-functional teams to ensure a cohesive and unified approach to software development. Security and Compliance: Implement security best practices for all tiers of the system. Ensure compliance with industry standards and regulations related to AWS platform security. Key Skills Bachelor's or Master’s degree in Computer Science, Software Engineering, or a related field. Proven experience (minimum 3 years) in high-volume data processing development using ETL tools such as AWS Glue or PySpark, plus Java, SQL, and databases such as Postgres. Minimum 2 years of development on an AWS platform. Strong understanding of CI/CD principles and tools; GitLab a plus. Excellent problem-solving and debugging skills.
Strong communication and collaboration skills with the ability to communicate complex technical concepts and align the organization on decisions. Sound problem-solving skills with the ability to quickly process complex information and present it clearly and simply. Utilizes team collaboration to create innovative solutions efficiently. Other Desirable Skills: Knowledge of networking principles and security best practices. AWS certifications. Experience with Data Warehouses, ETL, and/or Data Lakes very desirable. Experience with Redshift, Airflow, Python, Lambda, Prometheus, Grafana, & OpsGenie a bonus. Exposure to the Google Cloud Platform (GCP). Additional Information Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Atos Atos is a global leader in digital transformation with c. 78,000 employees and annual revenue of c. € 10 billion. European number one in cybersecurity, cloud and high-performance computing, the Group provides tailored end-to-end solutions for all industries in 68 countries. A pioneer in decarbonization services and products, Atos is committed to a secure and decarbonized digital for its clients. Atos is a SE (Societas Europaea) and listed on Euronext Paris. The purpose of Atos is to help design the future of the information space. Its expertise and services support the development of knowledge, education and research in a multicultural approach and contribute to the development of scientific and technological excellence. Across the world, the Group enables its customers and employees, and members of societies at large to live, work and develop sustainably, in a safe and secure information space. Data Streaming Engineer - Experience: 4+ years. Expertise in Python is a must. SQL (ability to write complex SQL queries) is a must. Hands-on experience in Apache Flink Streaming or Spark Streaming is a must. Hands-on expertise in Apache Kafka is a must. Data Lake development experience. Orchestration (Apache Airflow is preferred). Optimization of Spark/PySpark and Hive applications. Trino/AWS Athena (good to have). Snowflake (good to have). Data Quality (good to have). File storage (S3 is good to have). Our Offering - Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment. Wellbeing programs & work-life balance - integration and passion-sharing events. Attractive salary and company initiative benefits. Courses and conferences. Hybrid work culture.
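
For context, a minimal PySpark Structured Streaming job reading from Kafka looks roughly like the sketch below; the broker address and topic are placeholders, and the spark-sql-kafka connector package is assumed to be available on the cluster.

```python
from pyspark.sql import SparkSession

# Assumes the org.apache.spark:spark-sql-kafka-0-10 package is on the classpath.
spark = SparkSession.builder.appName("kafka_stream_example").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "events")                       # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the payload to string before downstream parsing.
decoded = events.selectExpr("CAST(key AS STRING) AS key", "CAST(value AS STRING) AS value")

query = decoded.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```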

Posted 3 weeks ago

Apply

0 years

0 Lacs

India

Remote

We are hiring a Data Engineer. If you are interested, please feel free to share your CV at SyedaRashna@lancesoft.com. Job title: Data Engineer Location: India - Remote Duration: 6 Months Description: We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable. Key Responsibilities: 1. Design, build, and maintain scalable data pipelines using Python, PySpark, and Airflow. 2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). 3. Implement data quality checks, monitoring, and alerting mechanisms. 4. Ensure data security, governance, and compliance across all pipelines. 5. Work closely with cross-functional teams to understand data requirements and deliver solutions. 6. Troubleshoot and resolve issues in production data pipelines. 7. Contribute to the architecture and design of the data platform. 8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. 9. Automate deployment and monitoring of data workflows using DevOps tools and practices. 10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.
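
As a small illustration of responsibility 3 (data quality checks), a pipeline task might validate a freshly loaded batch and fail loudly when a rule is violated; the thresholds and column names below are hypothetical.

```python
import pandas as pd


def check_batch_quality(df: pd.DataFrame) -> None:
    """Raise if the batch violates basic quality rules (illustrative thresholds)."""
    if df.empty:
        raise ValueError("quality check failed: batch is empty")

    null_ratio = df["customer_id"].isna().mean()   # hypothetical key column
    if null_ratio > 0.01:
        raise ValueError(f"quality check failed: {null_ratio:.2%} null customer_id values")

    if df.duplicated(subset=["customer_id", "event_date"]).any():
        raise ValueError("quality check failed: duplicate customer/day rows")


# Example usage inside an Airflow PythonOperator callable:
# check_batch_quality(pd.read_parquet("/data/staging/batch.parquet"))
```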

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

India

Remote

Data Engineer Remote 7 Months Contract + Extendable Experience: 6 Years We are seeking a highly skilled and motivated Data Engineer to join our dynamic technology team. The ideal candidate will have deep expertise in data engineering tools and platforms, particularly Apache Airflow, PySpark, and Python, with hands-on experience in Cloudera Data Platform (CDP). A strong understanding of DevOps practices and exposure to AI/ML and Generative AI use cases is highly desirable. Key Responsibilities: 1. Design, build, and maintain scalable data pipelines using Python, PySpark and Airflow. 2. Develop and optimize ETL workflows on Cloudera Data Platform (CDP). 3. Implement data quality checks, monitoring, and alerting mechanisms. 4. Ensure data security, governance, and compliance across all pipelines. 5.Work closely with cross-functional teams to understand data requirements and deliver solutions. 6. Troubleshoot and resolve issues in production data pipelines. 7. Contribute to the architecture and design of the data platform. 8. Collaborate with engineering teams and analysts to work on AI/ML and Gen AI use cases. 9. Automate deployment and monitoring of data workflows using DevOps tools and practices. 10. Stay updated with the latest trends in data engineering, AI/ML, and Gen AI technologies.

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Data Engineer - Azure Databricks, Pyspark, Python, Airflow - Chennai/Pune, India (6-10 years of experience only) YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Junior Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, Pyspark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub. Responsibilities Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, Pyspark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies. Deploy application components using CI/CD pipelines. Build utilities for monitoring and automating repetitive functions. Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure, Tech Ops. Collaborate with the Data Science team and productionize the ML models. Participate in a rotational support schedule to provide responses to customer queries and deploy bug fixes in a timely and accurate manner. Qualifications 6-10 years of applicable software engineering experience. Strong fundamentals with experience in Big Data technologies, Spark, Pyspark, Scala, Pandas, Databricks, Airflow, and SQL. Must have experience in cloud technologies, preferably Microsoft Azure. Must have experience in performance optimization of Spark workloads. Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker. Good to have knowledge of Snowflake. Good to have knowledge of relational databases, preferably PostgreSQL. Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business. Minimum B.S. degree in Computer Science, Computer Engineering or related field Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities. Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
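
As a flavour of the day-to-day PySpark work referenced in the qualifications above, here is a small batch aggregation sketch; the paths and column names are placeholders chosen for the example, not client specifics.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_daily_rollup").getOrCreate()

# Placeholder input path; on Azure this would typically be an ABFS/ADLS location.
sales = spark.read.parquet("/data/raw/sales")

daily = (
    sales.withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "store_id")
    .agg(F.sum("amount").alias("revenue"), F.countDistinct("order_id").alias("orders"))
)

# Partitioning by date keeps downstream reads and incremental loads cheap.
daily.write.mode("overwrite").partitionBy("order_date").parquet("/data/curated/sales_daily")
```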

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description: Data Engineer - Azure Databricks, Pyspark, Python, Airflow - Chennai/Pune, India (3-6 years of experience only) YOU’LL BUILD TECH THAT EMPOWERS GLOBAL BUSINESSES Our Connect Technology teams are working on our new Connect platform, a unified, global, open data ecosystem powered by Microsoft Azure. Our clients around the world rely on Connect data and insights to innovate and grow. As a Junior Data Engineer, you’ll be part of a team of smart, highly skilled technologists who are passionate about learning and supporting cutting-edge technologies such as Spark, Scala, Pyspark, Databricks, Airflow, SQL, Docker, Kubernetes, and other data engineering tools. These technologies are deployed using DevOps pipelines leveraging Azure, Kubernetes, Jenkins and Bitbucket/GitHub. Responsibilities Develop, test, troubleshoot, debug, and make application enhancements leveraging Spark, Pyspark, Scala, Pandas, Databricks, Airflow, and SQL as the core development technologies. Deploy application components using CI/CD pipelines. Build utilities for monitoring and automating repetitive functions. Collaborate with Agile cross-functional teams - internal and external clients including Operations, Infrastructure, Tech Ops. Collaborate with the Data Science team and productionize the ML models. Participate in a rotational support schedule to provide responses to customer queries and deploy bug fixes in a timely and accurate manner. Qualifications 3-6 years of applicable software engineering experience. Strong fundamentals with experience in Big Data technologies, Spark, Pyspark, Scala, Pandas, Databricks, Airflow, and SQL. Must have experience in cloud technologies, preferably Microsoft Azure. Must have experience in performance optimization of Spark workloads. Good to have experience with DevOps technologies such as GitHub, Kubernetes, Jenkins, Docker. Good to have knowledge of Snowflake. Good to have knowledge of relational databases, preferably PostgreSQL. Excellent English communication skills, with the ability to effectively interface across cross-functional technology teams and the business. Minimum B.S. degree in Computer Science, Computer Engineering or related field Additional Information Enjoy a flexible and rewarding work environment with peer-to-peer recognition platforms. Recharge and revitalize with the help of wellness plans made for you and your family. Plan your future with financial wellness tools. Stay relevant and upskill yourself with career development opportunities. Our Benefits Flexible working environment Volunteer time off LinkedIn Learning Employee-Assistance-Program (EAP) About NIQ NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com Want to keep up with our latest updates? Follow us on: LinkedIn | Instagram | Twitter | Facebook Our commitment to Diversity, Equity, and Inclusion NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce.
We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action-Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion

Posted 3 weeks ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company: They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society. About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries. Job Description Python API / FastAPI Developer Location: Hyderabad Who are we looking for? We are seeking a Python Developer with strong expertise in Python and databases and hands-on experience in Azure cloud technologies. The role will focus on migrating processes from the current 3rd-party RPA modules to Apache Airflow modules, ensuring seamless orchestration and automation of workflows. The ideal candidate will bring technical proficiency, problem-solving skills, and a deep understanding of workflow automation, along with a strong grasp of North America insurance industry processes. Key Responsibilities: · Design, develop, and implement workflows using Apache Airflow to replace the current 3rd-party RPA modules. · Build and optimize Python scripts to enable automation and integration with Apache Airflow pipelines. · Leverage Azure cloud services for deployment, monitoring, and scaling of Airflow workflows. · Collaborate with cross-functional teams to understand existing processes, dependencies, and business objectives. · Lead the migration of critical processes such as Auto, Package, Work Order Processing, and Policy Renewals within CI, Major Accounts, and Middle Market LOBs. · Ensure the accuracy, efficiency, and scalability of new workflows post-migration. · Perform unit testing, troubleshooting, and performance tuning for workflows and scripts. · Document workflows, configurations, and technical details to maintain clear and comprehensive project records. · Mentor junior developers and share best practices for Apache Airflow and Python development. Required Skills: · Proficiency in Python programming for API development, scripting, data transformation, process automation, and database interactions. · Hands-on experience in Azure cloud technologies (e.g., Azure Data Factory, Azure DevOps, Azure Storage). · Proven experience in migrating and automating processes from legacy systems or RPA modules. · Strong analytical and problem-solving skills with attention to detail. · Excellent communication and documentation skills.
Process Skills: · Experience working with Auto, Package, Work Order Processing, and Policy Renewals. · Familiarity with Commercial Insurance (CI), Major Accounts, and Middle Market LOBs in the North America insurance industry. · Understanding of RPA processes and architecture.
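
To ground the FastAPI expectation, the sketch below exposes a single read endpoint of the kind such a migration might serve; the route, model, and in-memory data are purely illustrative and unrelated to the client's actual systems.

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="Renewal Status API (example)")


class RenewalStatus(BaseModel):
    policy_id: str
    status: str


# In-memory stand-in for a real policy store (illustrative only).
_FAKE_DB = {"POL-001": "renewed", "POL-002": "pending"}


@app.get("/renewals/{policy_id}", response_model=RenewalStatus)
def get_renewal(policy_id: str) -> RenewalStatus:
    status = _FAKE_DB.get(policy_id)
    if status is None:
        raise HTTPException(status_code=404, detail="policy not found")
    return RenewalStatus(policy_id=policy_id, status=status)

# Run locally with: uvicorn renewal_api:app --reload   (module name is hypothetical)
```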

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

About Company : They balance innovation with an open, friendly culture and the backing of a long-established parent company, known for its ethical reputation. We guide customers from what’s now to what’s next by unlocking the value of their data and applications to solve their digital challenges, achieving outcomes that benefit both business and society. About Client: Our client is a global digital solutions and technology consulting company headquartered in Mumbai, India. The company generates annual revenue of over $4.29 billion (₹35,517 crore), reflecting a 4.4% year-over-year growth in USD terms. It has a workforce of around 86,000 professionals operating in more than 40 countries and serves a global client base of over 700 organizations. Our client operates across several major industry sectors, including Banking, Financial Services & Insurance (BFSI), Technology, Media & Telecommunications (TMT), Healthcare & Life Sciences, and Manufacturing & Consumer. In the past year, the company achieved a net profit of $553.4 million (₹4,584.6 crore), marking a 1.4% increase from the previous year. It also recorded a strong order inflow of $5.6 billion, up 15.7% year-over-year, highlighting growing demand across its service lines. Key focus areas include Digital Transformation, Enterprise AI, Data & Analytics, and Product Engineering—reflecting its strategic commitment to driving innovation and value for clients across industries. Job Title: Python Developer Location: Hyderabad Experience: 5–10 Years Employment Type: Contract Who Are We Looking For? We are seeking a Python Developer with solid experience in Python, Database interactions, and Azure Cloud technologies . The role involves migrating 3rd-party RPA modules to Apache Airflow , enabling seamless orchestration and automation of business workflows. Knowledge of North America insurance industry processes is highly desirable. Key Responsibilities: Design, develop, and implement workflows using Apache Airflow to replace existing 3rd-party RPA modules. Develop and optimize Python scripts for automation and integration with Airflow pipelines. Deploy, monitor, and scale Airflow workflows using Azure Cloud services . Collaborate with business and technical teams to understand current systems, dependencies, and objectives. Migrate critical insurance processes like Auto, Package, Work Order Processing, and Policy Renewals for Commercial Insurance (CI), Major Accounts, and Middle Market LOBs. Ensure accuracy, efficiency, and scalability of workflows post-migration. Conduct unit testing , debugging, and performance tuning. Document workflows, configurations, and technical designs for knowledge sharing. Mentor junior developers and promote best practices in Python and Airflow development. Required Technical Skills: Strong Python programming skills – for API development, scripting, data transformation, automation, and database interaction. Hands-on experience with Apache Airflow – workflow orchestration and DAG development. Proficiency with Azure Cloud services , such as: Azure Data Factory Azure DevOps Azure Blob/Storage Experience migrating legacy systems or 3rd-party RPA modules to modern workflow automation tools. Strong understanding of database concepts and interaction (SQL/NoSQL). Familiarity with insurance domain processes is a plus.

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Gachibowli, Hyderabad, Telangana

On-site

Staff Data Engineer Location: Gachibowli Hyderabad, TG, IN Company: Goodyear Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera Sponsorship Available: No Relocation Assistance Available: No STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists, information technology specialists as well as our global technology and operations teams to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects continues to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course - Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We’d like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams. What You Will Do We think you’ll be excited about having opportunities to: Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products. Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources. Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP. Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB). Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment. Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams. Implement DevOps and DataOps practices including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes). Learn about the tire industry and tire manufacturing processes from subject matter experts. Be a part of cross-functional teams working together to deliver impactful results. What We Expect Bachelor’s degree in computer science or a similar technical field; preferred: Master’s degree in computer science or a similar field 5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows. Experience with the following technology groups: Strong experience in Python, Java, or other languages for data engineering and scripting.
Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, DynamoDB), including query optimization and schema design. Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP. Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt). Understanding of RESTful APIs and integration of data services with applications. Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks. Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization. Good teamwork skills - ability to work in a team environment and deliver results on time. Strong communication skills - capable of conveying information concisely to diverse audiences. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate Job Segment: Test Engineer, R&D Engineer, Software Engineer, Cloud, Computer Science, Engineering, Technology

Posted 3 weeks ago

Apply

0.0 years

0 Lacs

Gachibowli, Hyderabad, Telangana

On-site

Location: IN - Hyderabad Telangana Goodyear Talent Acquisition Representative: Katrena Calimag-Rupera Sponsorship Available: No Relocation Assistance Available: No STAFF DIGITAL SOFTWARE ENGINEER – Data Engineer Are you interested in an exciting opportunity to help shape the user experience and design front-end applications for data-driven digital products that drive better process performance across a global company? The Data Driven Engineering and Global Information Technology groups at the Goodyear Technology India Center, Hyderabad, India are looking for a dynamic individual with a strong background in data engineering and infrastructure to partner with data scientists, information technology specialists as well as our global technology and operations teams to derive valuable insights from our expansive data sources and help develop data-driven solutions for important business applications across the company. Since its inception, the Data Science portfolio of projects continues to grow and includes areas of tire manufacturing, operations, business, and technology. The people in our Data Science group come from a broad range of backgrounds: Mathematics, Statistics, Cognitive Linguistics, Astrophysics, Biology, Computer Science, Mechanical, Electrical, Chemical, and Industrial Engineering, and of course - Data Science. This diverse group works together to develop innovative tools and methods for simulating, modeling, and analyzing complex processes throughout our company. We’d like you to help us build the next generation of data-driven applications for the company and be a part of the Information Technology and Data Driven Engineering teams. What You Will Do We think you’ll be excited about having opportunities to: Design and build robust, scalable, and efficient data pipelines and ETL processes to support analytics, data science, and digital products. Collaborate with cross-functional teams to understand data requirements and implement solutions that integrate data from diverse sources. Lead the development, management, and optimization of cloud-based data infrastructure using platforms such as AWS, Azure, or GCP. Architect and maintain highly available and performant relational database systems (e.g., PostgreSQL, MySQL) and NoSQL systems (e.g., MongoDB, DynamoDB). Partner with data scientists to ensure efficient and secure data access for modeling, experimentation, and production deployment. Build and maintain data services and APIs to facilitate access to curated datasets across internal applications and teams. Implement DevOps and DataOps practices including CI/CD for data workflows, infrastructure as code, containerization (Docker), and orchestration (Kubernetes). Learn about the tire industry and tire manufacturing processes from subject matter experts. Be a part of cross-functional teams working together to deliver impactful results. What We Expect Bachelor’s degree in computer science or a similar technical field; preferred: Master’s degree in computer science or a similar field 5 or more years of experience designing and maintaining data pipelines, cloud-based data systems, and production-grade data workflows. Experience with the following technology groups: Strong experience in Python, Java, or other languages for data engineering and scripting. Deep knowledge of relational databases (e.g., PostgreSQL, MySQL) and NoSQL databases (e.g., MongoDB, DynamoDB), including query optimization and schema design.
Experience designing and deploying solutions on cloud platforms like AWS (e.g., S3, Redshift, RDS), Azure, or GCP. Familiarity with data modeling, data warehousing, and distributed data processing frameworks (e.g., Apache Spark, Airflow, dbt). Understanding of RESTful APIs and integration of data services with applications. Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins), Docker, Kubernetes, and infrastructure-as-code frameworks. Solid grasp of software engineering best practices, including code versioning, testing, and performance optimization. Good teamwork skills - ability to work in a team environment and deliver results on time. Strong communication skills - capable of conveying information concisely to diverse audiences. Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world’s largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate

Posted 3 weeks ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana

On-site

Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Software Engineer-MLOps We are seeking an enthusiastic and detail-oriented MLOps Engineer to support the development, deployment, and monitoring of machine learning models in production environments. This is a hands-on role ideal for candidates looking to grow their skills at the intersection of data science, software engineering, and DevOps. You will work closely with senior MLOps engineers, data scientists, and software developers to build scalable, reliable, and automated ML workflows across cloud platforms like AWS and Azure Key Responsibilities include: Assist in building and maintaining ML pipelines for data preparation, training, testing, and deployment Support the automation of model lifecycle tasks including versioning, packaging, and monitoring Build and manage ML workloads on AWS (SageMaker Unified studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage) Assist with containerizing ML models using Docker, and deploying using Kubernetes or cloud-native orchestrators Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation Participate in implementing CI/CD pipelines for ML workflows using GitHub Actions, Azure DevOps, or Jenkins Contribute to testing frameworks for ML models and data validation (e.g., pytest, Great Expectations). Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation Participate in diagnosing issues related to model accuracy, latency, or infrastructure bottlenecks Continuously improve knowledge of MLOps tools, ML frameworks, and cloud practices. Required Qualification: Bachelor's/Master’s in Computer Science, Engineering, or related discipline 7 years in Devops, with 2+ years in MLOps. Good Understanding of MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git. Proficient in Python and familiar with bash scripting Exposure to MLOps platforms or tools such as SageMaker Studio, Azure ML, or GCP Vertex AI. Requisition ID: 610751 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!
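
As one concrete example of the model-lifecycle tasks listed above, here is a minimal MLflow tracking sketch on synthetic data; the experiment name and model choice are illustrative assumptions, not Boston Scientific specifics.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("mlops-demo")           # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(max_iter=500).fit(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))

    # Record the parameters, metric, and a versioned model artifact for later deployment.
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("accuracy", accuracy)
    mlflow.sklearn.log_model(model, "model")
```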

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Gurugram, Haryana

On-site

Additional Locations: India-Haryana, Gurgaon Diversity - Innovation - Caring - Global Collaboration - Winning Spirit - High Performance At Boston Scientific, we’ll give you the opportunity to harness all that’s within you by working in teams of diverse and high-performing employees, tackling some of the most important health industry challenges. With access to the latest tools, information and training, we’ll help you in advancing your skills and career. Here, you’ll be supported in progressing – whatever your ambitions. Senior Software Engineer-MLOps We are looking for a highly skilled Senior Software Engineer – MLOps with deep expertise in building and managing production-grade ML pipelines in AWS and Azure cloud environments. This role requires a strong foundation in software engineering, DevOps principles, and ML model lifecycle automation to enable reliable and scalable machine learning operations across the organization. Key Responsibilities include: Design and build robust MLOps pipelines for model training, validation, deployment, and monitoring Automate workflows using CI/CD tools such as GitLab Actions, Azure DevOps, Jenkins, or Argo Workflows Build and manage ML workloads on AWS (SageMaker Unified Studio, Bedrock, EKS, Lambda, S3, Athena) and Azure (Azure ML Foundry, AKS, ADF, Blob Storage) Design secure and cost-efficient ML architecture leveraging cloud-native services Manage infrastructure using IaC tools such as Terraform, Bicep, or CloudFormation Implement cost optimization and performance tuning for cloud workloads Package ML models using Docker, and orchestrate deployments with Kubernetes on EKS/AKS Ensure robust CI/CD pipelines and infrastructure as code (IaC) using tools like Terraform or CloudFormation Integrate observability tools for model performance, drift detection, and lineage tracking (e.g., Fiddler, MLflow, Prometheus, Grafana, Azure Monitor, CloudWatch) Ensure model reproducibility, versioning, and compliance with audit and regulatory requirements Collaborate with data scientists, software engineers, DevOps, and cloud architects to operationalize AI/ML use cases Mentor junior MLOps engineers and evangelize MLOps best practices across teams Required Qualification: Bachelor's/Master’s in Computer Science, Engineering, or related discipline 10 years in DevOps, with 2+ years in MLOps. Proficient with MLflow, Airflow, FastAPI, Docker, Kubernetes, and Git. Experience with feature stores (e.g., Feast), model registries, and experiment tracking. Proficiency in DevOps & MLOps automation using CloudFormation/Terraform/Bicep. Requisition ID: 610750 As a leader in medical science for more than 40 years, we are committed to solving the challenges that matter most – united by a deep caring for human life. Our mission to advance science for life is about transforming lives through innovative medical solutions that improve patient lives, create value for our customers, and support our employees and the communities in which we operate. Now more than ever, we have a responsibility to apply those values to everything we do – as a global business and as a global corporate citizen. So, choosing a career with Boston Scientific (NYSE: BSX) isn’t just business, it’s personal. And if you’re a natural problem-solver with the imagination, determination, and spirit to make a meaningful difference to people worldwide, we encourage you to apply and look forward to connecting with you!

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Greetings from Synergy Resource Solutions, a leading recruitment consultancy. Our client is an ISO 27001:2013 and ISO 9001 certified company and a pioneering web design and development company from India. The company has been voted among the top 10 mobile app development companies in India and is a leading IT consulting and web solution provider for custom software, websites, games, custom web applications, enterprise mobility, mobile apps, and cloud-based application design and development. It is ranked among the fastest-growing web design and development companies in India, with 3900+ successfully delivered projects across the United States, UK, UAE, Canada, and other countries. A client retention rate of over 95% demonstrates their level of service and client satisfaction.

Position: Senior Data Engineer
Experience: 5+ years of relevant experience
Education Qualification: Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field
Job Location: Ahmedabad
Shift: 11 AM – 8.30 PM

Key Responsibilities:
Our client is seeking an experienced and motivated Senior Data Engineer to join their AI & Automation team. The ideal candidate will have 5–8 years of experience in data engineering, with a proven track record of designing and implementing scalable data solutions. A strong background in database technologies, data modeling, and data pipeline orchestration is essential. Additionally, hands-on experience with generative AI technologies and their applications in data workflows will set you apart. In this role, you will lead data engineering efforts to enhance automation, drive efficiency, and deliver data-driven insights across the organization.

Job Description:
• Design, build, and maintain scalable, high-performance data pipelines and ETL/ELT processes across diverse database platforms.
• Architect and optimize data storage solutions to ensure reliability, security, and scalability.
• Leverage generative AI tools and models to enhance data engineering workflows, drive automation, and improve insight generation.
• Collaborate with cross-functional teams (Data Scientists, Analysts, and Engineers) to understand and deliver on data requirements.
• Develop and enforce data quality standards, governance policies, and monitoring systems to ensure data integrity.
• Create and maintain comprehensive documentation for data systems, workflows, and models.
• Implement data modeling best practices and optimize data retrieval processes for better performance.
• Stay up-to-date with emerging technologies and bring innovative solutions to the team.

Qualifications:
• Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field.
• 5–8 years of experience in data engineering, designing and managing large-scale data systems.
• Strong expertise in database technologies; the mandatory skills are as follows:
  - SQL
  - NoSQL (MongoDB, Cassandra, or CosmosDB)
  - One of the following: Snowflake, Redshift, BigQuery, or Microsoft Fabric
  - Azure
• Hands-on experience implementing and working with generative AI tools and models in production workflows.
• Proficiency in Python and SQL, with experience in data processing frameworks (e.g., Pandas, PySpark).
• Experience with ETL tools (e.g., Apache Airflow, MS Fabric, Informatica, Talend) and data pipeline orchestration platforms.
• Strong understanding of data architecture, data modeling, and data governance principles.
• Experience with cloud platforms (preferably Azure) and associated data services.

Skills:
• Advanced knowledge of Database Management Systems and ETL/ELT processes.
• Expertise in data modeling, data quality, and data governance.
• Proficiency in Python programming, version control systems (Git), and data pipeline orchestration tools.
• Familiarity with AI/ML technologies and their application in data engineering.
• Strong problem-solving and analytical skills, with the ability to troubleshoot complex data issues.
• Excellent communication skills, with the ability to explain technical concepts to non-technical stakeholders.
• Ability to work independently, lead projects, and mentor junior team members.
• Commitment to staying current with emerging technologies, trends, and best practices in the data engineering domain.

If your profile matches the requirement and you are interested in this job, please share your updated resume with details of your present salary, expected salary, and notice period.
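
To illustrate the pipeline-orchestration work this posting describes, below is a minimal Apache Airflow DAG sketch (assuming Airflow 2.4+). The DAG id, schedule, file paths, and task logic are hypothetical placeholders rather than anything specified in the posting.

# Minimal Airflow DAG sketch for a daily ETL job (all names are illustrative).
from datetime import datetime

import pandas as pd
from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # A real pipeline would pull from an API, a database, or a file drop.
    df = pd.DataFrame({"order_id": [1, 2, 3], "amount": [120.0, 75.5, 310.2]})
    df.to_csv("/tmp/orders_raw.csv", index=False)


def transform():
    df = pd.read_csv("/tmp/orders_raw.csv")
    df["amount_with_tax"] = df["amount"] * 1.18  # hypothetical business rule
    df.to_csv("/tmp/orders_clean.csv", index=False)


def load():
    # A real load step would write to Snowflake, Redshift, BigQuery, etc.
    df = pd.read_csv("/tmp/orders_clean.csv")
    print(f"Loaded {len(df)} rows")


with DAG(
    dag_id="orders_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency: extract, then transform, then load.
    t1 >> t2 >> t3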

Posted 3 weeks ago

Apply

2.0 - 6.0 years

0 Lacs

karnataka

On-site

As a Data Scientist for AI Products at Linde, you will be an integral part of the Artificial Intelligence team within Linde's global corporate division. Your primary responsibility will involve working on real business challenges and opportunities across multiple countries. Your focus will be on supporting the AI team in extending existing AI products and developing new ones to cater to various use cases within Linde's business and value chain. Collaboration with diverse teams including Project Managers, Data Scientists, and Data and Software Engineers will be a key aspect of your role. You will work directly with a variety of data sources, types, and structures to derive actionable insights. Developing, customizing, and managing AI software products based on Machine and Deep Learning backends will be among your core tasks. Your role will also entail supporting the replication of existing products and pipelines to other systems and geographies, defining data requirements for new developments, and interacting with business functions to identify opportunities with potential business impact.

To excel in this role, you are required to have a Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research, or a related field. A strong understanding of and practical experience in Multivariate Statistics, Machine Learning, and Probability concepts are essential. Additionally, hands-on experience with preprocessing, feature engineering, and data cleansing, as well as Python programming skills, are crucial. Proficiency in handling large datasets using SQL, knowledge of data architectures, and experience with data visualization tools like Tableau or PowerBI are desired. Your result-driven mindset, excellent communication skills, and ability to structure projects from ideation to implementation will be valuable assets in this role. Fluency in English is a must, and any experience with DevOps, MS Azure, Azure ML, Kedro, Airflow, MLflow, or similar tools will be advantageous.

Working at Linde offers you the opportunity to be part of a leading global industrial gases and engineering company with a strong commitment to sustainable development and customer success. If you are looking for a challenging yet rewarding career where you can make a positive impact on the world, Linde provides limitless possibilities for your professional growth. Join us in our mission to make the world more productive and sustainable. Let's talk about how you can contribute to our team!
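
Purely as an illustration of the preprocessing and feature-engineering work mentioned above, here is a minimal scikit-learn sketch. The column names, toy values, and imputation choices are invented for the example and are not drawn from the posting.

# Minimal preprocessing / feature-engineering sketch (data and column names are invented).
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical raw data with numeric and categorical fields plus missing values.
df = pd.DataFrame({
    "flow_rate": [10.2, 11.5, None, 9.8],
    "pressure": [3.1, 2.9, 3.4, None],
    "plant": ["A", "B", "A", "C"],
    "alarm": [0, 1, 0, 1],
})

X = df[["flow_rate", "pressure", "plant"]]
y = df["alarm"]

# Simple cleansing step: fill numeric gaps with the column median before modeling.
X = X.fillna({"flow_rate": X["flow_rate"].median(),
              "pressure": X["pressure"].median()})

# Scale numeric columns and one-hot encode the categorical column.
preprocess = ColumnTransformer([
    ("num", StandardScaler(), ["flow_rate", "pressure"]),
    ("cat", OneHotEncoder(handle_unknown="ignore"), ["plant"]),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(X, y)
print(model.predict(X))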

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

pune, maharashtra

On-site

Join as a Big Data Engineer at Barclays and lead the evolution of the digital landscape to drive innovation and excellence. Utilize cutting-edge technology to revolutionize digital offerings and ensure unparalleled customer experiences.

To succeed in this role, you should possess the following essential skills:
- Full Stack Software Development for large-scale, mission-critical applications.
- Proficiency in distributed big data systems like Spark, Hive, Kafka streaming, Hadoop, Airflow.
- Expertise in Scala, Java, Python, J2EE technologies, Microservices, Spring, Hibernate, REST APIs.
- Experience with n-tier web application development and frameworks such as Spring Boot, Spring MVC, JPA, Hibernate.
- Familiarity with version control systems, particularly Git; GitHub Copilot experience is a bonus.
- Proficient in API development using SOAP or REST, JSON, and XML.
- Hands-on experience in developing back-end applications with multi-process and multi-threaded architectures.
- Skilled in building scalable microservices solutions using integration design patterns, Docker, containers, and Kubernetes.
- Knowledge of DevOps practices including CI/CD, test automation, and build automation using tools like Jenkins, Maven, Chef, Git, Docker.
- Experience with data processing in cloud environments like Azure or AWS.
- Essential experience in data product development and Agile development methodologies like SCRUM.
- Result-oriented with strong analytical and problem-solving skills.
- Excellent verbal and written communication and presentation skills.

Your primary responsibilities will include:
- Developing and delivering high-quality software solutions using industry-aligned programming languages, frameworks, and tools, ensuring scalability, maintainability, and performance optimization.
- Collaborating cross-functionally with product managers, designers, and engineers to define software requirements, devise solution strategies, and align with business objectives.
- Promoting a culture of code quality and knowledge sharing through participation in code reviews and industry technology communities.
- Ensuring secure coding practices to protect data and mitigate vulnerabilities, along with effective unit testing practices for proper code design and reliability.

As a Big Data Engineer at Barclays, you will play a crucial role in designing, developing, and enhancing software to provide business, platform, and technology capabilities for customers and colleagues. You will contribute to technical excellence, continuous improvement, and risk mitigation while adhering to Barclays' values of Respect, Integrity, Service, Excellence, and Stewardship, and embodying the Barclays Mindset of Empower, Challenge, and Drive.
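
For illustration of the Spark and Kafka streaming skills listed above, here is a minimal PySpark Structured Streaming sketch. The broker address, topic name, and message schema are assumptions invented for the example, and the job needs the spark-sql-kafka connector package on the classpath; none of this comes from the posting itself.

# Minimal PySpark Structured Streaming sketch reading from Kafka (illustrative only).
# Requires the spark-sql-kafka connector package when submitting the job.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("payments-stream-demo").getOrCreate()

schema = StructType([
    StructField("account_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("currency", StringType()),
])

# Read raw events from a hypothetical Kafka topic.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "localhost:9092")
       .option("subscribe", "payments")
       .load())

# Kafka delivers bytes; parse the JSON value into typed columns.
events = (raw.selectExpr("CAST(value AS STRING) AS json")
          .select(F.from_json("json", schema).alias("e"))
          .select("e.*"))

# Simple running aggregate: total amount per currency.
totals = events.groupBy("currency").agg(F.sum("amount").alias("total_amount"))

query = (totals.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()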

Posted 3 weeks ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

At Allstate, great things happen when our people work together to protect families and their belongings from life's uncertainties. For over 90 years, our innovative drive has kept us a step ahead of our customers' evolving needs, from advocating for safety measures like seat belts and airbags to being a leader in pricing sophistication, telematics, and device and identity protection.

This role is responsible for leading the use of data to make decisions. You will develop and execute new machine learning predictive modeling algorithms, code tools using machine learning/predictive modeling for business decisions, integrate new data to improve modeling results, and find solutions to business problems through machine learning/predictive modeling. In addition, you will manage projects of small to medium complexity.

We are seeking a Data Scientist to apply machine learning and advanced analytics to solve complex business problems. The ideal candidate will possess technical expertise, business acumen, and a passion for solving high-impact problems. Your responsibilities will include developing machine learning models, integrating new data sources, and delivering solutions that enhance decision-making. You will collaborate with cross-functional teams to translate insights into action, from design to deployment.

Key Responsibilities:
- Design, build, and validate statistical and machine learning models for key business problems.
- Perform data exploration and analysis to uncover insights and improve model performance.
- Communicate findings to stakeholders and collaborate with teams to ensure solutions are adopted.
- Stay updated on modeling techniques, tools, and technologies, integrating innovative approaches.
- Lead data science initiatives from planning to delivery, ensuring measurable business impact.
- Provide mentorship to junior team members and lead technical teams as required.

Must-Have Skills:
- 4 to 8 years of experience in applied data science, delivering business value through machine learning.
- Proficiency in Python with experience in libraries like scikit-learn, pandas, NumPy, and TensorFlow or PyTorch.
- Strong foundation in statistical analysis, regression modeling, classification techniques, and more.
- Hands-on experience with building and deploying machine learning models in cloud environments.
- Ability to translate complex business problems into structured data science problems.
- Strong communication, stakeholder management, analytical, and problem-solving skills.
- Proactive in identifying opportunities for data-driven decision-making.
- Experience in Agile or Scrum-based project environments.

Preferred Skills:
- Experience with Large Language Models (LLMs) and transformer architectures.
- Experience with production-grade ML platforms and orchestration tools for scaling models.

Primary Skills: Business Case Analyses, Data Analytics, Predictive Analytics, Predictive Modeling, Waterfall Project Management.
Shift Time: Shift B (India).
Recruiter Info: Annapurna Jha, email: ajhat@allstate.com.

About Allstate: The Allstate Corporation is a leading insurance provider in the US, with operations in multiple countries, including India. Allstate India is a strategic business services arm focusing on technology, innovation, and operational excellence. Learn more about Allstate India here.
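
As a small illustration of the model-building and validation work listed above, here is a minimal scikit-learn sketch using synthetic data. The dataset, model choice, and metrics are assumptions made for the example, not anything prescribed by the posting.

# Minimal model-validation sketch with scikit-learn (data is synthetic).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import cross_val_score, train_test_split

# Imbalanced binary classification problem, roughly 80/20.
X, y = make_classification(n_samples=2000, n_features=25, weights=[0.8, 0.2],
                           random_state=7)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=7)

model = GradientBoostingClassifier(random_state=7)

# 5-fold cross-validation gives a more honest performance estimate than a single split.
scores = cross_val_score(model, X_train, y_train, cv=5, scoring="roc_auc")
print(f"CV ROC-AUC: {scores.mean():.3f} +/- {scores.std():.3f}")

# Final fit and held-out evaluation.
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))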

Posted 3 weeks ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

About The Role
We are seeking an experienced Data Engineer with deep hands-on expertise in AWS, Azure Databricks, Snowflake, and modern data engineering practices to join our growing Data & AI Engineering team. The ideal candidate is a strategic thinker who can design scalable platforms, drive robust data solutions, and support high-impact AI/GenAI projects from the ground up.

Key Responsibilities
- Design, build, and optimize scalable data pipelines using modern frameworks and orchestration tools.
- Develop and maintain ETL/ELT workflows using AWS, Azure Databricks, Airflow, and Azure Data Factory.
- Manage and model data in Snowflake to support advanced analytics and machine learning use cases.
- Collaborate with analytics, product, and engineering teams to align data solutions with business goals.
- Ensure high standards for data quality, governance, and pipeline performance.
- Mentor junior engineers and help lead a high-performing data and platform engineering team.
- Lead and support GenAI platform initiatives, including building reusable libraries, integrating vector databases, and developing LLM-based pipelines.
- Build components of agentic frameworks using Python, Spring AI, and deploy them on AWS EKS.
- Establish and manage CI/CD pipelines using Jenkins.
- Drive ML Ops and model deployment workflows to ensure reliable and scalable AI solution delivery.

Required Qualifications
- 3+ years of working experience in data engineering.
- Proven hands-on experience with Azure Databricks, Snowflake, Airflow, and Python.
- Strong proficiency in SQL, Spark, Spark Streaming, and modern data orchestration frameworks.
- Solid understanding of data modeling, ETL best practices, and performance optimization.
- Experience in cloud-native environments (AWS and/or Azure).
- Strong hands-on expertise in AWS EKS, CI/CD (Jenkins), and ML Ops/model deployment workflows.
- Ability to lead, mentor, and collaborate effectively across cross-functional teams.

Preferred Qualifications
- Experience with Search Platforms such as Elasticsearch, SOLR, OpenSearch, or Vespa.
- Familiarity with Spring Boot microservices and EKS-based deployments.
- Background in Recommender Systems, with leadership roles in AI/ML projects.
- Expertise in GenAI platform engineering, including LLMs, RAG architecture, Vector Databases, and agentic design.
- Proficiency in Python, Java, Spring AI, and enterprise-grade software development.
- Ability to build platform-level solutions with a focus on reusability, runtime libraries, and scalability.

What We Offer
- A unique opportunity to build and scale cutting-edge AI and data platforms that drive meaningful business outcomes.
- A collaborative, growth-oriented work culture with room for ownership and innovation.
- Competitive compensation and a comprehensive benefits package.
- Flexible hybrid/remote work model to support work-life balance.

Work Location: Chennai – Hybrid/Remote. (ref:hirist.tech)
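
To give a concrete flavour of the Spark-based ELT work described above, here is a minimal batch sketch that writes a Delta table. The paths, column names, and aggregation are invented for the example, and the Delta configuration assumes the delta-spark package is available (on a Databricks cluster these extensions are typically preconfigured).

# Minimal batch ETL sketch on Spark with a Delta Lake write (illustrative assumptions).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("orders-batch-etl")
         .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
         .config("spark.sql.catalog.spark_catalog",
                 "org.apache.spark.sql.delta.catalog.DeltaCatalog")
         .getOrCreate())

# Hypothetical raw input; a real job would read from S3/ADLS or a source table.
raw = spark.createDataFrame(
    [("o-1", "IN", 120.0), ("o-2", "US", 75.5), ("o-3", "IN", 310.2)],
    ["order_id", "country", "amount"],
)

# Simple aggregation step: total order amount per country.
daily = raw.groupBy("country").agg(F.sum("amount").alias("total_amount"))

# Write the result as a Delta table partitioned by country.
(daily.write.format("delta")
 .mode("overwrite")
 .partitionBy("country")
 .save("/tmp/gold/daily_totals"))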

Posted 3 weeks ago

Apply

5.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data team. The ideal candidate will have deep expertise in Snowflake, dbt (Data Build Tool), and Python, with a strong understanding of data architecture, transformation pipelines, and data quality principles. You will be instrumental in building and maintaining scalable data pipelines and enabling data-driven decision-making across the organization.

Key Responsibilities
- Design, develop, and maintain scalable and efficient ETL/ELT pipelines using dbt, Snowflake, and Python.
- Optimize data models and warehouse performance in Snowflake.
- Collaborate with data analysts, scientists, and business teams to understand data needs and deliver high-quality datasets.
- Ensure data quality, governance, and compliance across pipelines.
- Automate data workflows and monitor production jobs to ensure accuracy and reliability.
- Participate in architectural decisions and advocate for best practices in data engineering.
- Maintain documentation of data pipelines, transformations, and data models.
- Mentor junior engineers and contribute to team knowledge sharing.

Required Skills & Qualifications
- 5+ years of professional experience in Data Engineering.
- Strong hands-on experience with Snowflake (data modeling, performance tuning, security features).
- Proven experience using dbt for data transformation and modeling.
- Proficiency in Python for data engineering tasks and scripting.
- Solid understanding of SQL and experience in building and maintaining complex queries.
- Experience with orchestration tools (e.g., Airflow, Prefect) is a plus.
- Familiarity with version control systems like Git.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork abilities.

Preferred Qualifications
- Experience working with cloud platforms like AWS, Azure, or GCP.
- Knowledge of data lake architecture and real-time streaming technologies.
- Exposure to CI/CD pipelines for data deployment.
- Experience in agile development methodologies.

(ref:hirist.tech)
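
As an illustration of the data-quality work this role covers, here is a minimal Python sketch that runs basic checks against Snowflake with the snowflake-connector-python package. The connection details, table, and column names are invented for the example; in a dbt project, checks like these are usually expressed as built-in not_null and unique tests instead.

# Minimal data-quality check sketch against Snowflake (illustrative names throughout).
import os

import snowflake.connector

conn = snowflake.connector.connect(
    account=os.environ["SNOWFLAKE_ACCOUNT"],
    user=os.environ["SNOWFLAKE_USER"],
    password=os.environ["SNOWFLAKE_PASSWORD"],
    warehouse="ANALYTICS_WH",   # hypothetical warehouse
    database="ANALYTICS",       # hypothetical database
    schema="MARTS",             # hypothetical schema
)

try:
    cur = conn.cursor()
    # Fail loudly if the primary key column contains NULLs or duplicates.
    cur.execute("SELECT COUNT(*) FROM ORDERS WHERE ORDER_ID IS NULL")
    (null_count,) = cur.fetchone()
    cur.execute("SELECT COUNT(*) - COUNT(DISTINCT ORDER_ID) FROM ORDERS")
    (dup_count,) = cur.fetchone()
    if null_count or dup_count:
        raise ValueError(
            f"Quality check failed: {null_count} NULL ids, {dup_count} duplicates")
    print("ORDERS passed basic quality checks")
finally:
    conn.close()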

Posted 3 weeks ago

Apply