
2514 Airflow Jobs - Page 16

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


About Beco
Beco (letsbeco.com) is a fast-growing Mumbai-based consumer-goods company on a mission to replace everyday single-use plastics with planet-friendly, bamboo- and plant-based alternatives. From reusable kitchen towels to biodegradable garbage bags, we make sustainable living convenient, affordable and mainstream. Our founding story began with a Mumbai beach clean-up that opened our eyes to the decades-long life of a single plastic wrapper—sparking our commitment to “Be Eco” every day. Our mission: “To craft, support and drive positive change with sustainable & eco-friendly alternatives—one Beco product at a time.” Backed by marquee climate-focused VCs and now 50+ employees, we are scaling rapidly across India’s top marketplaces, retail chains and D2C channels.

Why we’re hiring
Sustainability at scale demands operational excellence. As volumes explode, we need data-driven, self-learning systems that eliminate manual grunt work, unlock efficiency and delight customers. You will be the first dedicated AI/ML Engineer at Beco—owning the end-to-end automation roadmap across Finance, Marketing, Operations, Supply Chain and Sales.

Responsibilities
Partner with functional leaders to translate business pain points into AI/ML solutions and automation opportunities. Own the complete lifecycle: data discovery, cleaning, feature engineering, model selection, training, evaluation, deployment and monitoring. Build robust data pipelines (SQL/BigQuery, Spark) and APIs to integrate models with ERP, CRM and marketing automation stacks. Stand up CI/CD and MLOps (Docker, Kubernetes, Airflow, MLflow, Vertex AI/SageMaker) for repeatable training and one-click releases. Establish data-quality, drift-detection and responsible-AI practices (bias, transparency, privacy). Mentor analysts and engineers; evangelise a culture of experimentation and “fail-fast” learning—core to Beco’s GSD (“Get Sh#!t Done”) values.

Must-have Qualifications
3+ years hands-on experience delivering ML, data-science or intelligent-automation projects in production. Proficiency in Python (pandas, scikit-learn, PyTorch/TensorFlow) and SQL; solid grasp of statistics, experimentation and feature engineering. Experience building and scaling ETL/data pipelines on cloud (GCP, AWS or Azure). Familiarity with modern Gen-AI & NLP stacks (OpenAI, Hugging Face, RAG, vector databases). Track record of collaborating with cross-functional stakeholders and shipping iteratively in an agile environment.

Nice-to-haves
Exposure to e-commerce or FMCG supply-chain data. Knowledge of finance workflows (reconciliation, AR/AP, FP&A) or RevOps tooling (HubSpot, Salesforce). Experience with vision models (Detectron2, YOLO) and edge deployment. Contributions to open-source ML projects or published papers/blogs.

What Success Looks Like After 1 Year
70% reduction in manual reporting hours across finance and ops. Forecast accuracy > 85% at SKU level, slashing stock-outs by 30%. AI chatbot resolves 60% of tickets end-to-end, with CSAT > 4.7/5. At least two new data products launched that directly boost topline or margin.

Life at Beco
Purpose-driven team obsessed with measurable climate impact. An entrepreneurial, accountable, bold culture—where winning minds precede outside victories.
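As a rough illustration of the tracked model training this posting's MLOps stack (scikit-learn plus MLflow) implies, here is a minimal sketch. The dataset, column names, and hyperparameters are assumptions for illustration only, not Beco's actual pipeline.

```python
# Minimal sketch: an MLflow-tracked demand-forecast training run.
# The CSV path, feature columns, and model choice are hypothetical.
import mlflow
import mlflow.sklearn
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_percentage_error
from sklearn.model_selection import train_test_split

df = pd.read_csv("sku_weekly_sales.csv")           # hypothetical extract
X = df[["week_of_year", "price", "promo_flag"]]    # hypothetical features
y = df["units_sold"]
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

with mlflow.start_run(run_name="sku_demand_forecast"):
    model = RandomForestRegressor(n_estimators=200, random_state=42)
    model.fit(X_train, y_train)
    mape = mean_absolute_percentage_error(y_test, model.predict(X_test))
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("mape", mape)           # feeds the forecast-accuracy goal above
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for later deployment
```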

Posted 5 days ago

Apply

5.0 years

0 Lacs

India

On-site


Maximize Your Impact with TP
Welcome to TP, a global hub of innovation and empowerment, where we redefine the future. With a remarkable €10 billion annual revenue and a global team of 500,000 employees serving 170 countries in over 300 languages, we lead in intelligent, digital-first solutions. As a globally certified Great Place to Work in 72 countries, our culture thrives on diversity, equity, and inclusion. We value your unique perspective and believe that your talent is the missing piece that completes our vision for a brighter, digitally driven tomorrow.

The Opportunity
The AI Data Engineer designs, develops, and maintains robust data pipelines to support AI data services operations, ensuring smooth ingestion, transformation, and extraction of large, multilingual, and multimodal datasets. This role collaborates with cross-functional teams to optimize data workflows, implement quality checks, and deliver scalable solutions that underpin our analytics and AI/ML initiatives.

The Responsibilities
Create and manage ETL workflows using Python and relevant libraries (e.g., Pandas, NumPy) for high-volume data processing. Monitor and optimize data workflows to reduce latency, maximize throughput, and ensure high-quality data availability. Work with Platform Operations, QA, and Analytics teams to guarantee seamless data integration and consistent data accuracy. Implement validation processes and address anomalies or performance bottlenecks in real time. Develop REST API integrations and Python scripts to automate data exchanges with internal systems and BI dashboards. Maintain comprehensive technical documentation, data flow diagrams, and best-practice guidelines.

The Qualifications
Bachelor’s degree in Computer Science, Data Engineering, Information Technology, or a related field. Relevant coursework in Python programming, database management, or data integration techniques. 3–5 years of professional experience in data engineering, ETL development, or similar roles. Proven track record of building and maintaining scalable data pipelines. Experience working with SQL databases (e.g., MySQL, PostgreSQL) and NoSQL solutions (e.g., MongoDB). AWS Certified Data Analytics – Specialty, Google Cloud Professional Data Engineer, or similar certifications are a plus. Advanced Python proficiency with data libraries (Pandas, NumPy, etc.). Familiarity with ETL/orchestration tools (e.g., Apache Airflow). Understanding of REST APIs and integration frameworks. Experience with version control (Git) and continuous integration practices. Exposure to cloud-based data solutions (AWS, Azure, or GCP) is advantageous.

Pre-Employment Screenings
By TP policy, employment in this position will be contingent on your successful completion and passage of a comprehensive background check, including global sanctions and watch list screening.

Important | Policy on Unsolicited Third-Party Candidate Submissions
TP does not accept candidate submissions from unsolicited third parties, including recruiters or headhunters. Applications will not be considered, and no contractual association will be established through such submissions.

Diversity, Equity & Inclusion
At TP, we are committed to fostering a diverse, equitable, and inclusive workplace. We welcome individuals from all backgrounds and lifestyles and do not discriminate based on gender identity or expression, sexual orientation, race, religion, age, national origin, citizenship, disability, pregnancy status, veteran status, or other differences.
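For illustration, a minimal pandas-based ETL step with a simple validation check, in the spirit of the workflows this role describes. File names, column names, and the parquet output are assumptions made up for the sketch.

```python
# Minimal pandas ETL sketch: extract, transform, validate, load.
# Paths and column names are hypothetical.
import pandas as pd

def extract(path: str) -> pd.DataFrame:
    return pd.read_csv(path)

def transform(df: pd.DataFrame) -> pd.DataFrame:
    df = df.dropna(subset=["record_id"])            # drop rows missing the key
    df["language"] = df["language"].str.lower()     # normalise a text field
    df["processed_at"] = pd.Timestamp.now(tz="UTC") # add an audit column
    return df

def validate(df: pd.DataFrame) -> None:
    if df.empty:
        raise ValueError("Transformed dataset is empty")
    if df["record_id"].duplicated().any():
        raise ValueError("Duplicate record_id values found")

def load(df: pd.DataFrame, path: str) -> None:
    df.to_parquet(path, index=False)  # requires pyarrow or fastparquet

if __name__ == "__main__":
    frame = transform(extract("raw_annotations.csv"))
    validate(frame)
    load(frame, "clean_annotations.parquet")
```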

Posted 5 days ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


We are hiring for immediate joiners. This is a remote-mode job.

Job Title: GCP Data Engineer (Google Cloud Platform)
Experience: 4+ years
Location: Chennai (Hybrid)

Responsibilities
Google Cloud Platform - BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, API. 2+ years in GCP services - BigQuery, Dataflow, Dataproc, Dataplex, Data Fusion, Terraform, Tekton, Cloud SQL, Redis Memorystore, Airflow, Cloud Storage. 2+ years in data transfer utilities. 2+ years in Git or any other version control tool. 2+ years in Confluent Kafka. 1+ years of experience in API development. 2+ years in an Agile framework. 4+ years of strong experience in Python and PySpark development. 4+ years of shell scripting to develop ad-hoc jobs for data importing/exporting.
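To illustrate the BigQuery-plus-Airflow combination this stack lists, here is a hedged sketch of a daily DAG that runs one BigQuery transformation. The DAG id, dataset, table names, and SQL are assumptions; project and connection configuration are assumed to already exist in the Airflow environment.

```python
# Illustrative Airflow DAG running a daily BigQuery aggregation.
# Dataset/table names and SQL are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="daily_sales_aggregation",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    aggregate_sales = BigQueryInsertJobOperator(
        task_id="aggregate_sales",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_sales AS
                    SELECT order_date, SUM(amount) AS total_amount
                    FROM raw.orders
                    GROUP BY order_date
                """,
                "useLegacySql": False,
            }
        },
    )
```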

Posted 5 days ago

Apply

3.0 years

0 Lacs

Delhi, India

On-site


Job Title: GenAI / ML Engineer
Function: Research & Development
Location: Delhi/Bangalore (3 days in office)

About the Company:
Elucidata is a TechBio company headquartered in San Francisco. Our mission is to make life sciences data AI-ready. Elucidata’s LLM-powered platform, Polly, helps research teams wrangle, store, manage and analyze large volumes of biomedical data. We are at the forefront of driving GenAI in life sciences R&D across leading BioPharma companies like Pfizer, Janssen, NextGen Jane and many more. We were recognised as the 'Most Innovative Biotech Company, 2024' by Fast Company. We are a 120+ multi-disciplinary team of experts based across the US and India. In September 2022, we raised $16 million in our Series A round led by Eight Roads, F-Prime, and our existing investors Hyperplane and IvyCap.

About the Role:
We are looking for a GenAI / ML Engineer to join our R&D team and work on cutting-edge applications of LLMs in biomedical data processing. In this role, you'll help build and scale intelligent systems that can extract, summarize, and reason over biomedical knowledge from large bodies of unstructured text, including scientific publications, EHR/EMR reports, and more. You’ll work closely with data scientists, biomedical domain experts, and product managers to design and implement reliable GenAI-powered workflows — from rapid prototypes to production-ready solutions. This is a highly strategic role as we continue to invest in agentic AI systems and LLM-native infrastructure to power the next generation of biomedical applications.

Key Responsibilities:
Build and maintain LLM-powered pipelines for entity extraction, ontology normalization, Q&A, and knowledge graph creation using tools like LangChain, LangGraph, and CrewAI. Fine-tune and deploy open-source LLMs (e.g., LLaMA, Gemma, DeepSeek, Mistral) for biomedical applications. Define evaluation frameworks to assess accuracy, efficiency, hallucinations, and long-term performance; integrate human-in-the-loop feedback. Collaborate cross-functionally with data scientists, bioinformaticians, product teams, and curators to build impactful AI solutions. Stay current with the LLM ecosystem and drive adoption of cutting-edge tools, models, and methods.

Qualifications:
2–3 years of experience as an ML engineer, data scientist, or data engineer working on NLP or information extraction. Strong Python programming skills and experience building production-ready codebases. Hands-on experience with LLM frameworks and tooling (e.g., LangChain, Hugging Face, OpenAI APIs, Transformers). Familiarity with one or more LLM families (e.g., LLaMA, Mistral, DeepSeek, Gemma) and prompt engineering best practices. Strong grasp of ML/DL fundamentals and experience with tools like PyTorch or TensorFlow. Ability to communicate ideas clearly, iterate quickly, and thrive in a fast-paced, product-driven environment.

Good to Have (Preferred but Not Mandatory)
Experience working with biomedical or clinical text (e.g., PubMed, EHRs, trial data). Exposure to building autonomous agents using CrewAI or LangGraph. Understanding of knowledge graph construction and integration with LLMs. Experience with evaluation challenges unique to GenAI workflows (e.g., hallucination detection, grounding, traceability). Experience with fine-tuning, LoRA, PEFT, or using embeddings and vector stores for retrieval. Working knowledge of cloud platforms (AWS/GCP) and MLOps tools (MLflow, Airflow, etc.). Contributions to open-source LLM or NLP tooling.

We are proud to be an equal-opportunity workplace and are an affirmative action employer. We are committed to equal employment opportunities regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity, or Veteran status.
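As a hedged sketch of the LLM-based entity-extraction step this role describes, the snippet below prompts a hosted model to pull biomedical entities out of free text. The prompt wording, model name, and JSON keys are assumptions, and the OpenAI client is used only as one possible backend; a production pipeline would add schema enforcement and retry handling.

```python
# Hedged sketch: biomedical entity extraction with an LLM.
# Prompt, model name, and output schema are illustrative assumptions.
import json
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Extract gene, disease, and drug mentions from the text below. "
    "Return only JSON with keys 'genes', 'diseases', 'drugs'.\n\nText: {text}"
)

def extract_entities(text: str, model: str = "gpt-4o-mini") -> dict:
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": PROMPT.format(text=text)}],
        temperature=0,
    )
    # A real pipeline would validate this output; json.loads can fail if the
    # model returns prose instead of strict JSON.
    return json.loads(response.choices[0].message.content)

if __name__ == "__main__":
    abstract = "TP53 mutations are frequently observed in patients treated with cisplatin."
    print(extract_entities(abstract))
```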

Posted 5 days ago

Apply

5.0 years

6 - 9 Lacs

Hyderābād

On-site


Job Description
Our company is an innovative, global healthcare leader that is committed to improving health and well-being around the world with a diversified portfolio of prescription medicines, vaccines and animal health products. We continue to focus our research on conditions that affect millions of people around the world - diseases like Alzheimer's, diabetes and cancer - while expanding our strengths in areas like vaccines and biologics. Our ability to excel depends on the integrity, knowledge, imagination, skill, diversity and teamwork of an individual like you. To this end, we strive to create an environment of mutual respect, encouragement and teamwork. As part of our global team, you’ll have the opportunity to collaborate with talented and dedicated colleagues while developing and expanding your career. As a Digital Supply Chain Data Modeler/Engineer, you will work as a member of the Digital Manufacturing Division team supporting the Enterprise Orchestration Platform. You will be responsible for identifying, assessing, and solving complex business problems related to manufacturing and supply chain. You will receive training to achieve this, and you’ll be amazed at the diversity of opportunities to develop your potential and grow professionally. You will collaborate with business stakeholders and determine analytical capabilities that will enable the creation of insights-focused solutions that align to business needs and ensure that delivery of these solutions meets quality requirements.

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organization driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats. Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Responsibilities
As Data Modeler Lead, you will be responsible for the following: Deliver divisional analytics initiatives with a primary focus on data modeling for all analytics, advanced analytics and AI/ML use cases, e.g. self-service, Business Intelligence & Analytics, data exploration, data wrangling, etc. Host and lead requirement/process workshops to understand the requirements of data modeling. Analyze business requirements and work with the architecture team to deliver and contribute to feasibility analysis, implementation plans and high-level estimates. Based on business process and analysis of data sources, deliver detailed ETL design with mapping of the data model covering all areas of data warehousing for all analytics use cases. Create the data model and transformation mapping in a modeling tool and deploy to databases, including creation of scheduled orchestration jobs. Deploy data modeling configuration to target systems (SIT, UAT & Prod). Understanding of product ownership and management; lead the data model as a product for focus areas of the digital supply chain domain. Create required SDLC documentation as per project requirements. Optimize/industrialize existing database and data transformation solutions. Prepare and update data modeling and data warehousing best practices along with foundational platforms. Work very closely with foundational product teams, business, vendors and technology support teams to deliver business initiatives.

Position Qualifications
Education Minimum Requirement: B.S. or M.S. in IT, Engineering, Computer Science, or related field.

Required Experience and Skills: 5+ years of relevant work experience, with demonstrated expertise in data modeling in DWH, Data Mesh or any analytics-related implementation; experience in implementing end-to-end DWH solutions involving creating the DWH design and deploying the solution. 3+ years of experience in creating logical and physical data models in any modeling tool (SAP PowerDesigner, WhereScape, etc.). Experience in creating data modeling standards, best practices and implementation processes. High proficiency in information management, data analysis and reporting requirement elicitation. Experience working with extracting business rules to develop transformations, data lineage, and dimensional data modeling. Experience working with validating legacy and developed data model outputs. Development experience using WhereScape and similar ETL/data modeling tools. Exposure to Qlik or similar BI dashboarding applications. Advanced knowledge of SQL and data transformation practices. Deep understanding of data modelling and preparation of optimal data structures. Able to communicate with business, data transformation and reporting teams. Knowledge of ETL methods, and a willingness to learn ETL technologies. Can fluently communicate in English. Experience in Redshift or similar databases using DDL, DML, query optimization, schema management, security, etc. Experience with Airflow or similar orchestration tools. Exposure to CI/CD tools. Exposure to AWS modules such as S3, AWS Console, Glue, Spectrum, etc. Independently support business discussions, analyze, and develop/deliver code.

Preferred Experience and Skills: Experience working on projects where Agile methodology is leveraged. Understanding of data management best practices and data analytics. Ability to lead requirements sessions with clients and project teams. Strong leadership, verbal and written communication skills with the ability to articulate results and issues to internal and client teams. Demonstrated experience in the Life Science space. Exposure to SAP and Rapid Response domain data is a plus.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Agile Data Warehousing, Agile Methodology, Animal Vaccination, Business, Business Communications, Business Initiatives, Business Intelligence (BI), Computer Science, Database Administration, Data Engineering, Data Management, Data Modeling, Data Visualization, Data Warehousing (DW), Design Applications, Digital Supply Chain, Digital Supply Chain Management, Digital Transformation, Information Management, Information Technology Operations, Software Development, Software Development Life Cycle (SDLC), Supply Chain Optimization, Supply Management, System Designs
Preferred Skills:
Job Posting End Date: 07/31/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R352794

Posted 5 days ago

Apply

2.0 - 4.0 years

0 Lacs

Gurugram, Haryana, India

On-site


MongoDB’s mission is to empower innovators to create, transform, and disrupt industries by unleashing the power of software and data. We enable organizations of all sizes to easily build, scale, and run modern applications by helping them modernize legacy workloads, embrace innovation, and unleash AI. Our industry-leading developer data platform, MongoDB Atlas, is the only globally distributed, multi-cloud database and is available in more than 115 regions across AWS, Google Cloud, and Microsoft Azure. Atlas allows customers to build and run applications anywhere—on premises, or across cloud providers. With offices worldwide and over 175,000 new developers signing up to use MongoDB every month, it’s no wonder that leading organizations, like Samsung and Toyota, trust MongoDB to build next-generation, AI-powered applications. Summary As an Analytics Engineer at MongoDB, you will play a critical role in leveraging data to drive informed decision-making and simplify end user engagement across our most critical data sets. You will be responsible for designing, developing, and maintaining robust analytics solutions, ensuring data integrity, and enabling data-driven insights across all of MongoDB. This role requires an analytical thinker with strong technical expertise to contribute to the growth and success of the entire business. We are looking to speak to candidates who are based in Gurugram for our hybrid working model. Responsibilities Design, implement, and maintain highly performant data post-processing pipelines Create shared data assets that will act as the company’s source-of-truth for critical business metrics Partner with analytics stakeholders to curate analysis-ready datasets and augment the generation of actionable insights Partner with data engineering to expose governed datasets to the rest of the organization Make impactful contributions to our analytics infrastructure, systems, and tools Create and manage documentation, and conduct knowledge sharing sessions to proliferate tribal knowledge and best practices Maintain consistent planning and tracking of work in JIRA tickets Skills & Attributes Bachelor’s degree (or equivalent) in mathematics, computer science, information technology, engineering, or related discipline 2-4 years of relevant experience Strong Proficiency in SQL and experience working with relational databases Solid understanding of data modeling and ETL processes Proficiency in Python for data manipulation and analysis Familiarity with CI/CD concepts and experience with managing codebases with git Experience managing ETL and data pipeline orchestration with dbt and Airflow Familiarity with basic command line functions Experience translating project requirements into a set of technical sub-tasks that build towards a final deliverable Committed to continuous improvement, with a passion for building processes/tools to make everyone more efficient The ability to effectively collaborate cross-functionally to drive actionable and measurable results A passion for AI as an enhancing tool to improve workflows, increase productivity, and generate smarter outcomes Strong communication skills to document technical processes clearly and lead knowledge-sharing efforts across teams A desire to constantly learn and improve themselves To drive the personal growth and business impact of our employees, we’re committed to developing a supportive and enriching culture for everyone. 
From employee affinity groups, to fertility assistance and a generous parental leave policy, we value our employees’ wellbeing and want to support them along every step of their professional and personal journeys. Learn more about what it’s like to work at MongoDB, and help us make an impact on the world! MongoDB is committed to providing any necessary accommodations for individuals with disabilities within our application and interview process. To request an accommodation due to a disability, please inform your recruiter. MongoDB is an equal opportunities employer. Req ID - 2263168254
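For illustration of the dbt-plus-Airflow orchestration mentioned in this posting's skills list, a minimal DAG that runs dbt models and then dbt tests is sketched below. The project paths, DAG id, and schedule are assumptions, not MongoDB's actual setup.

```python
# Illustrative Airflow DAG that runs dbt models, then dbt tests.
# Paths and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

DBT_DIR = "/opt/analytics/dbt"  # hypothetical project location

with DAG(
    dag_id="dbt_daily_refresh",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    dbt_run = BashOperator(
        task_id="dbt_run",
        bash_command=f"dbt run --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_test = BashOperator(
        task_id="dbt_test",
        bash_command=f"dbt test --project-dir {DBT_DIR} --profiles-dir {DBT_DIR}",
    )
    dbt_run >> dbt_test  # tests only run after the models build successfully
```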

Posted 5 days ago

Apply

6.0 years

0 Lacs

India

On-site


JOB DESCRIPTION

Key Responsibilities:
- Prior experience in migrating from IBM DataStage to DBT and BigQuery, or similar data migration activities into cloud solutions.
- Design and implement modular, testable, and scalable DBT models aligned with business logic and performance needs.
- Optimize and manage BigQuery datasets, partitioning, clustering, and cost-efficient querying.
- Collaborate with stakeholders to understand existing pipelines and translate them into modern ELT workflows.
- Establish best practices for version control, CI/CD, testing, and documentation in DBT.
- Provide technical leadership and mentorship to team members during the migration process.
- Ensure high standards of data quality, governance, and security.

Required Qualifications:
- 6+ years of experience in data engineering, with at least 3+ years hands-on with DBT and BigQuery.
- Strong understanding of SQL, data warehousing, and ELT architecture.
- Experience with data modeling (especially dimensional modeling) and performance tuning in BigQuery.
- Familiarity with legacy ETL tools like IBM DataStage and the ability to reverse-engineer existing pipelines.
- Proficiency in Git, CI/CD pipelines, and DataOps practices.
- Excellent communication skills and ability to work independently and collaboratively.

Preferred Qualifications:
- Experience in cloud migration projects (especially GCP).
- Knowledge of data governance, access control, and cost optimization in BigQuery.
- Exposure to orchestration tools like Airflow.
- Familiarity with Agile methodologies and cross-functional team collaboration.

Posted 5 days ago

Apply

2.0 - 3.0 years

0 - 0 Lacs

Cochin

On-site


About Us:
G3 Interactive is a Kochi-based software development company now expanding into AI, data engineering, and advanced analytics due to growing customer demand. We’re seeking a Data Engineer / Data Scientist with a strong foundation in data processing and analytics — and experience with Databricks is a significant advantage.

What You’ll Do:
Build and manage scalable data pipelines and ELT workflows using modern tools. Design and implement predictive models using Python (pandas, scikit-learn). Leverage platforms like Databricks for distributed processing and ML workflows. Collaborate with internal and client teams to deliver data-driven solutions. Create insightful dashboards using Metabase or other BI tools. Maintain high-quality standards in data management and governance.

Must-Have Skills:
2–3 years’ experience in Data Science, Data Engineering, or a similar role. Strong Python skills with expertise in pandas and data structures. Experience writing efficient SQL queries. Understanding of ML models, statistical analysis, and data wrangling. Excellent communication and collaboration skills.

Bonus / Advantage:
Experience with Databricks (ETL, notebooks, ML pipelines, Delta Lake, etc.). Familiarity with Spark, Airflow, AWS, or Google Cloud. Knowledge of CI/CD for data workflows.

Job Types: Full-time, Permanent
Pay: ₹25,000.00 - ₹35,000.00 per month
Schedule: Day shift
Education: Master's (Preferred)
Experience: data engineering: 2 years (Preferred)
Work Location: In person
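A minimal sketch of the pandas plus scikit-learn predictive-modelling workflow mentioned above, assuming a hypothetical churn-style dataset; the file name, columns, and target are illustrative, not a real G3 Interactive project.

```python
# Minimal sketch: a preprocessing + classification pipeline with scikit-learn.
# Dataset, columns, and target are hypothetical.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.read_csv("customer_activity.csv")           # hypothetical input
numeric = ["sessions_last_30d", "avg_order_value"]
categorical = ["plan_type", "region"]

preprocess = ColumnTransformer([
    ("num", StandardScaler(), numeric),
    ("cat", OneHotEncoder(handle_unknown="ignore"), categorical),
])
model = Pipeline([("prep", preprocess), ("clf", LogisticRegression(max_iter=1000))])

X_train, X_test, y_train, y_test = train_test_split(
    df[numeric + categorical], df["churned"], test_size=0.2, random_state=42
)
model.fit(X_train, y_train)
print("holdout accuracy:", model.score(X_test, y_test))
```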

Posted 5 days ago

Apply

5.0 years

8 - 9 Lacs

Gurgaon

On-site


You Lead the Way. We’ve Got Your Back.
At American Express, we know that with the right backing, people and businesses have the power to progress in incredible ways. Whether we’re supporting our customers’ financial confidence to move ahead, taking commerce to new heights, or encouraging people to explore the world, our colleagues are constantly redefining what’s possible — and we’re proud to back each other every step of the way. When you join #TeamAmex, you become part of a diverse community of over 60,000 colleagues, all with a common goal to deliver an exceptional customer experience every day. We back our colleagues with the support they need to thrive, professionally and personally. That’s why we have Amex Flex, our enterprise working model that provides greater flexibility to colleagues while ensuring we preserve the important aspects of our unique in-person culture. We are building an energetic, high-performance team with a nimble and creative mindset to drive our technology and products. American Express (AXP) is a powerful brand, a great place to work and has unparalleled scale. Join us for an exciting opportunity in Marketing Technology within American Express Technologies.

How will you make an impact in this role?
There are hundreds of opportunities to make your mark on technology and life at American Express. Here's just some of what you'll be doing: As a part of our team, you will be developing innovative, high-quality, and robust operational engineering capabilities. Develop software in our technology stack, which is constantly evolving but currently includes Big Data, Spark, Python, Scala, GCP, and the Adobe suite (like Customer Journey Analytics). Work with business partners and stakeholders to understand functional requirements, architecture dependencies, and business capability roadmaps. Create technical solution designs to meet business requirements. Define best practices to be followed by the team. Take your place as a core member of an Agile team driving the latest development practices. Identify and drive reengineering opportunities, and opportunities for adopting new technologies and methods. Suggest and recommend solution architecture to resolve business problems. Perform peer code review and participate in technical discussions with the team on the best solutions possible. As part of our diverse tech team, you can architect, code and ship software that makes us an essential part of our customers' digital lives. Here, you can work alongside talented engineers in an open, supportive, inclusive environment where your voice is valued, and you make your own decisions on what tech to use to solve challenging problems. American Express offers a range of opportunities to work with the latest technologies and encourages you to back the broader engineering community through open source. And because we understand the importance of keeping your skills fresh and relevant, we give you dedicated time to invest in your professional development. Find your place in technology of #TeamAmex.

Minimum Qualifications:
BS or MS degree in computer science, computer engineering, or other technical discipline, or equivalent work experience. 5+ years of hands-on software development experience with Big Data & Analytics solutions – Hadoop, Hive, Spark, Scala, Python, shell scripting, GCP Cloud BigQuery, Bigtable, Airflow. Working knowledge of the Adobe suite, such as Adobe Experience Platform, Adobe Customer Journey Analytics, CDP. Proficiency in SQL and database systems, with experience in designing and optimizing data models for performance and scalability. Design and development experience with Kafka, real-time ETL pipelines, and APIs is desirable. Experience in designing, developing, and optimizing data pipelines for large-scale data processing, transformation, and analysis using Big Data and GCP technologies. Certifications in a cloud platform (GCP Professional Data Engineer) is a plus. Understanding of distributed (multi-tiered) systems, data structures, algorithms & design patterns. Strong object-oriented programming skills and design patterns. Experience with CI/CD pipelines, automated test frameworks, and source code management tools (XLR, Jenkins, Git, Maven). Good knowledge and experience with configuration management tools like GitHub. Ability to analyze complex data engineering problems, propose effective solutions, and implement them effectively. Looks proactively beyond the obvious for continuous improvement opportunities. Communicates effectively with product and cross-functional teams.

We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries. Bonus incentives. Support for financial well-being and retirement. Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location). Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need. Generous paid parental leave policies (depending on your location). Free access to global on-site wellness centers staffed with nurses and doctors (depending on location). Free and confidential counseling support through our Healthy Minds program. Career development and training opportunities.

American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
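For illustration of the kind of Spark batch transformation this stack implies, a hedged PySpark sketch follows. The storage paths, event schema, and metrics are assumptions made up for the example; session configuration is assumed to be handled by the platform.

```python
# Hedged PySpark sketch: roll up raw campaign events into daily metrics.
# Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("campaign_response_rollup").getOrCreate()

events = spark.read.parquet("gs://marketing-raw/campaign_events/")  # hypothetical path

daily = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_type").isin("impression", "click"))
    .groupBy("event_date", "campaign_id")
    .agg(
        F.count(F.when(F.col("event_type") == "impression", True)).alias("impressions"),
        F.count(F.when(F.col("event_type") == "click", True)).alias("clicks"),
    )
    .withColumn("ctr", F.col("clicks") / F.col("impressions"))
)

# Partitioning by date keeps downstream reads cheap for daily reporting.
daily.write.mode("overwrite").partitionBy("event_date").parquet(
    "gs://marketing-curated/campaign_daily/"
)
```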

Posted 5 days ago

Apply

3.0 years

0 Lacs

Chennai

On-site


Do you want to work on complex and pressing challenges—the kind that bring together curious, ambitious, and determined leaders who strive to become better every day? If this sounds like you, you’ve come to the right place. Your Impact As a Data Engineer I at McKinsey & Company, you will play a key role in designing, building, and deploying scalable data pipelines and infrastructure that enable our analytics and AI solutions. You will work closely with product managers, developers, asset owners, and client stakeholders to turn raw data into trusted, structured, and high-quality datasets used in decision-making and advanced analytics. Your core responsibilities will include: Developing robust, scalable data pipelines for ingesting, transforming, and storing data from multiple structured and unstructured sources using Python/SQL. Creating and optimizing data models and data warehouses to support reporting, analytics, and application integration. Working with cloud-based data platforms (AWS, Azure, or GCP) to build modern, efficient, and secure data solutions. Contributing to R&D projects and internal asset development. Contributing to infrastructure automation and deployment pipelines using containerization and CI/CD tools. Collaborating across disciplines to integrate data engineering best practices into broader analytical and generative AI (gen AI) workflows. Supporting and maintaining data assets deployed in client environments with a focus on reliability, scalability, and performance. Furthermore, you will have opportunity to explore and contribute to solutions involving generative AI, such as vector embeddings, retrieval-augmented generation (RAG), semantic search, and LLM-based prompting, especially as we integrate gen AI capabilities into our broader data ecosystem. Your Growth Driving lasting impact and building long-term capabilities with our clients is not easy work. You are the kind of person who thrives in a high performance/high reward culture - doing hard things, picking yourself up when you stumble, and having the resilience to try another way forward. In return for your drive, determination, and curiosity, we'll provide the resources, mentorship, and opportunities you need to become a stronger leader faster than you ever thought possible. Your colleagues—at all levels—will invest deeply in your development, just as much as they invest in delivering exceptional results for clients. Every day, you'll receive apprenticeship, coaching, and exposure that will accelerate your growth in ways you won’t find anywhere else. When you join us, you will have: Continuous learning: Our learning and apprenticeship culture, backed by structured programs, is all about helping you grow while creating an environment where feedback is clear, actionable, and focused on your development. The real magic happens when you take the input from others to heart and embrace the fast-paced learning experience, owning your journey. A voice that matters: From day one, we value your ideas and contributions. You’ll make a tangible impact by offering innovative ideas and practical solutions. We not only encourage diverse perspectives, but they are critical in driving us toward the best possible outcomes. Global community: With colleagues across 65+ countries and over 100 different nationalities, our firm’s diversity fuels creativity and helps us come up with the best solutions for our clients. Plus, you’ll have the opportunity to learn from exceptional colleagues with diverse backgrounds and experiences. 
World-class benefits: On top of a competitive salary (based on your location, experience, and skills), we provide a comprehensive benefits package, which includes medical, dental, mental health, and vision coverage for you, your spouse/partner, and children.

Your qualifications and skills
Bachelor’s degree in computer science, engineering, mathematics, or a related technical field (or equivalent practical experience). 3+ years of experience in data engineering, analytics engineering, or a related technical role. Strong Python programming skills with demonstrated experience building scalable data workflows and ETL/ELT pipelines. Proficient in SQL with experience designing normalized and denormalized data models. Hands-on experience with orchestration tools such as Airflow, Kedro, or Azure Data Factory (ADF). Familiarity with cloud platforms (AWS, Azure, or GCP) for building and managing data infrastructure. Discernible communication skills, especially around breaking down complex structures into digestible and relevant points for a diverse set of clients and colleagues, at all levels. High-value personal qualities including critical thinking and creative problem-solving skills; an ability to influence and work in teams. Entrepreneurial mindset and ownership mentality are a must; desire to learn and develop within a dynamic, self-led organization. Hands-on experience with containerization technologies (Docker, Docker Compose). Hands-on experience with automation frameworks (GitHub Actions, CircleCI, Jenkins, etc.). Exposure to generative AI tools or concepts (e.g., OpenAI, Cohere, embeddings, vector databases). Experience working in Agile teams and contributing to design and architecture discussions. Contributions to open-source projects or active participation in data engineering communities.
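To illustrate the Python ETL orchestration experience listed above, here is a hedged sketch using Airflow's TaskFlow API. The source URL, file paths, and schedule are assumptions; a real pipeline would write to a warehouse rather than local files.

```python
# Hedged sketch: a three-step extract-transform-load DAG with Airflow's TaskFlow API.
# The API endpoint and file paths are hypothetical.
from datetime import datetime

import pandas as pd
from airflow.decorators import dag, task

@dag(schedule_interval="@daily", start_date=datetime(2024, 1, 1), catchup=False)
def client_orders_pipeline():
    @task
    def extract() -> str:
        raw_path = "/tmp/orders_raw.csv"
        # Hypothetical JSON API returning a list of order records.
        pd.read_json("https://example.com/api/orders").to_csv(raw_path, index=False)
        return raw_path

    @task
    def transform(raw_path: str) -> str:
        clean_path = "/tmp/orders_clean.parquet"
        df = pd.read_csv(raw_path).drop_duplicates(subset=["order_id"])
        df.to_parquet(clean_path, index=False)
        return clean_path

    @task
    def load(clean_path: str) -> None:
        # A warehouse load would go here; a local copy stands in for the sketch.
        pd.read_parquet(clean_path).to_csv("/tmp/orders_loaded.csv", index=False)

    load(transform(extract()))

client_orders_pipeline()
```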

Posted 5 days ago

Apply

2.0 - 5.0 years

0 - 0 Lacs

Sānand

On-site


HR Contact No. 6395012950 Job Title: Design Engineer – HVAC Manufacturing Location: Gujarat Department: Engineering/Design Reports To: MD Job Type: Full-Time Position Overview: We are seeking a talented and detail-oriented Design Engineer to join our engineering team in a dynamic HVAC manufacturing environment. The ideal candidate will have a strong background in mechanical design, proficiency in AutoCAD , and hands-on experience with nesting software for sheet metal fabrication. This role is critical to the development and production of high-quality HVAC components and systems, supporting product design, customization, and manufacturing optimization. Key Responsibilities: Design HVAC components and assemblies using AutoCAD/Nesting based on project specifications. Create and manage detailed 2D and 3D drawings, BOMs, and technical documentation. Prepare nesting layouts using nesting software for sheet metal cutting operations. Collaborate with production and fabrication teams to ensure manufacturability and cost-efficiency of designs. Modify and improve existing designs to meet performance and production requirements. Work with the Customers and Sales team to develop a quotable/manufacturing solution to the customer request. Ensure timely output drawings for customer approval Participate in new product development and R&D initiatives. Visiting Project sites as per requirement. Ensure all designs comply with industry standards and company quality procedures. Assist in resolving manufacturing and assembly issues related to design. Required Qualifications: Diploma or Bachelor's Degree in Mechanical Engineering, Manufacturing Engineering, or related field. Minimum of 2–5 years of experience in a design engineering role within a manufacturing environment, preferably HVAC. Proficiency in AutoCAD (2D required, 3D is a plus). Hands-on experience with nesting software (e.g., SigmaNEST, NestFab, or similar). Solid understanding of sheet metal fabrication processes and design principles. Strong analytical, problem-solving, and communication skills. Ability to interpret technical drawings and specifications. Experience working in a cross-functional team environment. Preferred Qualifications: Familiarity with HVAC system components and airflow principles. Experience with additional CAD/CAM software (e.g., SolidWorks, Inventor). Knowledge of lean manufacturing or value engineering practices. Job Type: Full-time Pay: ₹15,000.00 - ₹20,000.00 per month Benefits: Paid time off Provident Fund Schedule: Day shift Supplemental Pay: Yearly bonus Work Location: In person

Posted 5 days ago

Apply

5.0 years

7 - 15 Lacs

Ahmedabad

On-site


We are accepting applications for an experienced Data Engineer with a strong background in data scraping, cleaning, transformation, and automation. The ideal candidate will be responsible for building robust data pipelines, maintaining data integrity, and generating actionable dashboards and reports to support business decision-making.

Key Responsibilities:
Develop and maintain scripts for scraping data from various sources including APIs, websites, and databases. Perform data cleaning, transformation, and normalization to ensure consistency and usability across all data sets. Design and implement relational and non-relational data tables and frames for scalable data storage and analysis. Build automated data pipelines to ensure timely and accurate data availability. Create and manage interactive dashboards and reports using tools such as Power BI, Tableau, or similar platforms. Write and maintain data automation scripts to streamline ETL (Extract, Transform, Load) processes. Ensure data quality, governance, and compliance with internal and external regulations. Monitor and optimize the performance of data workflows and pipelines.

Qualifications & Skills:
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related field. Minimum of 5 years of experience in a data engineering or similar role. Proficient in Python (especially for data scraping and automation), with strong hands-on experience with Pandas, NumPy, and other data manipulation libraries. Experience with web scraping tools and techniques (e.g., BeautifulSoup, Scrapy, Selenium). Strong SQL skills and experience working with relational databases (e.g., PostgreSQL, MySQL) and data warehouses (e.g., Redshift, Snowflake, BigQuery). Familiarity with data visualization tools like Power BI, Tableau, or Looker. Knowledge of ETL tools and orchestration frameworks such as Apache Airflow, Luigi, or Prefect. Experience with version control systems like Git and collaborative platforms like Jira or Confluence. Strong understanding of data security, privacy, and governance best practices. Excellent problem-solving skills and attention to detail.

Preferred Qualifications:
Experience with cloud platforms such as AWS, GCP, or Azure. Familiarity with NoSQL databases like MongoDB, Cassandra, or Elasticsearch. Understanding of CI/CD pipelines and DevOps practices related to data engineering.

Job Type: Full-Time (In-Office)
Work Days: Monday to Saturday
Job Types: Full-time, Permanent
Pay: ₹700,000.00 - ₹1,500,000.00 per year
Schedule: Day shift
Work Location: In person
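For illustration of the scraping-and-cleaning step this role covers, a hedged sketch using requests and BeautifulSoup follows. The URL and the HTML structure being parsed are assumptions; real targets would also need throttling and terms-of-use checks.

```python
# Illustrative scrape-and-clean step: fetch a listing page, parse it, tidy the data.
# URL and CSS selectors are hypothetical.
import pandas as pd
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/products"  # hypothetical listing page

def scrape_products(url: str) -> pd.DataFrame:
    html = requests.get(url, timeout=30).text
    soup = BeautifulSoup(html, "html.parser")
    rows = []
    for card in soup.select("div.product-card"):  # assumed page structure
        rows.append({
            "name": card.select_one("h2.title").get_text(strip=True),
            "price": card.select_one("span.price").get_text(strip=True),
        })
    return pd.DataFrame(rows)

def clean(df: pd.DataFrame) -> pd.DataFrame:
    # Strip currency symbols and thousands separators, keep digits and the decimal point.
    df["price"] = df["price"].str.replace(r"[^\d.]", "", regex=True).astype(float)
    return df.drop_duplicates(subset=["name"])

if __name__ == "__main__":
    clean(scrape_products(URL)).to_csv("products_clean.csv", index=False)
```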

Posted 5 days ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


About The Role
We are seeking a highly skilled Data Engineer with deep expertise in PySpark and the Cloudera Data Platform (CDP) to join our data engineering team. As a Data Engineer, you will be responsible for designing, developing, and maintaining scalable data pipelines that ensure high data quality and availability across the organization. This role requires a strong background in big data ecosystems, cloud-native tools, and advanced data processing techniques. The ideal candidate has hands-on experience with data ingestion, transformation, and optimization on the Cloudera Data Platform, along with a proven track record of implementing data engineering best practices. You will work closely with other data engineers to build solutions that drive impactful business insights.

Responsibilities
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Education and Experience
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. 3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform.

Technical Skills
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.
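As a hedged sketch of the data-quality and validation routines described above, the snippet below gates a staging table on null keys, duplicates, and row count before publishing it. Table names and thresholds are assumptions for illustration.

```python
# Hedged PySpark sketch: a simple quality gate before publishing a curated table.
# Table names and thresholds are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_quality_gate").getOrCreate()

df = spark.read.table("staging.orders")  # assumed Hive/Impala-managed table

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicates = total - df.dropDuplicates(["order_id"]).count()

# Fail the job (and the orchestrating DAG) rather than publish bad data.
if total == 0 or null_keys > 0 or duplicates > total * 0.01:
    raise ValueError(
        f"Quality gate failed: rows={total}, null_keys={null_keys}, duplicates={duplicates}"
    )

df.write.mode("overwrite").saveAsTable("curated.orders")
```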

Posted 5 days ago

Apply

3.0 years

0 Lacs

Bangalore Urban, Karnataka, India

On-site


Role: Data Engineer
Key Skills: PySpark, Cloudera Data Platform, Big Data - Hadoop, Hive, Kafka

Responsibilities
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy. Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP. Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements. Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing runtime of ETL processes. Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline. Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.

Technical Skills
3+ years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques. Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase. Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala). Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools. Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks. Scripting and Automation: Strong scripting skills in Linux.

Posted 5 days ago

Apply

0 years

7 - 9 Lacs

Noida

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities: Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirement
Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark.
Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory skill sets: Spark, PySpark, Azure
Preferred skill sets: Spark, PySpark, Azure
Years of experience required: 4 - 8
Education qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Technology, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Microsoft Azure
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
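As a hedged sketch of the Azure Databricks ingestion pattern described in this posting, the snippet below reads raw JSON from ADLS and appends it to a Delta table. The storage path, table names, and partition column are assumptions; cluster authentication to the storage account is assumed to be configured.

```python
# Hedged sketch: PySpark ingestion on Azure Databricks, raw JSON to a Delta table.
# The abfss path and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # provided by the Databricks runtime

raw = spark.read.json("abfss://raw@examplelake.dfs.core.windows.net/events/")  # hypothetical path

cleaned = (
    raw
    .dropDuplicates(["event_id"])            # idempotent re-runs
    .withColumn("ingest_date", F.current_date())
)

(
    cleaned.write
    .format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("bronze.events")             # first layer of a medallion-style layout
)
```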

Posted 5 days ago

Apply

0 years

0 Lacs

Noida

On-site


Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Manager

Job Description & Summary
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions.

Why PwC
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us. At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm’s growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Job Description & Summary: A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities: Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirement
Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark. Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory skill sets: Spark, PySpark, Azure
Preferred skill sets: Spark, PySpark, Azure
Years of experience required: 8 - 12
Education qualification: B.Tech / M.Tech / MBA / MCA
Education (if blank, degree and/or field of study not specified)
Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration
Degrees/Field of Study preferred:
Certifications (if blank, certifications not specified)
Required Skills: Data Science
Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Coaching and Feedback, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling {+ 32 more}
Desired Languages (If blank, desired languages not specified)
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:

Posted 5 days ago

Apply

3.0 years

0 Lacs

Jaipur

On-site

GlassDoor logo

Data Engineer
Role Category: Programming & Design
Job Location: Jaipur, Rajasthan (on-site)
Experience Required: 3–6 Years

About the Role: We are looking for a highly skilled and motivated Data Engineer to join our team. You will be responsible for designing, building, and maintaining scalable data pipelines and infrastructure that support analytics, machine learning, and business intelligence initiatives across the company.

Key Responsibilities: Design, develop, and maintain robust ETL/ELT pipelines to ingest and process data from multiple sources. Build and maintain scalable and reliable data warehouses, data lakes, and data marts. Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions. Ensure data quality, integrity, and security across all data systems. Optimize data pipeline performance and troubleshoot issues in a timely manner. Implement data governance and best practices in data management. Automate data validation, monitoring, and reporting processes.

Required Skills and Qualifications: Bachelor's or Master’s degree in Computer Science, Engineering, Information Systems, or a related field. Proven experience (3+ years) as a Data Engineer or in a similar role. Strong programming skills in Python, Java, or Scala. Proficiency with SQL and working knowledge of relational databases (e.g., PostgreSQL, MySQL). Hands-on experience with big data technologies (e.g., Spark, Hadoop). Familiarity with cloud platforms such as AWS, GCP, or Azure (e.g., S3, Redshift, BigQuery, Data Factory). Experience with orchestration tools like Airflow or Prefect. Knowledge of data modeling, warehousing, and architecture design principles. Strong problem-solving skills and attention to detail.

Posted 5 days ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Linkedin logo

When you join Verizon

You want more out of a career. A place to share your ideas freely — even if they’re daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What You’ll Be Doing... Designing and implementing ML model pipelines (batch and real-time) for efficient model training and serving/inference. Implementing and analyzing the performance of advanced algorithms (specifically deep learning-based ML models). Solving model inferencing failures/fallouts. Optimizing existing machine-learning model pipelines to ensure training/inferencing completes within the standard duration. Collaborating effectively with cross-functional teams to understand business needs and deliver impactful solutions. Contributing to developing robust and scalable distributed computing systems for large-scale data processing. Designing, developing, and implementing innovative AI/ML solutions using Python, CI/CD, and public cloud platforms. Implementing a model performance metrics pipeline for predictive models, covering different types of algorithms to adhere to Responsible AI.

What We’re Looking For... You’ll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Experience in batch model inferencing and real-time model serving. Knowledge of frameworks such as BentoML, TensorFlow Serving (TFX), or Triton. Solid expertise in GCP Cloud ML tech stacks such as BigQuery, Dataproc, Airflow, Cloud Functions, Spanner, and Dataflow. Very good experience with languages such as Python and PySpark. Expertise in distributed computation and multi-node distributed model training. Good understanding of GPU usage management. Experience with Ray Core and Ray Serve (batch and real-time models). Experience in CI/CD practices.

Even better if you have one or more of the following: GCP certifications or any cloud certification in AI/ML or data. If Verizon and this role sound like a fit for you, we encourage you to apply even if you don’t meet every “even better” qualification listed above. #AI&D

Where you’ll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.

Posted 5 days ago

Apply

2.0 - 7.0 years

4 - 8 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

Naukri logo

Job Summary: We are looking for a highly capable and automation-driven MLOps Engineer with 2+ years of experience in building and managing end-to-end ML infrastructure. This role focuses on operationalizing ML pipelines using tools like DVC, MLflow, Kubeflow, and Airflow, while ensuring efficient deployment, versioning, and monitoring of machine learning and Generative AI models across GPU-based cloud infrastructure (AWS/GCP). The ideal candidate will also have experience in multi-modal orchestration, model drift detection, and CI/CD for ML systems.

Key Responsibilities: Develop, automate, and maintain scalable ML pipelines using tools such as Kubeflow, MLflow, Airflow, and DVC. Set up and manage CI/CD pipelines tailored to ML workflows, ensuring reliable model training, testing, and deployment. Containerize ML services using Docker and orchestrate them using Kubernetes in both development and production environments. Manage GPU infrastructure and cloud-based deployments (AWS, GCP) for high-performance training and inference. Integrate Hugging Face models and multi-modal AI systems into robust deployment frameworks. Monitor deployed models for drift, performance degradation, and inference bottlenecks, enabling continuous feedback and retraining. Ensure proper model versioning, lineage, and reproducibility for audit and compliance. Collaborate with data scientists, ML engineers, and DevOps teams to build reliable and efficient MLOps systems. Support Generative AI model deployment with scalable architecture and automation-first practices.

Qualifications: 2+ years of experience in MLOps, DevOps for ML, or Machine Learning Engineering. Hands-on experience with MLflow, DVC, Kubeflow, Airflow, and CI/CD tools for ML. Proficiency in containerization and orchestration using Docker and Kubernetes. Experience with GPU infrastructure, including setup, scaling, and cost optimization on AWS or GCP. Familiarity with model monitoring, drift detection, and production-grade deployment pipelines. Good understanding of model lifecycle management, reproducibility, and compliance.

Preferred Qualifications: Experience deploying Generative AI or multi-modal models in production. Knowledge of Hugging Face Transformers, model quantization, and resource-efficient inference. Familiarity with MLOps frameworks and observability stacks. Experience with security, governance, and compliance in ML environments.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad

Posted 5 days ago

Apply

0.0 - 2.0 years

0 Lacs

India

On-site

Linkedin logo

We’re Hiring: Data Engineer

About The Job
Duration: 12 Months
Location: PAN India
Timings: Full Time (as per company timings)
Notice Period: within 15 days or immediate joiner
Experience: 0–2 years

Responsibilities / Job Description: Design, develop and maintain reliable automated data solutions based on the identification, collection and evaluation of business requirements, including but not limited to data models, database objects, stored procedures and views. Develop new and enhance existing data processing components (data ingest, data transformation, data store, data management, data quality). Support and troubleshoot the data environment (including periodic on-call duty). Document technical artifacts for developed solutions. Good interpersonal skills; comfort and competence in dealing with different teams within the organization. Requires an ability to interface with multiple constituent groups and build sustainable relationships. Versatile, creative temperament, with the ability to think out-of-the-box while defining sound and practical solutions. Ability to master new skills. Proactive approach to problem solving with effective influencing skills. Familiar with Agile practices and methodologies.

Education and Experience Requirements: Four-year degree in Information Systems, Finance/Mathematics, Computer Science or similar. 0–2 years of experience in Data Engineering.

Required Knowledge, Skills, or Abilities: Advanced SQL queries, scripts, stored procedures, materialized views, and views. Focus on ELT to load data into the database and perform transformations in the database. Ability to use analytical SQL functions. Snowflake experience a plus. Cloud data warehouse solutions experience (Snowflake, Azure DW, or Redshift); data modelling, analysis, programming. Experience with DevOps models utilizing a CI/CD tool. Work in a hands-on cloud environment on the Azure Cloud Platform (ADLS, Blob). Talend, Apache Airflow, Azure Data Factory, and BI tools like Tableau preferred. Analyse data models.

We are looking for a Senior Data Engineer for the Enterprise Data Organization to build and manage data pipelines (data ingest, data transformation, data distribution, quality rules, data storage, etc.) for an Azure cloud-based data platform. The candidate must possess strong technical, analytical, programming and critical thinking skills.

Posted 5 days ago

Apply

1.0 - 3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Linkedin logo

Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Mechanical Engineering

General Summary: As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Mechanical Engineer, you will design, analyze, troubleshoot, and test electro-mechanical systems and packaging. Qualcomm Engineers collaborate across functions to provide design information and complete project deliverables.

Minimum Qualifications: Bachelor's degree in Mechanical Engineering or related field.

Job Overview: The successful candidate will operate as a member of the Corporate Engineering Hyderabad department. Responsibilities include working with US and India teams to perform thermal and structural analysis of high-performance electronic assemblies. Specific tasks include daily use of thermal and structural analysis software, taking concept layouts from the design team, creating representative analytical models, defining boundary and loading conditions, running simulations, analyzing results, and making recommendations for optimization of designs. The Engineer will interface with internal staff and outside partners in the fast-paced execution of a variety of multi-disciplined projects.

Minimum Qualifications: Bachelor's / Master’s degree in Mechanical/Thermal/Electronic Engineering or a related field. 1–3 years actively involved in thermal and structural engineering analysis of high-density electronics packaging. Strong background in heat transfer fundamentals with a good understanding of electronics cooling technologies (passive and active). Knowledge of packaging technologies, electromechanical design, and thermal management materials. Analysis tools experience utilizing Flotherm, XT, Icepak, 6SigmaET, Celsius EC, Ansys, Abaqus, or equivalent. Solid modeling experience utilizing Pro/E or SolidWorks mechanical CAD systems. Proven ability to work independently and collaboratively within a cross-functional team environment. Strong technical documentation skills and excellent written and verbal communication.

Preferred Qualifications: Expected to possess a strong understanding of mechanical engineering and analysis fundamentals. Experience creating thermal and structural models of electronic circuit components, boards, and enclosures. Experience applying environmental spec conditions to analytical model boundary and loading conditions. Experience working with HW teams on component, board, and system thermal power estimates. Experience specifying appropriate fans and heat sinks for electronic assemblies. Experience working with design teams for optimization based on analysis results. Demonstrated success in working with HW teams on appropriate thermal mitigation techniques. Proficiency with thermal testing (e.g., LabVIEW, thermocouples, airflow measurements, thermal chambers, J-TAG) for computer hardware. Understands project goals and individual contribution toward those goals. Effectively communicates with project peers and engineering personnel via e-mail, web meetings, and instant messaging, including status reports and illustrative presentation slides. Excellent verbal and written communication skills. Interacts and collaborates with other internal mechanical and electronics engineers for optimal product development processes and schedule execution. Effectively multitasks and meets aggressive schedules in a dynamic environment. Prepares and delivers design reviews to the project team.

Education Requirements: Bachelor's / Master’s degree in Mechanical/Thermal/Electronic Engineering or a related field.

Keywords: Thermal analysis, electronics cooling, Flotherm, XT, Icepak, Celsius EC, thermal testing, thermal engineering, mechanical engineering.

Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)

Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.

To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications. If you would like more information about this role, please contact Qualcomm Careers.

3075823

Posted 5 days ago

Apply

5.0 years

0 Lacs

Jaipur, Rajasthan, India

On-site

Linkedin logo

About Hakkoda
Hakkoda, an IBM Company, is a modern data consultancy that empowers data-driven organizations to realize the full value of the Snowflake Data Cloud. We provide consulting and managed services in data architecture, data engineering, analytics and data science. We are renowned for bringing our clients deep expertise, being easy to work with, and being an amazing place to work! We are looking for curious and creative individuals who want to be part of a fast-paced, dynamic environment, where everyone’s input and efforts are valued. We hire outstanding individuals and give them the opportunity to thrive in a collaborative atmosphere that values learning, growth, and hard work. Our team is distributed across North America, Latin America, India and Europe. If you have the desire to be a part of an exciting, challenging, and rapidly-growing Snowflake consulting services company, and if you are passionate about making a difference in this world, we would love to talk to you!

We are seeking a skilled and collaborative Sr. Data/Python Engineer with experience in the development of production Python-based applications (such as Django, Flask, or FastAPI on AWS) to support our data platform initiatives and application development. This role will initially focus on building and optimizing Streamlit application development frameworks and CI/CD pipelines, ensuring code reliability through automated testing with Pytest, and enabling team members to deliver updates via CI/CD pipelines. Once the deployment framework is implemented, the Sr. Engineer will own and drive data transformation pipelines in dbt and implement a data quality framework.

Key Responsibilities: Lead application testing and productionalization of applications built on top of Snowflake; this includes implementation and execution of unit and integration testing, with automated test suites using Pytest and Streamlit App Tests to ensure code quality, data accuracy, and system reliability. Development and integration of CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI) for consistent deployments across dev, staging, and production environments. Development and testing of AWS-based pipelines: AWS Glue, Airflow (MWAA), S3. Design, develop, and optimize data models and transformation pipelines in Snowflake using SQL and Python. Build Streamlit-based applications to enable internal stakeholders to explore and interact with data and models. Collaborate with team members and application developers to align requirements and ensure secure, scalable solutions. Monitor data pipelines and application performance, optimizing for speed, cost, and user experience. Create end-user technical documentation and contribute to knowledge sharing across engineering and analytics teams. Work CST hours and collaborate with onshore and offshore teams.

Qualifications, Skills & Experience: 5+ years of experience in Data Engineering or Python-based application development on AWS (Flask, Django, FastAPI, Streamlit); experience building data-intensive applications in Python as well as data pipelines on AWS is a must. Bachelor’s degree in Computer Science, Information Systems, Data Engineering, or a related field (or equivalent experience). Proficient in SQL and Python for data manipulation and automation tasks. Experience developing and productionalizing applications built on Python-based frameworks such as FastAPI, Django, Flask. Experience with application frameworks such as Streamlit, Angular, React, etc. for rapid data app deployment. Solid understanding of software testing principles and experience using Pytest or similar Python frameworks. Experience configuring and maintaining CI/CD pipelines for automated testing and deployment. Familiarity with version control systems such as GitLab. Knowledge of data governance, security best practices, and role-based access control (RBAC) in Snowflake.

Preferred Qualifications: Experience with dbt (data build tool) for transformation modeling. Knowledge of Snowflake’s advanced features (e.g., masking policies, external functions, Snowpark). Exposure to cloud platforms (e.g., AWS, Azure, GCP). Strong communication and documentation skills.

Benefits: Health Insurance, Paid leave, Technical training and certifications, Robust learning and development opportunities, Incentive, Toastmasters, Food Program, Fitness Program, Referral Bonus Program.

Hakkoda is committed to fostering diversity, equity, and inclusion within our teams. A diverse workforce enhances our ability to serve clients and enriches our culture. We encourage candidates of all races, genders, sexual orientations, abilities, and experiences to apply, creating a workplace where everyone can succeed and thrive. Ready to take your career to the next level? 🚀 💻 Apply today 👇 and join a team that’s shaping the future!

Hakkoda has been acquired by IBM and will be integrated into the IBM organization; Hakkoda will be the hiring entity. By proceeding with this application, you understand that Hakkoda will share your personal information with other IBM subsidiaries involved in your recruitment process, wherever these are located. More information on how IBM protects your personal information, including the safeguards in case of cross-border data transfer, is available here.

Posted 5 days ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office

Naukri logo

Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
As a GCP Data Engineer at Kyndryl, you will be responsible for designing and developing data pipelines, participating in architectural discussions, and implementing data solutions in a cloud environment using GCP data services. You will collaborate with global architects and business teams to design and deploy innovative solutions, supporting data analytics, automation, and transformation needs.

Responsibilities: Design, develop, and maintain scalable data pipelines using GCP services such as BigQuery, Dataflow, Pub/Sub, and Cloud Storage. Participate in architectural discussions, conduct system analysis, and suggest optimal solutions that are scalable, future-proof, and aligned with business requirements. Collaborate with stakeholders to gather requirements and create high-level and detailed technical designs. Design data models suitable for both transactional and big data environments, supporting Machine Learning workflows. Build and optimize ETL/ELT infrastructure using a variety of data sources and GCP services. Develop and maintain Python/PySpark code for data processing and integrate with GCP services for seamless data operations. Develop and optimize SQL queries for data analysis and reporting. Monitor and troubleshoot data pipeline issues to ensure timely resolution. Implement data governance and security best practices within GCP. Perform data quality checks and validation to ensure accuracy and consistency. Support DevOps automation efforts to ensure smooth integration and deployment of data pipelines. Provide design expertise in Master Data Management (MDM), Data Quality, and Metadata Management. Provide technical support and guidance to junior data engineers and other team members. Participate in code reviews and contribute to continuous improvement of data engineering practices. Implement best practices for cost management and resource utilization within GCP.

If you're ready to embrace the power of data to transform our business and embark on an epic data adventure, then join us at Kyndryl. Together, let's redefine what's possible and unleash your potential.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won’t find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You’re good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you’re open and borderless – naturally inclusive in how you work with others.

Required Technical and Professional Experience: Bachelor’s or master’s degree in computer science, Engineering, or a related field with over 8 years of experience in data engineering. More than 3 years of experience with the GCP data ecosystem. Hands-on experience and strong proficiency in GCP components such as Dataflow, Dataproc, BigQuery, Cloud Functions, Composer, Data Fusion. Excellent command of SQL with the ability to write complex queries and perform advanced data transformation. Strong programming skills in PySpark and/or Python, specifically for building cloud-native data pipelines. Familiarity with GCP tools like Looker, Airflow DAGs, Data Studio, App Maker, etc. Hands-on experience implementing enterprise-wide cloud data lake and data warehouse solutions on GCP. Knowledge of data governance, security, and compliance best practices. Experience with private and public cloud architectures, pros/cons, and migration considerations. Excellent problem-solving, analytical, and critical thinking skills. Ability to manage multiple projects simultaneously, while maintaining a high level of attention to detail. Communication skills: must be able to communicate with both technical and nontechnical stakeholders and be able to derive technical requirements with them. Ability to work independently and in agile teams.

Preferred Technical and Professional Experience: GCP Data Engineer Certification is highly preferred. Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization. Experience working as a Data Engineer and/or in cloud modernization. Knowledge of Databricks, Snowflake, for data analytics. Experience in NoSQL databases. Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes). Familiarity with BI dashboards and Google Data Studio is a plus.

Being You
Diversity is a whole lot more than what we look like or where we come from; it’s how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we’re not doing it single-handedly: our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That’s the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked ‘How Did You Hear About Us’ during the application process, select ‘Employee Referral’ and enter your contact's Kyndryl email address.

Posted 5 days ago

Apply

1.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

Exciting Opportunity at Eloelo: Join the Future of Live Streaming and Social Gaming!

Are you ready to be a part of the dynamic world of live streaming and social gaming? Look no further! Eloelo, an innovative Indian platform founded in February 2020 by ex-Flipkart executives Akshay Dubey and Saurabh Pandey, is on the lookout for passionate individuals to join our growing team in Bangalore.

About Us: Eloelo stands at the forefront of multi-host video and audio rooms, offering a unique blend of interactive experiences, including chat rooms, PK challenges, audio rooms, and captivating live games like Lucky 7, Tambola, Tol Mol Ke Bol, and Chidiya Udd. Our platform has successfully attracted audiences from all corners of India, providing a space for social connections and immersive gaming.

Recent Milestone: In pursuit of excellence, Eloelo secured a significant milestone by raising $22Mn in October 2023 from a diverse group of investors, including Lumikai, Waterbridge Capital, Courtside Ventures, Griffin Gaming Partners, and other esteemed new and existing contributors.

Why Eloelo? Be a part of a team that thrives on creativity and innovation in the live streaming and social gaming space. Rub shoulders with the stars! Eloelo regularly hosts celebrities such as Akash Chopra, Kartik Aryan, Rahul Dua, Urfi Javed, and Kiku Sharda from the Kapil Sharma Show; that's the level of celebrity collaboration you can expect. Work with a world-class, high-performance team that constantly pushes boundaries and limits and redefines what is possible. Enjoy fun and work in the same place, with an amazing work culture, flexible timings, and a vibrant atmosphere.

We are looking to hire a business analyst to join our growth analytics team. This role sits at the intersection of business strategy, marketing performance, creative experimentation, and customer lifecycle management, with a growing focus on AI-led insights. You’ll drive actionable insights to guide our performance marketing, creative strategy, and lifecycle interventions, while also building scalable analytics foundations for a fast-moving growth team.

About the Role: We are looking for a highly skilled and creative Data Scientist to join our growing team and help drive data-informed decisions across our entertainment platforms. You will leverage advanced analytics, machine learning, and predictive modeling to unlock insights about our audience, content performance, and product engagement—ultimately shaping the way millions of people experience entertainment.

Key Responsibilities: Develop and deploy machine learning models to solve key business problems (e.g., personalization, recommendation systems, churn prediction). Analyze large, complex datasets to uncover trends in content consumption, viewer preferences, and engagement behaviors. Partner with product, marketing, engineering, and content teams to translate data insights into actionable strategies. Design and execute A/B and multivariate experiments to evaluate the impact of new features and campaigns. Build dashboards and visualizations to monitor key metrics and provide stakeholders with self-service analytics tools. Collaborate on the development of audience segmentation, lifetime value modeling, and predictive analytics. Stay current with emerging technologies and industry trends in data science and entertainment.

Qualifications: Master’s or PhD in Computer Science, Statistics, Mathematics, Data Science, or a related field. 1+ years of experience as a Data Scientist, ideally within media, streaming, gaming, or entertainment tech. Proficiency in programming languages such as Python or R. Strong SQL skills and experience working with large-scale datasets and data warehousing tools (e.g., Snowflake, BigQuery, Redshift). Experience with machine learning libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch). Solid understanding of experimental design and statistical analysis techniques. Ability to clearly communicate complex technical findings to non-technical stakeholders.

Preferred Qualifications: Experience building recommendation engines, content-ranking algorithms, or personalization models in an entertainment context. Familiarity with user analytics tools such as Mixpanel, Amplitude, or Google Analytics. Prior experience with data pipeline and workflow tools (e.g., Airflow, dbt). Background in natural language processing (NLP), computer vision, or audio analysis is a plus.

Why Join Us: Shape the future of how audiences engage with entertainment through data-driven storytelling. Work with cutting-edge technology on high-impact, high-visibility projects. Join a collaborative team in a dynamic and fast-paced environment where creativity meets data science.

Posted 5 days ago

Apply

15.0 years

0 Lacs

Nagpur, Maharashtra, India

On-site

Linkedin logo

Job Title: Tech Lead (AI/ML) – Machine Learning & Generative AI
Location: Nagpur (Hybrid / On-site)
Experience: 8–15 years
Employment Type: Full-time

Job Summary: We are seeking a highly experienced Python Developer with a strong background in traditional Machine Learning and growing proficiency in Generative AI to join our AI Engineering team. This role is ideal for professionals who have delivered scalable ML solutions and are now expanding into LLM-based architectures, prompt engineering, and GenAI productization. You’ll be working at the forefront of applied AI, driving both model performance and business impact across diverse use cases.

Key Responsibilities: Design and develop ML-powered solutions for use cases in classification, regression, recommendation, and NLP. Build and operationalize GenAI solutions, including fine-tuning, prompt design, and RAG implementations using models such as GPT, LLaMA, Claude, or Gemini. Develop and maintain FastAPI-based services that expose AI models through secure, scalable APIs. Lead data modeling, transformation, and end-to-end ML pipelines, from feature engineering to deployment. Integrate with relational (MySQL) and vector databases (e.g., ChromaDB, FAISS, Weaviate) to support semantic search, embedding stores, and LLM contexts. Mentor junior team members and review code, models, and system designs for robustness and maintainability. Collaborate with product, data science, and infrastructure teams to translate business needs into AI capabilities. Optimize model and API performance, ensuring high availability, security, and scalability in production environments.

Core Skills & Experience: Strong Python programming skills with 5+ years of applied ML/AI experience. Demonstrated experience building and deploying models using TensorFlow, PyTorch, scikit-learn, or similar libraries. Practical knowledge of LLMs and GenAI frameworks, including Hugging Face, OpenAI, or custom transformer stacks. Proficient in REST API design using FastAPI and securing APIs in production environments. Deep understanding of MySQL (query performance, schema design, transactions). Hands-on experience with vector databases and embeddings for search, retrieval, and recommendation systems. Strong foundation in software engineering practices: version control (Git), testing, CI/CD.

Preferred/Bonus Experience: Deployment of AI solutions on cloud platforms (AWS, GCP, Azure). Familiarity with MLOps tools (MLflow, Airflow, DVC, SageMaker, Vertex AI). Experience with Docker, Kubernetes, and container orchestration. Understanding of prompt engineering, tokenization, LangChain, or multi-agent orchestration frameworks. Exposure to enterprise-grade AI applications in BFSI, healthcare, or regulated industries is a plus.

What We Offer: Opportunity to work on a cutting-edge AI stack integrating both classical ML and advanced GenAI. High autonomy and influence in architecting real-world AI solutions. A dynamic and collaborative environment focused on continuous learning and innovation.

Posted 5 days ago

Apply

Exploring Airflow Jobs in India

The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Apache Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities across industries such as technology, e-commerce, finance, and more. A minimal example of what such a workflow looks like is sketched below.
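To make that concrete, here is a minimal sketch of an Airflow DAG, assuming Airflow 2.x; the DAG and task names (daily_sales_pipeline, extract, transform) are illustrative only and not taken from any listing above. It defines two Python tasks, schedules them daily, and declares the dependency between them.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull raw records from a source system (API, database, files).
    print("extracting raw data")


def transform():
    # Placeholder: clean and aggregate the extracted records.
    print("transforming data")


with DAG(
    dag_id="daily_sales_pipeline",   # hypothetical name for illustration
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",      # this argument is named `schedule` in Airflow 2.4+
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Run extract before transform.
    extract_task >> transform_task
```

Placed in the DAGs folder, the scheduler picks this file up and triggers one run per day; the `>>` operator is how task-to-task dependencies are declared.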

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for Airflow professionals in India varies by experience level:

  • Entry-level: INR 6–8 lakhs per annum
  • Mid-level: INR 10–15 lakhs per annum
  • Experienced: INR 18–25 lakhs per annum

Career Path

In the field of Airflow, a typical career path may progress as follows:

  1. Junior Airflow Developer
  2. Airflow Developer
  3. Senior Airflow Developer
  4. Airflow Tech Lead

Related Skills

In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in the following areas (a small ETL sketch combining several of them follows this list):

  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing
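As a rough illustration of how these skills combine in practice, here is a minimal, hypothetical ETL function; the column names, the daily_revenue table, and the connection URL are assumptions for the example, not requirements from any listing. It extracts a CSV with pandas, transforms it with a group-by, and loads the result into a SQL database via SQLAlchemy.

```python
import pandas as pd
from sqlalchemy import create_engine


def run_daily_etl(csv_path: str, db_url: str) -> int:
    """Extract raw orders from a CSV, aggregate revenue per day, load into SQL."""
    # Extract: read the raw export (column names are assumed for illustration).
    raw = pd.read_csv(csv_path, parse_dates=["order_date"])

    # Transform: drop incomplete rows and compute revenue per calendar day.
    daily = (
        raw.dropna(subset=["order_id", "amount"])
           .assign(day=lambda df: df["order_date"].dt.date)
           .groupby("day", as_index=False)["amount"]
           .sum()
           .rename(columns={"amount": "revenue"})
    )

    # Load: append into a warehouse table; the target (Postgres, Redshift, etc.)
    # depends on the SQLAlchemy dialect behind db_url.
    engine = create_engine(db_url)
    daily.to_sql("daily_revenue", engine, if_exists="append", index=False)
    return len(daily)
```

In production, a function like this would typically be wrapped in an Airflow task so that scheduling, retries, and monitoring come from the orchestrator rather than cron.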

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced) (see the sketch after this list)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
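
To ground a couple of the advanced questions above (custom operators and XCom), here is a minimal sketch, assuming Airflow 2.x; the operator, DAG, and task names are hypothetical. A custom operator subclasses BaseOperator and implements execute(); whatever execute() returns is pushed to XCom automatically, so a downstream task can pull it.

```python
from datetime import datetime

from airflow import DAG
from airflow.models.baseoperator import BaseOperator
from airflow.operators.python import PythonOperator


class RowCountOperator(BaseOperator):
    """Toy custom operator: pretends to count rows in a table."""

    def __init__(self, table_name: str, **kwargs):
        super().__init__(**kwargs)
        self.table_name = table_name

    def execute(self, context):
        # A real operator would query the database through a hook here.
        row_count = 42  # placeholder value for the sketch
        self.log.info("Table %s has %d rows", self.table_name, row_count)
        return row_count  # returned values are pushed to XCom under "return_value"


def report(**context):
    # Pull the value the upstream task pushed to XCom.
    count = context["ti"].xcom_pull(task_ids="count_rows")
    print(f"row count pulled from XCom: {count}")


with DAG(
    dag_id="xcom_demo",              # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval=None,          # triggered manually
    catchup=False,
) as dag:
    count_rows = RowCountOperator(task_id="count_rows", table_name="orders")
    report_count = PythonOperator(task_id="report", python_callable=report)

    count_rows >> report_count
```

In an interview, a sketch like this can be extended to discuss sensors (operators that wait for a condition), failure handling via retries and on_failure_callback, or alternative XCom backends for larger payloads.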

Closing Remark

As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay up to date with the latest developments in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies