2514 Airflow Jobs - Page 37

JobPe aggregates job listings for easy access; you apply directly on the original job portal.

2.0 - 5.0 years

0 - 0 Lacs

Ahmedabad

On-site

Source: GlassDoor

Job Summary: We are looking for an experienced HVAC Technician with specialized knowledge of Air Handling Units (AHUs) to join our mechanical team. The technician will be responsible for the installation, maintenance, troubleshooting, and repair of AHUs and other HVAC equipment in commercial and industrial settings. The ideal candidate will have a strong mechanical background and hands-on experience with ventilation and air distribution systems.

Key Responsibilities:
- Install, inspect, and maintain Air Handling Units (AHUs) and associated mechanical components (motors, filters, dampers, belts, fans, coils, etc.).
- Diagnose and repair AHU faults, including airflow issues, temperature control problems, and motor failures.
- Perform preventive maintenance and servicing of AHUs, chillers, FCUs, and ducting systems.
- Monitor system performance and adjust controls to optimize efficiency.
- Replace or clean filters, coils, and fan assemblies per maintenance schedules.
- Calibrate sensors, actuators, and other HVAC control devices.
- Ensure compliance with all mechanical safety regulations and HVAC standards.
- Interpret mechanical drawings, schematics, and service manuals.
- Maintain records of inspections, repairs, and maintenance work.
- Coordinate with facility teams or contractors during AHU upgrades or overhauls.

Requirements:
- ITI / Diploma in Mechanical Engineering or HVAC Technician certification.
- Minimum 2–5 years of experience with AHUs and HVAC systems in a mechanical role.
- Proficiency in identifying and resolving AHU issues (airflow, pressure, temperature, mechanical wear).
- Knowledge of HVAC codes, health and safety regulations, and mechanical systems.
- Ability to work independently and handle tools and equipment safely.
- Strong documentation and communication skills.

Preferred Qualifications:
- Experience with cleanroom, pharmaceutical, hospital, or commercial building AHU systems.
- Familiarity with BMS (Building Management Systems) and VFDs (Variable Frequency Drives).
- Certification in HVACR systems or AHU operation (as per local standards).

Work Conditions:
- May involve working in mechanical rooms, on rooftops, or in confined spaces.
- Physical activity including lifting, climbing ladders, and standing for long periods.
- Occasional overtime, shift work, or emergency call-outs.

Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹30,000.00 per month
Schedule: Day shift / Morning shift
Application Question(s): How many years of experience do you have with AHUs?
Experience: HVAC: 3 years (Required)
Work Location: In person

Posted 1 week ago

Apply

13.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Reference #: 312601BR
Job Type: Full Time

Your role
Do you want to drive the next level of application architecture in the Compliance IT space? Would you like to apply Agile at scale while developing a state-of-the-art Model Deployment and Execution Platform on the cloud? How about shaping the future of Compliance technology together with a fun-loving cross-functional team?

We are looking for a seasoned Azure cloud data engineer to:
- develop curated data pipelines to load, transform, and egress data on the cloud (see the DAG sketch after this listing)
- help us build state-of-the-art solutions that underpin M&S surveillance models and alert generation, using technologies such as Scala, Spark, Databricks, Azure Data Lake, and Airflow
- demonstrate superior analytical and problem-solving skills
- demonstrate design skills and knowledge of various design patterns
- demonstrate superior collaboration skills, working closely with development, testing, and implementation teams to roll out important regulatory and business improvement programs

Your team
You will be a key member of a young and expanding team that is part of the Compliance Technology function. We are a small, friendly bunch who take pride in the quality of our work. As a team, we provide AI-based solutions on top of a big data platform, working closely with peers from the business-led data science team as well as other engineering teams.

Your expertise
- degree-level education, preferably in Computer Science (Bachelor's or Master's degree)
- 13+ years of hands-on design and development experience in several of the relevant technology areas (Scala, Azure Data Lake, Apache Spark, Airflow, Oracle/Postgres, etc.)
- strong coding skills in Scala and Spark (must have), with an understanding of ETL paradigms
- Agile, Test-Driven Development, and DevOps practices as part of your DNA
- experience developing applications using various design patterns
- experience working in MS Azure (added advantage)
- experience designing solutions from scratch
- experience working with global teams
- Python coding skills (added advantage)
- strong communication skills, both with senior management and with teams
- background in IB or the finance domain, with a good understanding of finance principles (added advantage)
- strong analytical and problem-solving skills
- a collaborative approach to problem solving, working closely with colleagues in the global team, and sensitivity towards diversity

About Us
UBS is the world's largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Your Career Comeback
We are open to applications from career returners. Find out more about our program on ubs.com/careercomeback.
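
The load, transform, egress pipeline this role describes is the shape Airflow DAGs typically take. Below is a minimal, hedged sketch of that pattern; the DAG id, task names, and placeholder callables are illustrative assumptions, not UBS's actual pipeline.

```python
# Minimal Airflow DAG sketching a load -> transform -> egress pipeline.
# All names here are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def load_raw(**context):
    """Ingest source extracts into the data lake (placeholder)."""


def transform_curated(**context):
    """Apply curation/transformation logic, e.g. via a Spark job (placeholder)."""


def egress(**context):
    """Publish curated data for downstream surveillance models (placeholder)."""


with DAG(
    dag_id="curated_data_pipeline_example",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(task_id="load", python_callable=load_raw)
    transform = PythonOperator(task_id="transform", python_callable=transform_curated)
    publish = PythonOperator(task_id="egress", python_callable=egress)

    # Task dependencies: each step runs only after the previous one succeeds
    load >> transform >> publish
```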

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

About the Role
We are seeking a highly skilled and versatile Senior AI Engineer with over 5 years of hands-on experience to join our client's team in Pune. This role focuses on designing, developing, and deploying cutting-edge AI and machine learning solutions for high-scale, high-concurrency applications where security, scalability, and performance are paramount. You will work closely with cross-functional teams, including data scientists, DevOps engineers, security specialists, and business stakeholders, to deliver robust AI solutions that drive measurable business impact in dynamic, large-scale environments.

Key Responsibilities
- Architect, develop, and deploy advanced machine learning and deep learning models across domains like NLP, computer vision, predictive analytics, or reinforcement learning, ensuring scalability and performance under high-traffic conditions.
- Preprocess, clean, and analyze large-scale structured and unstructured datasets using advanced statistical, ML, and big data techniques.
- Collaborate with data engineering and DevOps teams to integrate AI/ML models into production-grade pipelines, ensuring seamless operation under high concurrency.
- Optimize models for latency, throughput, accuracy, and resource efficiency, leveraging distributed computing and parallel processing where necessary.
- Implement robust security measures, including data encryption, secure model deployment, and adherence to compliance standards (e.g., GDPR, CCPA).
- Partner with client-side technical teams to translate complex business requirements into scalable, secure AI-driven solutions.
- Stay at the forefront of AI/ML advancements, experimenting with emerging tools, frameworks, and techniques (e.g., generative AI, federated learning, or AutoML).
- Write clean, modular, and maintainable code, along with comprehensive documentation and reports for model explainability, reproducibility, and auditability.
- Proactively monitor and maintain deployed models, ensuring reliability and performance in production environments with millions of concurrent users.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related technical field.
- 3 to 5 years of experience building and deploying AI/ML models in production environments with high-scale traffic and concurrency.
- Advanced proficiency in Python and modern AI/ML frameworks, including TensorFlow, PyTorch, Scikit-learn, and JAX.
- Hands-on expertise in at least two of the following domains: NLP, computer vision, time-series forecasting, or generative AI.
- Deep understanding of the end-to-end ML lifecycle, including data preprocessing, feature engineering, hyperparameter tuning, model evaluation, and deployment.
- Proven experience with cloud platforms (AWS, GCP, or Azure) and their AI/ML services (e.g., SageMaker, Vertex AI, or Azure ML).
- Strong knowledge of containerization (Docker, Kubernetes) and RESTful API development for secure and scalable model deployment; a minimal serving sketch follows this listing.
- Familiarity with secure coding practices, data privacy regulations, and techniques for safeguarding AI systems against adversarial attacks.

Preferred Skills
- Expertise in MLOps frameworks and tools such as MLflow, Kubeflow, or SageMaker for streamlined model lifecycle management.
- Hands-on experience with large language models (LLMs) or generative AI frameworks (e.g., Hugging Face Transformers, LangChain, or Llama).
- Proficiency in big data technologies and orchestration tools (e.g., Apache Spark, Airflow, or Kafka) for handling massive datasets and real-time pipelines.
- Experience with distributed training techniques (e.g., Horovod, Ray, or TensorFlow Distributed) for large-scale model development.
- Knowledge of CI/CD pipelines and infrastructure-as-code tools (e.g., Terraform, Ansible) for scalable and automated deployments.
- Familiarity with security frameworks and tools for AI systems, such as model hardening, differential privacy, or encrypted computation.
- Proven ability to work in global, client-facing roles, with strong communication skills to bridge technical and business teams.
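
As a concrete illustration of the "RESTful API development for model deployment" requirement above, here is a minimal, hedged serving sketch. The model artifact path and payload schema are assumptions for illustration, not the client's actual service.

```python
# Minimal REST serving sketch for a trained model (illustrative only).
import joblib
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
model = joblib.load("model.joblib")  # hypothetical pre-trained artifact


class Features(BaseModel):
    # Flat numeric feature vector; a real schema would name each field
    values: list[float]


@app.post("/predict")
def predict(features: Features) -> dict:
    X = np.asarray(features.values).reshape(1, -1)
    return {"prediction": float(model.predict(X)[0])}
```

Served locally with, e.g., `uvicorn main:app`; in production this kind of endpoint would sit inside the containerized, autoscaled deployment the listing describes.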

Posted 1 week ago

Apply

5.0 years

50 Lacs

Bhubaneswar, Odisha, India

Remote

Source: LinkedIn

Experience: 5.00+ years
Salary: INR 5000000.00 / year (based on experience)
Expected Notice Period: 15 days
Shift: (GMT+05:30) Asia/Kolkata (IST)
Opportunity Type: Remote
Placement Type: Full-time permanent position (payroll and compliance to be managed by Precanto)
(Note: This is a requirement for one of Uplers' clients - a fast-growing, VC-backed B2B SaaS platform revolutionizing financial planning and analysis for modern finance teams.)

What do you need for this opportunity?
Must-have skills: async workflows, MLOps, Ray Tune, Data Engineering, MLflow, supervised learning, time-series forecasting, Docker, machine learning, NLP, Python, SQL

About the client: We are a fast-moving startup building AI-driven solutions for the financial planning workflow. We're looking for a versatile Machine Learning Engineer to join our team and take ownership of building, deploying, and scaling intelligent systems that power our core product.

Job Description (Full-time; Team: Data & ML Engineering)
We're looking for 5+ years of experience as a Machine Learning or Data Engineer (startup experience is a plus).

What You Will Do
- Build and optimize machine learning models — from regression to time-series forecasting
- Work with data pipelines and orchestrate training/inference jobs using Ray, Airflow, and Docker
- Train, tune, and evaluate models using tools like Ray Tune, MLflow, and scikit-learn (a tuning sketch follows this listing)
- Design and deploy LLM-powered features and workflows
- Collaborate closely with product managers to turn ideas into experiments and production-ready solutions
- Partner with software and DevOps engineers to build robust ML pipelines and integrate them with the broader platform

Basic Skills
- Proven ability to work creatively and analytically in a problem-solving environment
- Excellent communication (written and oral) and interpersonal skills
- Strong understanding of supervised learning and time-series modeling
- Experience deploying ML models and building automated training/inference pipelines
- Ability to work cross-functionally in a collaborative and fast-paced environment
- Comfortable wearing many hats and owning projects end-to-end
- Write clean, tested, and scalable Python and SQL code
- Leverage async workflows and cloud-native infrastructure (S3, Docker, etc.) for high-throughput data processing

Advanced Skills
- Familiarity with MLOps best practices
- Prior experience with LLM-based features or production-level NLP
- Experience with LLMs, vector stores, or prompt engineering
- Contributions to open-source ML or data tools

Tech Stack
- Languages: Python, SQL
- Frameworks & tools: scikit-learn, Prophet, pyts, MLflow, Ray, Ray Tune, Jupyter
- Infra: Docker, Airflow, S3, asyncio, Pydantic

How to apply for this opportunity?
Step 1: Click on Apply and register or log in on our portal.
Step 2: Complete the screening form and upload an updated resume.
Step 3: Increase your chances of being shortlisted and meeting the client for an interview!

About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talent find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities on the portal; depending on the assessments you clear, you can apply for those as well.)

So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
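
The "train, tune, and evaluate" bullet above maps onto a small Ray Tune loop. The sketch below uses Ray Tune's classic `tune.run` API with a toy scikit-learn dataset; the search space, metric, and model are illustrative assumptions, not the client's actual workload.

```python
# Toy hyperparameter-tuning sketch with Ray Tune + scikit-learn
# (classic tune.run API; all values illustrative).
from ray import tune
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import cross_val_score

X, y = make_regression(n_samples=500, n_features=10, noise=0.1, random_state=0)


def trainable(config):
    model = RandomForestRegressor(
        n_estimators=config["n_estimators"],
        max_depth=config["max_depth"],
        random_state=0,
    )
    # Report mean cross-validated R^2 back to the tuner
    tune.report(r2=cross_val_score(model, X, y, cv=3).mean())


analysis = tune.run(
    trainable,
    config={
        "n_estimators": tune.grid_search([50, 100, 200]),
        "max_depth": tune.choice([4, 8, 16]),
    },
    metric="r2",
    mode="max",
)
print(analysis.best_config)
```

In a setup like the one described, each trial's parameters and metrics would additionally be logged to MLflow, and the whole run scheduled as an Airflow task.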

Posted 1 week ago

Apply

15.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Source: LinkedIn

We’re in an unbelievably exciting area of tech and are fundamentally reshaping the data storage industry. Here, you lead with innovative thinking, grow along with us, and join the smartest team in the industry. This type of work—work that changes the world—is what the tech industry was founded on. So, if you're ready to seize the endless opportunities and leave your mark, come join us.

About The Role
We are seeking a Senior Manager – Data & Analytics to lead enterprise-scale data science and analytics initiatives, focused on activating curated datasets and modernizing the organization's data infrastructure. This role will lead the strategy, design, and implementation of scalable analytics models in partnership with our enterprise data warehouse and big data platforms. The ideal candidate will combine deep technical expertise in data science and engineering with the business acumen to influence senior stakeholders and drive high-impact decisions.

Key Responsibilities
- Team Management: Direct and mentor the data science team to design, build, and deploy advanced analytics models and solutions.
- Data Pipelines: Design scalable pipelines and workflows for large-scale data processing with high reliability and performance.
- Model Development: Oversee development of ML/AI-driven predictive and prescriptive models with a focus on operationalization.
- Big Data Strategy: Drive scalable analytics solutions using Spark, Hadoop, Snowflake, S3, and cloud-native big data architectures.
- Code Optimization: Supervise automation and optimization of data integration and analysis workflows using SQL, Python, and modern tools.
- Cloud Management: Manage datasets on Snowflake and similar platforms with an emphasis on governance and best practices.
- Model Maintenance: Define practices for model monitoring, retraining, and documentation to ensure long-term relevance and compliance.
- Stakeholder Engagement: Collaborate with stakeholders to understand needs, prioritize projects, and create solutions that drive measurable outcomes.
- Continuous Improvement: Champion innovation by integrating emerging technologies and techniques into the team's toolkit, and drive a culture of continuous improvement by staying abreast of advancements in data science and integrating innovative methods into workflows.
- Mentorship: Foster growth, collaboration, and knowledge sharing within the data science team and across the broader analytics community.

Basic Qualifications
- Master's or Ph.D. with 15 years of experience, including 10+ years of relevant experience in data science, statistics, operational research, or a related field.
- Hands-on experience with machine learning models, both supervised and unsupervised, in large-scale production settings.
- Proficiency in Python, SQL, and modern ML frameworks.
- Extensive experience with big data technologies such as Hadoop, Spark, MapReduce, and Snowflake.
- Track record of translating data into business impact and influencing senior stakeholders.
- Strong foundation in data modeling and governance aligned with data warehouse best practices.
- Excellent written and verbal communication skills.

Preferred Qualifications
- Experience with orchestration tools (e.g., Airflow, dbt).
- Familiarity with BI/visualization tools such as Tableau, Looker, or Power BI.
- Experience working with cross-functional business units.
- Background in building and leading enterprise-level data science or advanced analytics programs.
- Understanding of ethical implications and governance practices related to data science and ML.

What You Can Expect From Us
- Pure Innovation: We celebrate those who think critically, like a challenge and aspire to be trailblazers.
- Pure Growth: We give you the space and support to grow along with us and to contribute to something meaningful. We have been named Fortune's Best Large Workplaces in the Bay Area™, Fortune's Best Workplaces for Millennials™ and certified as a Great Place to Work®!
- Pure Team: We build each other up and set aside ego for the greater good. And because we understand the value of bringing your full and best self to work, we offer a variety of perks to manage a healthy balance, including flexible time off, wellness resources and company-sponsored team events. Check out purebenefits.com for more information.

Accommodations And Accessibility
Candidates with disabilities may request accommodations for all aspects of our hiring process. For more on this, contact us at TA-Ops@purestorage.com if you're invited to an interview.

Where Differences Fuel Innovation
We're forging a future where everyone finds their rightful place and where every voice matters. Where uniqueness isn't just accepted but embraced. That's why we are committed to fostering the growth and development of every person, cultivating a sense of community through our Employee Resource Groups and advocating for inclusive leadership. At Pure Storage, diversity, equity, inclusion and sustainability are part of our DNA because we believe our people will shape the next chapter of our success story.

Pure Storage is proud to be an equal opportunity employer. We strongly encourage applications from Indigenous Peoples, racialized people, people with disabilities, people from gender and sexually diverse communities, and people with intersectional identities. We also encourage you to apply even if you feel you don't match all of the role criteria. If you think you can do the job and feel you're a good match, please apply.

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

Remote

Source: LinkedIn

When you join Verizon
You want more out of a career. A place to share your ideas freely — even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love — driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together — lifting our communities and building trust in how we show up, everywhere and always. Want in? Join the #VTeamLife.

What You'll Be Doing…
We are looking for data engineers who can work with world-class team members to help drive the telecom business to its full potential. We are building data products and assets for the telecom wireless and wireline business, including consumer analytics, telecom network performance, and service assurance analytics. We are working on cutting-edge technologies like digital twin to build these analytical platforms and provide data support for varied AI/ML implementations. As a data engineer, you will collaborate with business product owners, coaches, industry-renowned data scientists, and system architects to develop strategic data solutions from sources that include batch, file, and data streams.

As a Data Engineer with ETL/ELT expertise for our growing data platform and analytics teams, you will understand and enable the required data sets from different sources — both structured and unstructured — into our data warehouse and data lake, with real-time streaming and/or batch processing, to generate insights and perform analytics for business teams within Verizon.
- Understanding the business requirements and converting them to technical design.
- Working on data ingestion, preparation, and transformation.
- Developing data streaming applications.
- Debugging production failures and identifying solutions.
- Working on ETL/ELT development.
- Understanding the DevOps process and contributing to DevOps pipelines.

What We're Looking For...
You're curious about new technologies and the game-changing possibilities they create. You like to stay up to date with the latest trends and apply your technical expertise to solving business problems.

You'll need to have:
- Bachelor's degree or four or more years of work experience.
- Four or more years of work experience.
- Experience with data warehouse concepts and the data management life cycle.
- Experience in any DBMS.
- Experience in shell scripting, Spark, Scala.
- Experience in GCP/BigQuery, Composer, Airflow (a minimal DAG sketch follows this listing).
- Experience in real-time streaming.
- Experience in DevOps.

Even better if you have one or more of the following:
- Three or more years of relevant experience.
- Any relevant certification as an ETL/ELT developer.
- Certification as a GCP Data Engineer.
- Accuracy and attention to detail.
- Good problem-solving, analytical, and research capabilities.
- Good verbal and written communication.
- Experience presenting to and influencing stakeholders.
- Experience driving a small team of two or more members for technical delivery.

#AI&D

Where you'll be working
In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity
Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
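
For the GCP/BigQuery, Composer, and Airflow line above, a daily aggregation job on Cloud Composer might look like the following sketch. The DAG id, dataset, table names, and SQL are assumptions invented for illustration, not Verizon's actual code.

```python
# Minimal Composer/Airflow DAG running a daily BigQuery aggregation
# (all identifiers and SQL are hypothetical).
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import (
    BigQueryInsertJobOperator,
)

with DAG(
    dag_id="daily_network_kpis_example",  # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    aggregate = BigQueryInsertJobOperator(
        task_id="aggregate_network_metrics",
        configuration={
            "query": {
                "query": """
                    CREATE OR REPLACE TABLE analytics.daily_kpis AS
                    SELECT region,
                           DATE(event_ts) AS day,
                           AVG(latency_ms) AS avg_latency_ms
                    FROM raw.network_events
                    GROUP BY region, day
                """,
                "useLegacySql": False,
            }
        },
    )
```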

Posted 1 week ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote

Source: LinkedIn

Role Description

Job Title: Lead ML-Ops Engineer – GenAI & Scalable ML Systems
Location: Any UST
Job Type: Full-Time
Experience Level: Senior / Lead

Role Overview
We are seeking a Lead ML-Ops Engineer to spearhead the end-to-end operationalization of machine learning and Generative AI models across our platforms. You will play a pivotal role in building robust, scalable ML pipelines, embedding responsible AI governance, and integrating innovative GenAI techniques—such as Retrieval-Augmented Generation (RAG) and LLM-based applications—into real-world systems. You will collaborate with cross-functional teams of data scientists, data engineers, product managers, and business stakeholders to ensure AI solutions are production-ready, resilient, and aligned with strategic business goals. A strong background in Dataiku or similar platforms is highly preferred.

Key Responsibilities

Model Development & Deployment
- Design, implement, and manage scalable ML pipelines using CI/CD practices.
- Operationalize ML and GenAI models, ensuring high availability, observability, and reliability.
- Automate data and model validation, versioning, and monitoring processes (a model-registration sketch follows this listing).

Technical Leadership & Mentorship
- Act as a thought leader and mentor to junior engineers and data scientists on ML-Ops best practices.
- Define architecture standards and promote engineering excellence across ML-Ops workflows.

Innovation & Generative AI Strategy
- Lead the integration of GenAI capabilities such as RAG and large language models (LLMs) into applications.
- Identify opportunities to drive business impact through cutting-edge AI technologies and frameworks.

Governance & Compliance
- Implement governance frameworks for model explainability, bias detection, reproducibility, and auditability.
- Ensure compliance with data privacy, security, and regulatory standards in all ML/AI solutions.

Must-Have Skills
- 5+ years of experience in ML-Ops, data engineering, or machine learning.
- Proficiency in Python, Docker, Kubernetes, and cloud services (AWS/GCP/Azure).
- Hands-on experience with CI/CD tools (e.g., GitHub Actions, Jenkins, MLflow, or Kubeflow).
- Deep knowledge of ML pipeline orchestration, model lifecycle management, and monitoring tools.
- Experience with LLM frameworks (e.g., LangChain, Hugging Face Transformers) and GenAI use cases like RAG.
- Strong understanding of responsible AI and MLOps governance best practices.
- Proven ability to work cross-functionally and lead technical discussions.

Good-to-Have Skills
- Experience with Dataiku DSS or similar platforms (e.g., DataRobot, H2O.ai).
- Familiarity with vector databases (e.g., FAISS, Pinecone, Weaviate) for GenAI retrieval tasks.
- Exposure to tools like Apache Airflow, Argo Workflows, or Prefect for orchestration.
- Understanding of ML evaluation metrics in a production context (drift detection, data integrity checks).
- Experience in mentoring, technical leadership, or project ownership roles.

Why Join Us?
- Be at the forefront of AI innovation and shape how cutting-edge technologies drive business transformation.
- Join a collaborative, forward-thinking team with a strong emphasis on impact, ownership, and learning.
- Competitive compensation, remote flexibility, and opportunities for career advancement.

Skills: Artificial Intelligence, Python, ML-Ops
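
To make the "validation, versioning, and monitoring" responsibility concrete, here is a hedged sketch of logging and registering a model with MLflow, one of the CI/CD tools named above. The experiment name, model name, and toy training code are placeholder assumptions.

```python
# Hedged sketch: track a training run and register the model with MLflow.
# Experiment/model names and the toy model are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge

X, y = load_diabetes(return_X_y=True)
model = Ridge(alpha=1.0).fit(X, y)

mlflow.set_experiment("mlops-demo")  # hypothetical experiment
with mlflow.start_run() as run:
    mlflow.log_param("alpha", 1.0)
    mlflow.log_metric("train_r2", model.score(X, y))
    mlflow.sklearn.log_model(model, artifact_path="model")

# Registering creates a new model version that deployments can pin,
# promote through stages, and audit — the lifecycle-management piece.
mlflow.register_model(f"runs:/{run.info.run_id}/model", "demo-regressor")
```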

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Source: LinkedIn

Experience in the design and implementation of big data systems using PySpark, database migrations, transformation, and integration solutions for any data engineering project.
• Must have excellent knowledge of Apache Spark and the Python programming language.
• Deep experience in developing data processing tasks using PySpark, such as reading data from external sources, merging data, data enrichment, and loading into target destinations (a sketch of this pattern follows this listing).
• Should have experience in integrating PySpark with downstream and upstream applications through a real-time/batch processing interface.
• Should have experience in fine-tuning processes and troubleshooting performance issues.
• Experience in deployment of code and scheduling tools like Airflow, Control-M.
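
The read, merge, enrich, load pattern described in the bullets above is the core of such a role. Below is a minimal, hedged PySpark sketch; the bucket paths and column names are invented for illustration.

```python
# Minimal PySpark sketch: read external sources, merge, enrich, load.
# Bucket paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("enrichment_example").getOrCreate()

orders = spark.read.parquet("s3://bucket/raw/orders/")
customers = spark.read.parquet("s3://bucket/raw/customers/")

enriched = (
    orders.join(customers, on="customer_id", how="left")                  # merge
    .withColumn("order_value", F.col("quantity") * F.col("unit_price"))   # enrich
    .filter(F.col("order_value") > 0)                                     # cleanse
)

# Load into the target destination, partitioned for downstream reads
(
    enriched.write.mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://bucket/curated/orders_enriched/")
)
```

In production a job like this would typically be submitted on a schedule by Airflow or Control-M, as the last bullet notes.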

Posted 1 week ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are seeking a seasoned Systems Architect with significant expertise in DevOps and MLOps operational methodologies to join our team. The candidate will be central to optimizing our development cycles, enhancing our software and AI/ML operations, and leading the transformation of our systems architecture to maximize efficiency and reliability within the organization. This role involves direct hands-on experience in building, automating, and maintaining our AI/ML workflows, along with a strong foundation in cloud infrastructure and continuous integration and delivery systems.

Responsibilities
- Shorten development cycles for software and AI/ML systems
- Maintain tools and infrastructure for efficient AI/ML development
- Automate the AI/ML workflow, from data analysis and operationalization to training and visualization
- Build and oversee data pipelines for analytics, model evaluation, and training
- Train and re-train systems as needed
- Enhance and support the automated CI/CD pipeline
- Increase deployment velocity for models and data pipelines
- Construct and manage scalable infrastructure as code (IaC) in the cloud
- Collaborate with the engineering team for seamless development and maintenance of products
- Promote continuous learning and best practices within the team
- Undertake significant responsibilities from day one to directly influence our CI/CD+ infrastructure

Requirements
- 10+ years of strong experience in MLOps
- Solid experience in designing DevOps pipelines suitable for various developmental stages
- Experience in providing solutions for production issues on ML models and in handling monitoring/alerting systems
- Competency in selecting datasets and conducting statistical analysis of ML tests
- Proficiency in Python, Java, or similar programming languages
- Skills in shell scripting and Unix OS
- Solid understanding of software engineering good practices, particularly in DevOps
- Strong experience with cloud infrastructure on platforms such as GCP, AWS, or Azure; GCP preferred
- Familiarity with Apache Airflow, DVC, MLflow, or similar tools
- Excellent problem-solving, debugging, and interpersonal skills
- Ideally, a certification in Cloud DevOps, Architecture, or Engineering from a top cloud platform (AWS, Azure, GCP)
- Computer Science graduate (BTech/BE or higher)

Nice to have
- Familiarity with data science and ML concepts

Posted 1 week ago

Apply

4.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

We are looking for a dedicated and proficient Senior Data DevOps Engineer with extensive MLOps knowledge to enhance our team. The ideal candidate should possess comprehensive knowledge of data engineering, data pipeline automation, and machine learning model operationalization. The role demands a cooperative professional skilled in designing, deploying, and managing extensive data and ML pipelines in alignment with organizational objectives.

Responsibilities
- Develop, deploy, and manage Continuous Integration/Continuous Deployment (CI/CD) pipelines for data integration and machine learning model deployment
- Set up and sustain infrastructure for data processing and model training through cloud-based resources and services
- Automate processes for data validation, transformation, and workflow orchestration
- Work closely with data scientists, software engineers, and product teams for smooth integration of ML models into production
- Enhance model serving and monitoring to boost performance and dependability
- Manage data versioning, lineage tracking, and the reproducibility of ML experiments
- Actively search for enhancements in deployment processes, scalability, and infrastructure resilience
- Implement stringent security protocols to safeguard data integrity and compliance with regulations
- Troubleshoot and solve issues throughout the data and ML pipeline lifecycle

Requirements
- Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field
- 4+ years of experience in Data DevOps, MLOps, or similar roles
- Proficiency in cloud platforms such as Azure, AWS, or GCP
- Background in Infrastructure as Code (IaC) tools like Terraform, CloudFormation, or Ansible
- Expertise in containerization and orchestration technologies including Docker and Kubernetes
- Hands-on experience with data processing frameworks such as Apache Spark and Databricks
- Proficiency in programming languages including Python, with an understanding of data manipulation and ML libraries like Pandas, TensorFlow, and PyTorch
- Familiarity with CI/CD tools including Jenkins, GitLab CI/CD, and GitHub Actions
- Experience with version control tools and MLOps platforms such as Git, MLflow, and Kubeflow
- Strong understanding of monitoring, logging, and alerting systems including Prometheus and Grafana
- Excellent problem-solving abilities, with the capability to work independently and in teams
- Strong skills in communication and documentation

Nice to have
- Background in DataOps concepts and tools such as Airflow and dbt
- Knowledge of data governance platforms like Collibra
- Familiarity with Big Data technologies including Hadoop and Hive
- Certifications in cloud platforms or data engineering

Posted 1 week ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

Position Summary
The Senior Data Engineer leads complex data engineering projects, designing data architectures that align with business requirements. This role focuses on optimizing data workflows, managing data pipelines, and ensuring the smooth operation of data systems.

Minimum Qualifications
8 years of overall IT experience, with a minimum of 5 years of work experience in the tech skills below.

Tech Skills
- Strong experience in Python scripting and PySpark for data processing
- Proficiency in SQL, dealing with big data over Informatica ETL
- Proven experience in data quality and optimization of a data lake in Iceberg format, with a strong understanding of its architecture
- Experience with AWS Glue jobs
- Experience with the AWS cloud platform and its data services: S3, Redshift, Lambda, EMR, Airflow, Postgres, SNS, EventBridge
- Expertise in Bash shell scripting
- Strong understanding of healthcare data systems and experience leading data engineering teams
- Experience in Agile environments
- Excellent problem-solving skills and attention to detail
- Effective communication and collaboration skills

Responsibilities
- Leads development of data pipelines and architectures that handle large-scale data sets
- Designs, constructs, and tests data architecture aligned with business requirements
- Provides technical leadership for data projects, ensuring best practices and high-quality data solutions
- Collaborates with product, finance, and other business units to ensure data pipelines meet business requirements
- Works with DBT (Data Build Tool) for transforming raw data into actionable insights
- Oversees development of data solutions that enable predictive and prescriptive analytics
- Ensures the technical quality of solutions, managing data as it moves across environments
- Aligns data architecture to Healthfirst solution architecture

Posted 1 week ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Source: LinkedIn

Job Requisition ID # 25WD86559 Position Overview We are looking for an exceptional Analytics Engineer to transform, optimize, test, and maintain architectures for enterprise analytics databases, data pipelines, and processing systems, as well as optimizing data flow and collection for teams. The mission of the team is to empower decision makers through trusted data assets and scalable self-serve analytics. You will engineer new pipelines and maintain, create frameworks, enhance existing data pipeline with new features to ensure accurate data delivery to stakeholders in a timely manner, also support ad-hoc reporting requirements that facilitate data-driven actionable insights at Autodesk. You will join the CTS Data & Analytics (D&A) team and report to the senior manager of the team. The group has Bi Developers, analysts, and data scientists who support the Customer Technical Success Organization. As part of the team, you will have the opportunity to work on initiatives that impact CTS overall, as well as the specific functional area that you support. Signs you are a good fit for the role. You’re a true data ninja – you can creatively mine multiple disparate datasets to glean insights, answer business questions and provide next best actions You want to understand Customer Success and Product Insights – you’re not just satisfied with building a cool model, you also want to make sure it’s relevant in the field. You help tell compelling stories using data and reporting tools– weave together a narrative of what is happening in the business based on your analysis. You engage business leaders to determine how to best answer their questions through analysis and data science methodologies Key Responsibilities Maintain/develop data pipelines required for the extraction, transformation, cleaning, pre-processing, aggregation and loading of data from a wide variety of data sources using Python, SQL, DBT, and other data technologies Design, implement, test and maintain data pipelines/ new features based on stakeholders' requirements Develop/maintain scalable, available, quality assured analytical building blocks/datasets by close coordination with data analysts Optimize/ maintain workflows/ scripts on present data warehouses and present ETL Design / develop / maintain components of data processing frameworks Build and maintain data quality and durability tracking mechanisms to provide visibility into and address inevitable changes in data ingestion, processing, and storage Collaborate with stakeholders to define data requirements and objectives. 
Translate technical designs into business appropriate representations and analyse business needs and requirements ensuring implementation of data services directly correlates to the strategy and growth of the business Address questions from downstream data consumers through appropriate channels Create data tools for analytics and BI teams that assist them in building and optimizing our product into an innovative industry leader Stay up to date with data engineering best practices, patterns, evaluate and analyze new technologies, capabilities, open-source software in context of our data strategy to ensure we are adapting our own core technologies to stay ahead of the industry Contribute to Analytics engineering process Required Qualifications 5+ Years Relevant Work Experience BA / BS in Data Science, Computer Science, Statistics, Mathematics, or a related field Built processes supporting data transformation, data structures, metadata, dependency, data quality, and workload management Experience with Snowflake, Hands-on experience with Snowflake utilities, Snow SQL, Snow Pipe. Must have worked on Snowflake Cost optimization scenarios. Overall solid programming skills, able to write modular, maintainable code, preferably Python & SQL Have experience with workflow management solutions like Airflow Have experience on Data transformations tools like DBT Experience working with Git Experience working with big data environment, like, Hive, Spark and Presto Ready to work flexible hours Preferred Requirements Experience supporting Support, Customer Success, DAG Airflows Knowledge of natural language processing (NLP) and computer vision techniques. Familiarity with version control systems (e.g., Git). Snowflake DBT Working knowledge of Power BI AWS environment, for example S3, Lambda, Glue, Cloud watch Basic understanding of Salesforce Experience working with remote teams spread across multiple time-zones Have a hunger to learn and the ability to operate in a self-guided manner Learn More About Autodesk Welcome to Autodesk! Amazing things are created every day with our software – from the greenest buildings and cleanest cars to the smartest factories and biggest hit movies. We help innovators turn their ideas into reality, transforming not only how things are made, but what can be made. We take great pride in our culture here at Autodesk – our Culture Code is at the core of everything we do. Our values and ways of working help our people thrive and realize their potential, which leads to even better outcomes for our customers. When you’re an Autodesker, you can be your whole, authentic self and do meaningful work that helps build a better future for all. Ready to shape the world and your future? Join us! Salary transparency Salary is one part of Autodesk’s competitive compensation package. Offers are based on the candidate’s experience and geographic location. In addition to base salaries, we also have a significant emphasis on discretionary annual cash bonuses, commissions for sales roles, stock or long-term incentive cash grants, and a comprehensive benefits package. Sales Careers Working in sales at Autodesk allows you to build meaningful relationships with customers while growing your career. Join us and help make a better, more sustainable world. Learn more here: https://www.autodesk.com/careers/sales Diversity & Belonging We take pride in cultivating a culture of belonging and an equitable workplace where everyone can thrive. 
Learn more here: https://www.autodesk.com/company/diversity-and-belonging

Are you an existing contractor or consultant with Autodesk? Please search for open jobs and apply internally (not on this external site).

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


We are seeking a highly skilled and motivated Lead DS/ML Engineer to join our team. The role is critical to the development of a cutting-edge reporting, insights, and recommendations platform designed to measure and optimize online marketing campaigns. You will need a strong foundation in data engineering (ELT, data pipelines) and advanced machine learning to develop and deploy sophisticated models; the role focuses on building scalable data pipelines, developing ML models, and deploying solutions in production. The ideal candidate should be comfortable working across data engineering, the ML model lifecycle, and cloud-native technologies.

Job Description:

Key Responsibilities:

Data Engineering & Pipeline Development
Design, build, and maintain scalable ELT pipelines for ingesting, transforming, and processing large-scale marketing campaign data.
Ensure high data quality, integrity, and governance using orchestration tools like Apache Airflow, Google Cloud Composer, or Prefect.
Optimize data storage, retrieval, and processing using BigQuery, Dataflow, and Spark for both batch and real-time workloads.
Implement data modeling and feature engineering for ML use cases.

Machine Learning Model Development & Validation
Develop and validate predictive and prescriptive ML models to enhance marketing campaign measurement and optimization.
Experiment with different algorithms (regression, classification, clustering, reinforcement learning) to drive insights and recommendations.
Leverage NLP, time-series forecasting, and causal inference models to improve campaign attribution and performance analysis.
Optimize models for scalability, efficiency, and interpretability.

MLOps & Model Deployment
Deploy and monitor ML models in production using tools such as Vertex AI, MLflow, Kubeflow, or TensorFlow Serving.
Implement CI/CD pipelines for ML models, ensuring seamless updates and retraining.
Develop real-time inference solutions and integrate ML models into BI dashboards and reporting platforms.

Cloud & Infrastructure Optimization
Design cloud-native data processing solutions on Google Cloud Platform (GCP), leveraging services such as BigQuery, Cloud Storage, Cloud Functions, Pub/Sub, and Dataflow.
Work on containerized deployment (Docker, Kubernetes) for scalable model inference.
Implement cost-efficient, serverless data solutions where applicable.

Business Impact & Cross-functional Collaboration
Work closely with data analysts, marketing teams, and software engineers to align ML and data solutions with business objectives.
Translate complex model insights into actionable business recommendations.
Present findings and performance metrics to both technical and non-technical stakeholders.

Qualifications & Skills:

Educational Qualifications:
Bachelor’s or Master’s degree in Computer Science, Data Science, Machine Learning, Artificial Intelligence, Statistics, or a related field.
Certifications in Google Cloud (Professional Data Engineer, ML Engineer) are a plus.

Must-Have Skills:
Experience: 5-10 years with the mentioned skillset and relevant hands-on experience
Data Engineering: Experience with ETL/ELT pipelines, data ingestion, transformation, and orchestration (Airflow, Dataflow, Composer).
ML Model Development: Strong grasp of statistical modeling, supervised/unsupervised learning, time-series forecasting, and NLP.
Programming: Proficiency in Python (Pandas, NumPy, Scikit-learn, TensorFlow/PyTorch) and SQL for large-scale data processing.
Cloud & Infrastructure: Expertise in GCP (BigQuery, Vertex AI, Dataflow, Pub/Sub, Cloud Storage) or equivalent cloud platforms.
MLOps & Deployment: Hands-on experience with CI/CD pipelines, model monitoring, and version control (MLflow, Kubeflow, Vertex AI, or similar tools).
Data Warehousing & Real-time Processing: Strong knowledge of modern data platforms for batch and streaming data processing.

Nice-to-Have Skills:
Experience with Graph ML, reinforcement learning, or causal inference modeling.
Working knowledge of BI tools (Looker, Tableau, Power BI) for integrating ML insights into dashboards.
Familiarity with marketing analytics, attribution modeling, and A/B testing methodologies.
Experience with distributed computing frameworks (Spark, Dask, Ray).

Location: Bengaluru
Brand: Merkle
Time Type: Full time
Contract Type: Permanent

Posted 1 week ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


We are looking for a Data Engineer SDE-3 who can take ownership of designing, developing, and maintaining scalable and reliable data pipelines. You will play a critical role in shaping the data infrastructure that powers business insights, product intelligence, and scalable learning platforms at PW.

Roles Open: Data Engineer SDE-2 and Data Engineer SDE-3
Location: Noida & Bangalore

Key Responsibilities:
Design and implement scalable, efficient ETL/ELT pipelines to ingest, transform, and process data from multiple sources.
Architect and maintain robust data lake and warehouse solutions, aligning with business and analytical needs.
Own the development and optimization of distributed data processing systems using Spark, AWS EMR, or similar technologies.
Collaborate with cross-functional teams (data science, analytics, product) to gather requirements and implement data-driven solutions.
Ensure high levels of data quality, security, and availability across systems.
Evaluate emerging technologies and tools for data processing and workflow orchestration.
Build reusable components, libraries, and frameworks to enhance engineering efficiency and reliability.
Drive performance tuning, cost optimization, and automation of data infrastructure.
Mentor junior engineers, review code, and set standards for development practices.

Required Skills & Qualifications:
5+ years of professional experience in data engineering or backend systems with a focus on scalable systems.
Strong hands-on experience with Python or Scala, and writing efficient, production-grade code.
Deep understanding of data engineering concepts: data modeling, data warehousing, data lakes, streaming vs. batch processing, and metadata management.
Solid experience with AWS (S3, Redshift, EMR, Glue, Lambda) or equivalent cloud platforms.
Experience working with orchestration tools like Apache Airflow (preferred) or similar (Azkaban, Luigi).
Proven expertise in working with big data tools such as Apache Spark, and managing Kubernetes clusters.
Proficient in SQL and working with both relational (Postgres, Redshift) and NoSQL (MongoDB) databases.
Ability to understand API-driven architecture and integrate with backend services as part of data pipelines.
Strong problem-solving skills, with a proactive attitude towards ownership and continuous improvement.

Posted 1 week ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


FORD Requirement - Order Number: 33929-26 L PA Chennai - Contract - Non-HackerRank
Notice Period: Immediate joiners / serving up to 30 days
Position Title: Specialty Development Consultant
Duration: 658 days
Interview Required: N
Estimated Regular Hours: 40
Estimated Overtime Hours: 0
Division: Global Data Insight & Analytics

Position Description
Train, build, and deploy ML/DL models
Software development using Python
Work with Tech Anchors, Product Managers, and the team internally and across other teams
Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
Software development using a TDD approach
Experience using GCP products & services
Ability to adapt quickly to open-source products & tools to integrate with ML platforms

Skills Required
3+ years of experience in Python software development
3+ years of experience in cloud technologies & services, preferably GCP
3+ years of experience practicing statistical methods and their accurate application, e.g. ANOVA, principal component analysis, correspondence analysis, k-means clustering, factor analysis, multi-variate analysis, neural networks, causal inference, Gaussian regression, etc.
3+ years of experience with Python, SQL, BQ
Experience in SonarQube, CICD, Tekton, Terraform, GCS, GCP Looker, Vertex AI, Airflow, TensorFlow, etc.
Experience in training, building, and deploying ML and DL models
Ability to understand the technical, functional, non-functional, and security aspects of business requirements and deliver them end-to-end
Ability to adapt quickly to open-source products & tools to integrate with ML platforms
Building and deploying models (Scikit-learn, DataRobot, TensorFlow, PyTorch, etc.)
Developing and deploying in on-prem & cloud environments: Kubernetes, Tekton, OpenShift, Terraform, Vertex AI

Skills Preferred
Good communication, presentation, and collaboration skills

Experience Required: 2 to 5 yrs
Experience Preferred: GCP products & services
Education Required: BE, BTech, MCA, M.Sc, ME

Additional Information: A HackerRank test on Python, Cloud, and Machine Learning is a must.

Skills: ml, cicd, bigquery, pytorch, python, sonarqube, kubernetes, gcp looker, scikit learn, tekton, gcs, cloud, vertex ai, gcp, tensorflow, datarobots, openshift, sql, cloud technologies, airflow, statistical methods, terraform

Posted 1 week ago

Apply

3.0 - 4.0 years

0 Lacs

Mumbai, Maharashtra, India

Remote


Job Title: Data Scientist – Computer Vision & Generative AI
Location: Mumbai
Experience Level: 3 to 4 years
Employment Type: Full-time
Industry: Renewable Energy / Solar Services

Job Overview:
We are seeking a talented and motivated Data Scientist with a strong focus on computer vision, generative AI, and machine learning to join our growing team in the solar services sector. You will play a pivotal role in building AI-driven solutions that transform how solar infrastructure is analyzed, monitored, and optimized using image-based intelligence. From drone and satellite imagery to on-ground inspection photos, your work will enable intelligent automation, predictive analytics, and visual understanding in critical areas like fault detection, panel degradation, site monitoring, and more. If you're passionate about working at the cutting edge of AI for real-world sustainability impact, we’d love to hear from you.

Key Responsibilities:
Design, develop, and deploy computer vision models for tasks such as object detection, classification, segmentation, anomaly detection, etc.
Work with generative AI techniques (e.g., GANs, diffusion models) to simulate environmental conditions, enhance datasets, or create synthetic training data.
Build ML pipelines for end-to-end model training, validation, and deployment using Python and modern ML frameworks.
Analyze drone, satellite, and on-site images to extract meaningful insights for solar panel performance, wear-and-tear detection, and layout optimization.
Collaborate with cross-functional teams (engineering, field ops, product) to understand business needs and translate them into scalable AI solutions.
Continuously experiment with the latest models, frameworks, and techniques to improve model performance and robustness.
Optimize image pipelines for performance, scalability, and edge/cloud deployment.

Key Requirements:
3–4 years of hands-on experience in data science, with a strong portfolio of computer vision and ML projects.
Proven expertise in Python and common data science libraries: NumPy, Pandas, Scikit-learn, etc.
Proficiency with image-based AI frameworks: OpenCV, PyTorch or TensorFlow, Detectron2, YOLOv5/v8, MMDetection, etc.
Experience with generative AI models like GANs, Stable Diffusion, or ControlNet for image generation or augmentation.
Experience building and deploying ML models using MLflow, TorchServe, or TensorFlow Serving.
Familiarity with image annotation tools (e.g., CVAT, Labelbox) and data versioning tools (e.g., DVC).
Experience with cloud platforms (AWS, GCP, or Azure) for storage, training, or model deployment.
Experience with Docker, Git, and CI/CD pipelines for reproducible ML workflows.
Ability to write clean, modular code and a solid understanding of software engineering best practices in AI/ML projects.
Strong problem-solving skills, curiosity, and the ability to work independently in a fast-paced environment.

Bonus / Preferred Skills:
Experience with remote sensing and working with satellite or drone imagery.
Exposure to MLOps practices and tools like Kubeflow, Airflow, or SageMaker Pipelines.
Knowledge of solar technologies, photovoltaic systems, or renewable energy is a plus.
Familiarity with edge computing for vision applications on IoT devices or drones.

Application Instructions:
Please submit your resume, portfolio (GitHub, blog, or project links), and a short cover letter explaining why you’re interested in this role to khushboo.b@solarsquare.in or sidhant.c@solarsquare.in

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


FORD Requirement - Order Number: 34170-23 L PA Chennai - Contract
Position Title: Architect Senior
Target Start Date: 01-JUL-2025
Original Duration: 334 days
Notice Period: Immediate joiners / serving up to 30 days
Work Hours: 02:00 PM to 11:30 PM
Standard Shift: Night
Travel Required? N
Travel %: 0

Position Description:
Materials Management Platform (MMP) is a multi-year transformation initiative aimed at transforming Ford's Materials Requirement Planning & Inventory Management capabilities. This is part of a larger Industrial Systems IT Transformation effort. This position is responsible for designing and deploying a data-centric architecture in GCP for the Materials Management Platform, which would exchange data with multiple applications, modern and legacy, in Product Development, Manufacturing, Finance, Purchasing, N-Tier Supply Chain, and Supplier Collaboration.

Skills Required: GCP, Data Architecture
Skills Preferred: Cloud Architecture
Experience Required: 8 to 12 years

Experience Preferred:
Requires a bachelor’s or foreign equivalent degree in computer science, information technology, or a technology-related field
8 years of professional experience in:
- Data engineering, data product development, and software product launches
- At least three of the following languages: Java, Python, Spark, Scala, SQL, with experience in performance tuning
4 years of cloud data/software engineering experience building scalable, reliable, and cost-effective production batch and streaming data pipelines using:
- Data warehouses like Google BigQuery
- Workflow orchestration tools like Airflow
- Relational database management systems like MySQL, PostgreSQL, and SQL Server
- Real-time data streaming platforms like Apache Kafka, GCP Pub/Sub
- Microservices architecture to deliver large-scale real-time data processing applications
- REST APIs for compute, storage, operations, and security
- DevOps tools such as Tekton, GitHub Actions, Git, GitHub, Terraform, Docker
- Project management tools like Atlassian JIRA
Automotive experience is preferred
Support in an onshore/offshore model is preferred
Excellent at problem solving and prevention
Knowledge and practical experience of agile delivery

Education Required: Bachelor's Degree
Education Preferred: Certification Program

Additional Information:
Design and implement data-centric solutions on Google Cloud Platform (GCP) using various GCP tools like BigQuery, Google Cloud Storage, Cloud SQL, Memorystore, Dataflow, Dataproc, Artifact Registry, Cloud Build, Cloud Run, Vertex AI, Pub/Sub, and GCP APIs.
Build ETL pipelines to ingest data from heterogeneous sources into our system
Develop data processing pipelines using programming languages like Java and Python to extract, transform, and load (ETL) data
Create and maintain data models, ensuring efficient storage, retrieval, and analysis of large datasets
Deploy and manage databases, both SQL and NoSQL, such as Bigtable, Firestore, or Cloud SQL, based on project requirements
Optimize data workflows for performance, reliability, and cost-effectiveness on the GCP infrastructure.
Implement version control and CI/CD practices for data engineering workflows to ensure reliable and efficient deployments.
Utilize GCP monitoring and logging tools to proactively identify and address performance bottlenecks and system failures
Troubleshoot and resolve issues related to data processing, storage, and retrieval.
Promptly address code quality issues using SonarQube, Checkmarx, Fossa, and Cycode throughout the development lifecycle
Implement security measures and data governance policies to ensure the integrity and confidentiality of data
Collaborate with stakeholders to gather and define data requirements, ensuring alignment with business objectives
Develop and maintain documentation for data engineering processes, ensuring knowledge transfer and ease of system maintenance
Participate in on-call rotations to address critical issues and ensure the reliability of data engineering systems
Provide mentorship and guidance to junior team members, fostering a collaborative and knowledge-sharing environment

Skills: airflow, data warehouses, cloud architecture, rdbms, spark, postgresql, real-time data streaming, microservices architecture, gcp, python, tekton, cloud, java, data architecture, terraform, management, rest apis, git, data, docker, github actions, sql, google bigquery, atlassian jira, sql server, workflow orchestration, scala, devops tools, github, apache kafka, mysql, gcp pub/sub

Posted 1 week ago

Apply

4.0 years

0 Lacs

Gurgaon, Haryana, India

Remote


June 4, 2025

About EasyRewardz
EasyRewardz is a leading customer experience management company. It provides end-to-end customer engagement solutions to 100+ brands across 3500+ retail offline stores. EasyRewardz has a presence across all key retail verticals – Apparel, Fashion, Luxury, Food & Beverage, Travel and Entertainment, Wellness, and Banking.

Key capabilities of EasyRewardz’s proprietary technology platform include:
Customer loyalty program as an end-to-end solution
Platform for intelligent and meaningful engagement with brands’ customers
Analytics engine to enable brands to engage in personalized conversations with consumers
SaaS-based customer experience management solution to provide a unified view of the consumer at the multichannel level

Why EasyRewardz?
“Machine Learning”, “Personalization”, “Marketing Automation”, “Consumer Preferences” – these terms get real at EasyRewardz. If you’re looking for a career that allows you to innovate and think differently, EasyRewardz is the place! We are a fast-growing organization, and our journey has been fantastic — shaping young minds and driving retail excellence by influencing customer behavior. Learn more: https://www.easyrewardz.com

Who are we seeking?
Like-minded individuals with an entrepreneurial mindset and a passion to learn and excel. We value “Performance” and “Performers.”

Job Title: Senior Data Engineer
Location: Gurgaon
Experience Required: 4+ years
Department: Data Engineering & Analytics
Reports To: ABC

About The Role
EasyRewardz is India’s leading customer engagement and loyalty platform. Our ecosystem includes:
CRM
Marketing automation
AI-powered segmentation
Campaign orchestration
Analytics
Omnichannel communication tools

We’re looking for a Senior Data Engineer who can design, build, and optimize scalable data pipelines and platforms that power our product suite — including Zence Marketing, Zence 360, and loyalty systems.
Key Responsibilities
Architect, build, and maintain real-time and batch data pipelines using tools like Apache Spark, RisingWave, Redpanda, and ScyllaDB
Collaborate with product managers, analysts, and developers to design systems that support business intelligence, behavioral analytics, and campaign automation
Own and manage data ingestion from SDKs and third-party systems via webhooks and APIs
Implement and maintain ETL/ELT pipelines across various customer touchpoints and engagement journeys
Optimize queries and data storage using ScyllaDB, MySQL, and data lakes
Ensure data quality, reliability, and governance through validation, monitoring, and alerting
Work with DevOps to deploy scalable, fault-tolerant infrastructure in cloud or hybrid environments
Mentor junior engineers and contribute to architecture and roadmap planning

Must-Have Skills
Strong experience with Apache Spark, Kafka/Redpanda, RisingWave, or similar stream processing tools
Proficiency in Python or Scala for pipeline scripting and data transformation
Deep understanding of data modeling, distributed databases (ScyllaDB, Cassandra), and performance optimization
Experience with both SQL and NoSQL systems (e.g., MySQL, ScyllaDB)
Familiarity with event-driven architecture and large-scale customer event data
Solid grasp of data quality frameworks, testing, lineage, and governance
Experience with marketing automation or CRM platforms is a strong plus

Good-to-Have Skills
Working knowledge of n8n, Airflow, or other orchestration frameworks
Understanding of SDK-based event capture and retry mechanisms

What We Offer
Opportunity to shape the data strategy of one of India’s top Martech platforms
Collaborative and innovation-focused work environment
Flexible work hours and remote-friendly setup
Attractive compensation and clear growth path

Apply at: talentacquisition@easyrewardz.com

Posted 1 week ago

Apply

4.0 - 8.0 years

9 - 18 Lacs

Hyderabad, Pune, Chennai

Work from Office


Proficiency in Python and SQL for data processing and manipulation. Minimum 5 years of experience in data engineering, specifically working with Apache Airflow and AWS technologies. Strong knowledge of AWS services.

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

Remote


Who We Are
Ontic makes software that corporate and government security professionals use to proactively manage threats, mitigate risks, and make businesses stronger. Built by security and software professionals, the Ontic Platform connects and unifies critical data, business processes, and collaborators in one place, consolidating security intelligence and operations. We call this Connected Intelligence. Ontic serves corporate security teams across key functions, including intelligence, investigations, GSOC, executive protection, and security operations. As Ontic employees, we put our mission first and value the trust bestowed upon us by our clients to help keep their people safe. We approach our clients and each other with empathy while focusing on the execution of our strategy. And we have fun doing it.

Key Responsibilities
Design, develop, and optimize machine learning models for various business applications.
Build and maintain scalable AI feature pipelines for efficient data processing and model training.
Develop robust data ingestion, transformation, and storage solutions for big data.
Implement and optimize ML workflows, ensuring scalability and efficiency.
Monitor and maintain deployed models, ensuring performance, reliability, and retraining when necessary.

Qualifications And Experience
Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Data Science, or a related field.
4+ years of experience in machine learning, deep learning, or data science roles.
Proficiency in Python and ML frameworks/tools such as PyTorch and LangChain.
Experience with data processing frameworks like Spark, Dask, Airflow, and Dagster.
Hands-on experience with cloud platforms (AWS, GCP, Azure) and ML services.
Experience with MLOps tools like MLflow and Kubeflow.
Familiarity with containerization and orchestration tools like Docker and Kubernetes.
Excellent problem-solving skills and ability to work in a fast-paced environment.
Strong communication and collaboration skills.

Ontic Benefits & Perks
Competitive Salary
Medical Benefits
Internet Reimbursement
Home Office Stipend
Continued Education Stipend
Festive & Achievement Celebrations
Dynamic Office Environment

Ontic is an equal opportunity employer.
We are committed to a work environment that celebrates diversity. We do not discriminate against any individual based on race, color, sex, national origin, age, religion, marital status, sexual orientation, gender identity, gender expression, military or veteran status, disability, or any factors protected by applicable law. All Ontic employees are expected to understand and adhere to all Ontic Security and Privacy related policies in order to protect Ontic data and our clients' data.

Posted 1 week ago

Apply

Exploring Airflow Jobs in India

The Airflow job market in India is growing rapidly as more companies adopt data pipelines and workflow automation. Apache Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with Airflow expertise can find lucrative opportunities in industries such as technology, e-commerce, finance, and more.
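For readers new to the tool, here is a minimal sketch of the kind of workflow Airflow orchestrates — a three-step ETL DAG. It assumes a recent Airflow 2.x installation (the `schedule` argument requires 2.4+); the DAG id, task names, and placeholder functions are hypothetical examples, not taken from any listing above:

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    print("extracting data from the source system (placeholder)")


def transform():
    print("cleaning and aggregating the extracted data (placeholder)")


def load():
    print("writing results to the warehouse (placeholder)")


with DAG(
    dag_id="example_etl",                 # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),      # first logical run date
    schedule="@daily",                    # cron preset: one run per day
    catchup=False,                        # do not backfill missed past runs
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Dependencies: extract runs first, then transform, then load.
    extract_task >> transform_task >> load_task
```

The `>>` operator declares task ordering, and the scheduler creates one DAG run per schedule interval; the `retries` setting is the usual first line of defense against transient task failures.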

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Hyderabad
  4. Pune
  5. Gurgaon

Average Salary Range

The average salary range for Airflow professionals in India varies based on experience level:
  • Entry-level: INR 6-8 lakhs per annum
  • Mid-level: INR 10-15 lakhs per annum
  • Experienced: INR 18-25 lakhs per annum

Career Path

In the field of Airflow, a typical career path may progress as follows:
  • Junior Airflow Developer
  • Airflow Developer
  • Senior Airflow Developer
  • Airflow Tech Lead

Related Skills

In addition to Airflow expertise, professionals in this field are often expected to have or develop skills in:
  • Python programming
  • ETL concepts
  • Database management (SQL)
  • Cloud platforms (AWS, GCP)
  • Data warehousing

Interview Questions

  • What is Apache Airflow? (basic)
  • Explain the key components of Airflow. (basic)
  • How do you schedule a DAG in Airflow? (basic)
  • What are the different operators in Airflow? (medium)
  • How do you monitor and troubleshoot DAGs in Airflow? (medium)
  • What is the difference between Airflow and other workflow management tools? (medium)
  • Explain the concept of XCom in Airflow. (medium)
  • How do you handle dependencies between tasks in Airflow? (medium)
  • What are the different types of sensors in Airflow? (medium)
  • What is a Celery Executor in Airflow? (advanced)
  • How do you scale Airflow for a high volume of tasks? (advanced)
  • Explain the concept of SubDAGs in Airflow. (advanced)
  • How do you handle task failures in Airflow? (advanced)
  • What is the purpose of a TriggerDagRun operator in Airflow? (advanced)
  • How do you secure Airflow connections and variables? (advanced)
  • Explain how to create a custom Airflow operator. (advanced)
  • How do you optimize the performance of Airflow DAGs? (advanced)
  • What are the best practices for version controlling Airflow DAGs? (advanced)
  • Describe a complex data pipeline you have built using Airflow. (advanced)
  • How do you handle backfilling in Airflow? (advanced)
  • Explain the concept of DAG serialization in Airflow. (advanced)
  • What are some common pitfalls to avoid when working with Airflow? (advanced)
  • How do you integrate Airflow with external systems or tools? (advanced)
  • Describe a challenging problem you faced while working with Airflow and how you resolved it. (advanced)
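Two recurring themes in the questions above — XCom and custom operators — lend themselves to short sketches. The snippet below is illustrative only (Airflow 2.x assumed): the two callables would be wrapped in `PythonOperator` tasks inside a DAG like the earlier example, and the `MultiplyOperator` class is a hypothetical name invented here, not an official operator:

```python
from airflow.models.baseoperator import BaseOperator


def push_row_count(ti):
    # XCom push: store a small value in Airflow's metadata DB for other tasks.
    # In Airflow 2.x, the `ti` (task instance) argument is injected from context.
    ti.xcom_push(key="row_count", value=42)


def report_row_count(ti):
    # XCom pull: read the value pushed by the task with task_id="push_row_count".
    count = ti.xcom_pull(task_ids="push_row_count", key="row_count")
    print(f"upstream task reported {count} rows")


class MultiplyOperator(BaseOperator):
    """Custom operator sketch: subclass BaseOperator and implement execute()."""

    def __init__(self, value: int, factor: int = 2, **kwargs):
        super().__init__(**kwargs)
        self.value = value
        self.factor = factor

    def execute(self, context):
        result = self.value * self.factor
        self.log.info("MultiplyOperator computed %s", result)
        # A non-None return value is automatically pushed to XCom by default.
        return result
```

Keep in mind that XCom is meant for small metadata (counts, file paths), not bulk data — a point interviewers often probe when asking about Airflow pitfalls.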

Closing Remark

As you explore job opportunities in the Airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in Airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!
