4.0 years
2 - 7 Lacs
Hyderābād
On-site
Job Description
Manager – Site Reliability Engineer (SRE) – Reliability & Automation

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be part of a team with a passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.
Our Technology Centres focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company’s IT operating model, Tech Centres are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Centre helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centres.

Role Overview:
We are looking for a dedicated Site Reliability Engineer (SRE) to ensure the reliability, scalability and operational excellence of our data applications hosted on AWS and traditional datacentres. You will own release management, automate infrastructure and deployment processes using Python, implement observability solutions and enforce compliance standards to maintain robust and highly available systems.

What will you do in this role:
- Reliability Engineering: Design, implement and maintain systems that ensure high availability, fault tolerance and scalability of data applications across cloud (AWS) and on-premises environments.
- Release & Deployment Management: Manage and automate release pipelines, coordinate deployments and ensure smooth rollouts with minimal downtime.
- DevOps Automation: Develop and maintain automation scripts and tools (primarily in Python) to streamline infrastructure provisioning, configuration management and operational workflows.
- Observability & Monitoring: Build and enhance monitoring, logging and alerting systems to proactively detect and resolve system issues, ensuring optimal performance and uptime.
- Ensure reliability and scalability of ETL pipelines, including orchestration, scheduling, and dependency management (see the illustrative Airflow sketch further down).
- Automate deployment, rollback, and version control of ETL jobs and workflows.
- Implement monitoring and alerting for ETL job success, latency, and data quality metrics.
- Collaborate with data engineering teams to troubleshoot ETL failures and optimize pipeline performance.
- Maintain documentation and compliance related to ETL data lineage and processing.
- Incident Management & Root Cause Analysis: Lead incident response efforts, perform thorough root cause analysis and implement preventive measures to avoid recurrence.
- Compliance & Security: Ensure systems comply with organizational policies and regulatory requirements, including data governance, security best practices and audit readiness.
- Collaboration: Work closely with development, data engineering and operations teams to align reliability goals with business objectives and support continuous improvement.
- Documentation & Knowledge Sharing: Maintain clear documentation of infrastructure, processes and incident reports to facilitate team knowledge and operational consistency.
- Monitoring & Troubleshooting: Implement and maintain monitoring and alerting for database health, query performance, and resource utilization; lead troubleshooting and root cause analysis for database-related incidents.

What should you have:
- Bachelor’s degree in Computer Science, Engineering, Information Technology, or a related field.
- 4+ years of experience in Site Reliability Engineering, DevOps or related roles focused on infrastructure reliability and automation.
- Strong proficiency in Python for automation and scripting.
- Experience with ETL orchestration tools such as Apache Airflow, AWS Glue, or similar.
- Understanding of data pipeline architectures and common failure modes.
- Familiarity with data quality and lineage concepts in ETL processes.
- Hands-on experience with AWS cloud services (IAM, S3, Lambda, CloudWatch, Airflow, etc.) and traditional datacenter environments.
- Expertise in release management and CI/CD pipelines using tools such as Jenkins, GitLab CI, or similar.
- Deep knowledge of observability tools and frameworks (e.g., Prometheus, Grafana, ELK stack, Datadog).
- Solid understanding of infrastructure as code (IaC) tools like Terraform, CloudFormation or Ansible.
- Experience with container orchestration platforms (e.g., Kubernetes, Docker Swarm) is a plus.
- Strong incident management skills with a focus on root cause analysis and remediation.
- Familiarity with compliance frameworks and security best practices in cloud and on-prem environments.
- Excellent communication skills to collaborate effectively across technical and non-technical teams.

Preferred Qualifications
- Advanced degree in a relevant technical field.
- Certifications such as AWS Certified DevOps Engineer, ITIL V3/4 or similar.
- Experience working in Agile Scrum or Kanban environments.
- Knowledge of security and database administration in cloud and hybrid environments.

Why Join Us?
- Play a critical role in ensuring the reliability and scalability of mission-critical data applications.
- Work with cutting-edge cloud and on-premises technologies.
- Collaborate with a passionate team focused on operational excellence.
- Opportunities for professional growth and continuous learning.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.
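For illustration only (not part of the posting): a minimal sketch of the kind of Airflow pipeline with failure alerting this role describes. The DAG name, schedule, and notification logic are assumptions.

```python
# Hypothetical sketch: a minimal Airflow DAG for an ETL step with failure alerting.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator


def notify_on_failure(context):
    # In practice this might page via Slack, PagerDuty, or SNS; here we just log.
    task_id = context["task_instance"].task_id
    print(f"ALERT: task {task_id} failed in run {context['run_id']}")


def extract_and_load(**_):
    # Placeholder for the actual extract/transform/load logic.
    print("running ETL step")


default_args = {
    "owner": "sre-team",                      # assumed owner
    "retries": 2,
    "retry_delay": timedelta(minutes=5),
    "on_failure_callback": notify_on_failure,
}

with DAG(
    dag_id="example_etl_reliability",         # assumed DAG name
    start_date=datetime(2025, 1, 1),
    schedule_interval="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    PythonOperator(task_id="extract_and_load", python_callable=extract_and_load)
```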
What we look for
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.
#HYD IT 2025
Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs / resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Availability Management, Capacity Management, Change Controls, Configuration Management (CM), Design Applications, Incident Management, Information Technology (IT) Infrastructure, IT Service Management (ITSM), Software Configurations, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, System Administration, System Designs
Preferred Skills:
Job Posting End Date: 09/15/2025
A job posting is effective until 11:59:59 PM on the day BEFORE the listed job posting end date. Please ensure you apply to a job posting no later than the day BEFORE the job posting end date.
Requisition ID: R359245
Posted 9 hours ago
0 years
3 - 9 Lacs
Hyderābād
On-site
Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Senior Consultant Specialist.

In this role, you will:
- Understand the business requirements and implement them by writing appropriate business logic.
- Write test cases using JUnit or Cucumber (good hands-on experience expected).
- Design and develop microservices using Spring Boot, MongoDB, PostgreSQL, etc.
- Develop RESTful web services; architect and implement scalable microservices for critical banking applications.
- Research, evaluate and synthesize technical information to design, develop and test computer-based systems.
- Ensure project delivery aligns with client expectations.
- Integrate Kubernetes and Docker for application deployment and management.
- Deploy and manage applications on Google Kubernetes Engine (GKE).
- Enhance application performance and implement robust security measures.
- Mentor junior developers and foster a culture of continuous improvement.
- Build DevOps CI/CD pipelines using Jenkins and Ansible.
- Implement robust API security using Spring Security and JWT tokens.
- Collaborate with cross-functional teams to deliver innovative solutions.
- Analyse functional and non-functional requirements.
- Deliver POCs to showcase application features to stakeholders.
- Debug and resolve issues.

Requirements
To be successful in this role, you should meet the following requirements:
- Programming: Core Java, Spring Boot, Spring Cloud, Spring Security, Microservices.
- Databases: MongoDB, PostgreSQL.
- Cloud/DevOps: Any cloud (AWS/GCP good to have), Docker, Kubernetes, Airflow, Git, Jenkins, Ansible.
- Tools: Maven, Postman, Swagger, SonarQube, AppDynamics, Splunk, Jira, ServiceNow.

You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and their opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
Posted 9 hours ago
7.0 years
5 - 9 Lacs
Chennai
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description:
Looking for an experienced GCP Cloud/DevOps Engineer and/or OpenShift engineer to design, implement, and manage cloud infrastructure and services across multiple environments. This role requires deep expertise in Google Cloud Platform (GCP) services, DevOps practices, and Infrastructure as Code (IaC). The candidate will be deploying, automating, and maintaining high-availability systems, and implementing best practices for cloud architecture, security, and DevOps pipelines.

REQUIREMENTS
- Bachelor's or master's degree in computer science, Information Technology, or a similar field.
- Must have 7+ years of extensive experience in designing, implementing, and maintaining applications on GCP and OpenShift.
- Comprehensive expertise in GCP services such as GKE, Cloud Run, Cloud Functions, Cloud SQL, Firestore, Firebase, Apigee, App Engine, Gemini Code Assist, Vertex AI, Spanner, Memorystore, Service Mesh, and Cloud Monitoring.
- Solid understanding of cloud security best practices and experience in implementing security controls in GCP.
- Thorough understanding of cloud architecture principles and best practices.
- Experience with automation and configuration management tools like Terraform and a sound understanding of DevOps principles.
- Proven leadership skills and the ability to mentor and guide a technical team.

Key Responsibilities:
- Cloud Infrastructure Design and Deployment: Architect, design, and implement scalable, reliable, and secure solutions on GCP. Deploy and manage GCP services in both development and production environments, ensuring seamless integration with existing infrastructure. Implement and manage core services such as BigQuery, Data Fusion, Cloud Composer (Airflow), Cloud Storage, Compute Engine, App Engine, Cloud Functions and more.
- Infrastructure as Code (IaC) and Automation: Develop and maintain infrastructure as code using Terraform or CLI scripts to automate provisioning and configuration of GCP resources. Establish and document best practices for IaC to ensure consistent and efficient deployments across environments.
- DevOps and CI/CD Pipeline Development: Create and manage DevOps pipelines for automated build, test, and release management, integrating with tools such as Jenkins, GitLab CI/CD, or equivalent. Work with development and operations teams to optimize deployment workflows, manage application dependencies, and improve delivery speed.
- Security and IAM Management: Handle user and service account management in Google Cloud IAM. Set up and manage Secret Manager and Cloud Key Management for secure storage of credentials and sensitive information. Implement network and data security best practices to ensure compliance and security of cloud resources.
- Performance Monitoring and Optimization / Monitoring & Security: Set up observability tools like Prometheus and Grafana, and integrate security tools (e.g., SonarQube, Trivy); see the instrumentation sketch at the end of this posting.
- Networking & Storage: Configure DNS, networking, and persistent storage solutions in Kubernetes. Set up monitoring and logging (e.g., Cloud Monitoring, Cloud Logging, Error Reporting) to ensure systems perform optimally. Troubleshoot and resolve issues related to cloud services and infrastructure as they arise.
- Workflow Orchestration: Orchestrate complex workflows using the Argo Workflow engine.
- Containerization: Work extensively with Docker for containerization and image management.
- Optimization: Troubleshoot and optimize containerized applications for performance and security.

Technical Skills:
- Expertise with GCP and OCP (OpenShift) services, including but not limited to Compute Engine, Kubernetes Engine (GKE), BigQuery, Cloud Storage, Pub/Sub, Data Fusion, Airflow, Cloud Functions, and Cloud SQL.
- Proficiency in scripting languages like Python, Bash, or PowerShell for automation.
- Familiarity with DevOps tools and CI/CD processes (e.g., GitLab CI, Cloud Build, Azure DevOps, Jenkins).

Employee Type: Permanent
UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
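For illustration only: a minimal sketch of the Prometheus-style instrumentation mentioned above, using the Python prometheus_client library. Metric names, port, and simulated work are assumptions.

```python
# Illustrative sketch: expose custom metrics for Prometheus to scrape;
# Grafana dashboards and alert rules can then be built on top of them.
import random
import time

from prometheus_client import Counter, Gauge, Histogram, start_http_server

DEPLOYS = Counter("deployments_total", "Number of deployments executed")
QUEUE_DEPTH = Gauge("work_queue_depth", "Items waiting to be processed")
LATENCY = Histogram("job_duration_seconds", "Duration of pipeline jobs")

if __name__ == "__main__":
    start_http_server(8000)          # /metrics endpoint for Prometheus to scrape
    while True:
        with LATENCY.time():         # observe how long each simulated job takes
            time.sleep(random.uniform(0.1, 0.5))
        DEPLOYS.inc()
        QUEUE_DEPTH.set(random.randint(0, 20))
```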
Posted 9 hours ago
3.0 years
4 - 18 Lacs
Chennai
On-site
Job Title: Data Engineer
Experience Required: 3 to 7 years
Location: US
Job Type: Full-time

Job Description:
We are looking for a skilled Data Engineer with strong experience in Snowflake, Azure, GCP, PySpark, and DBT to join our growing data team. The ideal candidate will be responsible for building and optimizing scalable data pipelines and data warehouse solutions to support business intelligence, analytics, and reporting needs.

Key Responsibilities:
- Design, build, and maintain ETL/ELT pipelines using PySpark and DBT for efficient data ingestion, transformation, and processing.
- Develop and optimize data warehouse structures and pipelines in Snowflake for performance and cost-efficiency.
- Integrate data from diverse sources including APIs, files, and cloud platforms, particularly Azure (e.g., Azure Blob Storage, Data Factory, Azure Functions) and GCP (e.g., GCS, Cloud Functions, Dataflow).
- Implement and manage data orchestration and workflow automation using DBT and scheduling tools (e.g., Apache Airflow, Azure Data Factory, GCP Composer).
- Ensure data quality, consistency, and lineage through validation and monitoring processes.
- Collaborate with data analysts, data scientists, and business stakeholders to understand requirements and deliver actionable data solutions.
- Manage data governance and security policies, including role-based access, data masking, and tagging in Snowflake.
- Troubleshoot and resolve data-related issues and optimize performance for data pipelines and workflows.

Required Skills & Qualifications:
- 3+ years of experience in Data Engineering.
- Strong hands-on experience with Snowflake: warehouse management, performance tuning, RBAC, data sharing, and cost optimization.
- Experience with cloud platforms such as Microsoft Azure (e.g., Azure Blob Storage, Azure Functions, Azure Data Factory) and Google Cloud Platform (e.g., GCS, Dataflow, BigQuery).
- Proficient in PySpark for distributed data processing.
- Strong working knowledge of DBT (Data Build Tool) for data modeling and transformation.
- Proficiency in SQL and scripting languages like Python.
- Familiarity with CI/CD pipelines and version control systems (e.g., Git).
- Solid understanding of data warehousing principles, data lakes, and big data processing best practices.

Preferred Qualifications:
- Experience with real-time data streaming tools such as Kafka or Google Pub/Sub.
- Familiarity with data cataloging tools (e.g., Azure Purview, Google Data Catalog, Amundsen).
- Exposure to data visualization tools like Power BI, Tableau, or Looker.
- Snowflake certification is a plus.
- Strong communication skills and the ability to contribute effectively to team discussions and project planning.

Job Types: Full-time, Permanent
Pay: ₹473,019.95 - ₹1,824,238.69 per year
Benefits: Cell phone reimbursement, Food provided, Health insurance, Paid sick time, Paid time off, Provident Fund
Schedule: Day shift, Fixed shift, Monday to Friday
Work Location: In person
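Illustrative sketch (not from the employer): a minimal PySpark ELT step of the kind described above. The paths, column names, and Azure storage account are assumptions.

```python
# Minimal PySpark sketch: ingest raw files, clean and aggregate, write a partitioned table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_elt").getOrCreate()

# Assumed Azure Blob/ADLS paths and columns
raw = spark.read.option("header", True).csv("abfss://raw@account.dfs.core.windows.net/orders/")

cleaned = (
    raw.dropDuplicates(["order_id"])                      # assumed key column
       .withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

daily = cleaned.groupBy(F.to_date("order_ts").alias("order_date")).agg(
    F.sum("amount").alias("gross_revenue"),
    F.countDistinct("order_id").alias("orders"),
)

daily.write.mode("overwrite").partitionBy("order_date").parquet(
    "abfss://curated@account.dfs.core.windows.net/orders_daily/"
)
```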
Posted 9 hours ago
3.0 - 5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Job Description: Site Project Engineer (HVAC & Cleanroom Projects)
Company: Syntec Airflow Systems
Location: Project Sites in and around Gurugram, Haryana
Job Type: Full-Time

About Syntec Airflow Systems:
Syntec Airflow Systems is a leading provider of advanced HVAC and turnkey cleanroom solutions. We specialize in designing, manufacturing, and installing precision-engineered environments for critical industries, including pharmaceuticals, biotechnology, microelectronics, and healthcare. Our commitment to innovation, quality, and client satisfaction has made us a trusted partner in delivering state-of-the-art controlled environments.

Job Summary:
We are seeking a dynamic, detail-oriented, and experienced Site Project Engineer to join our project execution team. The successful candidate will be responsible for the day-to-day management and supervision of on-site activities for our HVAC and cleanroom installation projects. You will be the crucial link between the project management team, engineering design, and on-site execution, ensuring projects are completed safely, on time, within budget, and to the highest industry standards (ISO, GMP, etc.).

Key Responsibilities:
- On-Site Project Execution: Supervise and manage all on-site installation activities related to HVAC systems (e.g., Air Handling Units (AHUs), ducting, chillers, piping) and cleanroom components (e.g., modular panels, ceiling grids, HEPA filters, pass boxes). Read, interpret, and ensure adherence to engineering drawings, specifications, and project documents (P&IDs, GA drawings, schematics). Plan and coordinate daily and weekly site activities in line with the overall project schedule.
- Team and Subcontractor Management: Lead and manage site teams, including technicians, supervisors, and skilled labor. Oversee the work of subcontractors to ensure quality, compliance, and adherence to timelines. Coordinate with other service trades (electrical, plumbing, civil) to ensure seamless integration and avoid conflicts.
- Quality and Safety Compliance: Implement and enforce stringent quality control procedures for all installation works. Ensure all on-site activities comply with company and statutory Health, Safety, and Environment (HSE) policies. Conduct regular site inspections and quality audits to identify and rectify any issues promptly. Ensure work is performed in accordance with cGMP (Current Good Manufacturing Practices) and ISO 14644 cleanroom standards where applicable.
- Reporting and Documentation: Prepare and submit daily and weekly site progress reports to the Project Manager. Maintain accurate site records, including material logs, labor attendance, site instructions, and "as-built" drawings. Assist in the preparation of project closeout documentation.
- Testing & Commissioning Support: Assist the commissioning team during the Testing, Adjusting, and Balancing (TAB) of HVAC systems. Support the validation team during cleanroom performance testing and certification, including measuring key parameters like air changes per hour (ACPH) and pressure differentials (ΔP). Troubleshoot and resolve technical issues that arise during installation and commissioning phases.

Required Qualifications and Skills:
- Education: Bachelor's Degree in Mechanical Engineering is required.
- Experience: Minimum of 3-5 years of hands-on experience in the on-site execution and installation of industrial HVAC and/or cleanroom projects.
- Technical Knowledge: Strong technical understanding of HVAC principles, systems, and equipment. In-depth knowledge of cleanroom construction methodologies and standards (ISO 14644). Proficient in reading and interpreting MEP (Mechanical, Electrical, Plumbing) drawings and technical documents.
- Skills: Excellent project execution and site management skills. Strong leadership and team supervision capabilities. Exceptional problem-solving and decision-making abilities. Proficient in MS Office Suite (Word, Excel, Outlook). Working knowledge of AutoCAD is highly desirable. Excellent verbal and written communication skills in English and Hindi.
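For illustration, a small worked example of the air changes per hour (ACPH) parameter mentioned under Testing & Commissioning Support; the room dimensions and airflow figure are assumptions, not project values.

```python
# ACPH = (supply airflow in CFM x 60 minutes) / room volume in cubic feet.
def air_changes_per_hour(supply_cfm: float, room_volume_ft3: float) -> float:
    return supply_cfm * 60 / room_volume_ft3

# Example: a 20 ft x 15 ft x 10 ft room served by 2,000 CFM of supply air.
volume = 20 * 15 * 10                         # 3,000 cubic feet
print(air_changes_per_hour(2000, volume))     # 40.0 air changes per hour
```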
Posted 10 hours ago
0 years
2 - 5 Lacs
Chennai
On-site
Hadoop Admin
Location: Bangalore / Pune / Chennai
Experience: 7-10 yrs

About Us
Capco, a Wipro company, is a global technology and management consulting firm. We were awarded Consultancy of the Year at the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities across the globe, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

JOB SUMMARY:
- Strong expertise in installing, configuring, and maintaining Hadoop ecosystem components (HDFS, YARN, Hive, HBase, Spark, Oozie, Zookeeper, etc.).
- Monitor cluster performance and capacity; troubleshoot and resolve issues proactively.
- Manage cluster upgrades, patching, and security updates with minimal downtime.
- Implement and maintain data security, authorization, and authentication (Kerberos, Ranger, or Sentry).
- Configure and manage Hadoop high availability, disaster recovery, and backup strategies.
- Automate cluster monitoring, alerting, and performance tuning.
- Work closely with data engineering teams to ensure smooth data pipeline operations.
- Perform root cause analysis for recurring system issues and implement permanent fixes.
- Develop and maintain system documentation, including runbooks and SOPs.
- Support integration with third-party tools (Sqoop, Flume, Kafka, Airflow, etc.).
- Participate in on-call rotation and incident management for production support.
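Illustrative sketch only: one way the capacity monitoring and alerting automation described above might be scripted, by parsing the output of `hdfs dfsadmin -report`. The usage threshold is an assumption.

```python
# Hedged sketch: flag high HDFS capacity usage from the dfsadmin report.
import re
import subprocess

ALERT_THRESHOLD_PCT = 80.0   # assumed capacity threshold

report = subprocess.run(
    ["hdfs", "dfsadmin", "-report"], capture_output=True, text=True, check=True
).stdout

match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
if match and float(match.group(1)) > ALERT_THRESHOLD_PCT:
    print(f"ALERT: HDFS usage at {match.group(1)}% exceeds {ALERT_THRESHOLD_PCT}%")
else:
    print("HDFS capacity within limits")
```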
Posted 10 hours ago
2.0 - 6.0 years
1 - 3 Lacs
India
On-site
Job Title: HVAC Engineer
Company: MK Clean Room Project Pvt. Ltd.
Location: Vadodara
Department: Engineering / Projects
Reports To: Project Manager / Technical Head
Experience Required: 2–6 years (or as applicable)
Job Type: Full-time

Job Summary:
As an HVAC Engineer at MK Clean Room Project Pvt. Ltd., you will be responsible for the design, installation, commissioning, and maintenance of Heating, Ventilation, and Air Conditioning systems tailored to cleanroom and controlled environment applications. You will work closely with the project, design, and execution teams to ensure HVAC systems meet industry and company standards for quality, safety, and performance.

Key Responsibilities:
- Design and plan HVAC systems for cleanroom projects in compliance with ISO and GMP standards.
- Prepare technical drawings, load calculations, and duct layouts using AutoCAD or Revit.
- Conduct site inspections, surveys, and feasibility studies.
- Supervise HVAC installation, testing, and commissioning on-site.
- Collaborate with procurement for HVAC equipment and material selection.
- Ensure HVAC systems maintain required temperature, humidity, airflow, and pressure differentials.
- Coordinate with project teams, electrical/plumbing engineers, and clients.
- Conduct troubleshooting and provide technical support during and after project execution.
- Maintain proper documentation including test reports, drawings, manuals, and BOQs.
- Adhere to all company safety protocols and industry regulations.

Key Skills & Qualifications:
- Bachelor’s Degree / Diploma in Mechanical Engineering or equivalent.
- Proven experience in HVAC systems, preferably in cleanroom or pharmaceutical industries.
- Strong knowledge of cleanroom classification, air balancing, filtration systems, and BMS integration.
- Proficiency in AutoCAD, MS Office, and HVAC calculation tools.
- Good communication and team coordination skills.
- Understanding of industry codes and standards (ASHRAE, ISHRAE, ISO 14644, etc.).

Preferred Qualifications:
- Certification in HVAC design or cleanroom engineering.
- Experience with AHUs, FFUs, VRV/VRF, chillers, and ductable systems.

Why Join MK Clean Room Project Pvt. Ltd.?
- Work on cutting-edge cleanroom projects in pharma, biotech, and electronics.
- Growth-oriented environment with training and upskilling opportunities.
- Be a part of a trusted brand delivering high-quality cleanroom solutions across India.

Job Types: Full-time, Permanent
Pay: ₹15,000.00 - ₹30,000.00 per month
Schedule: Day shift
Supplemental Pay: Yearly bonus
Experience: HVAC engineer: 1 year (Required)
Willingness to travel: 75% (Required)
Work Location: In person
Expected Start Date: 08/10/2025
Posted 10 hours ago
5.0 years
2 - 5 Lacs
Noida
On-site
1. Expertise in Model Development: Develop and fine-tune machine learning models. Ensure models are accurate, efficient, and tailored to our specific needs.
2. Quality Assurance: Rigorously evaluate models to identify and rectify errors. Maintain the integrity of our data-driven decisions through high performance and reliability.
3. Efficiency and Scalability: Streamline processes to reduce time-to-market. Scale AI initiatives and ML engineering skills effectively with dedicated model training and testing.
4. Production ML Monitoring & MLOps: Implement and maintain model monitoring pipelines to detect data drift, concept drift, and model performance degradation. Set up alerting and logging systems using tools such as Evidently AI, WhyLabs, Prometheus + Grafana, or cloud-native solutions (AWS SageMaker Model Monitor, GCP Vertex AI, Azure Monitor). Collaborate with teams to integrate monitoring into CI/CD pipelines, using platforms like Kubeflow, MLflow, Airflow, and Neptune.ai. Define and manage automated retraining triggers and model versioning strategies. Ensure observability and traceability across the ML lifecycle in production environments.

Qualifications:
- 5+ years of experience in the respective field.
- Proven experience in developing and fine-tuning machine learning models.
- Strong background in quality assurance and model testing.
- Ability to streamline processes and scale AI initiatives.
- Innovative mindset with a keen understanding of industry trends.

License/Certification/Registration
Job Location
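Illustrative sketch (not a prescribed implementation): a minimal drift check of the kind described in the MLOps responsibilities above, comparing a live feature sample against the training distribution and logging the result to MLflow. The data, feature, and significance threshold are assumptions.

```python
# Minimal data-drift check with a KS test, logged to MLflow.
import mlflow
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(42)
training_sample = rng.normal(0.0, 1.0, 5_000)   # stand-in for a training-time feature
live_sample = rng.normal(0.4, 1.0, 5_000)       # stand-in for recent production data

statistic, p_value = ks_2samp(training_sample, live_sample)
drift_detected = p_value < 0.05                 # assumed significance threshold

with mlflow.start_run(run_name="feature_drift_check"):
    mlflow.log_metric("ks_statistic", statistic)
    mlflow.log_metric("ks_p_value", p_value)
    mlflow.log_param("drift_detected", drift_detected)

if drift_detected:
    print("Drift detected - consider triggering a retraining pipeline")
```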
Posted 10 hours ago
6.0 - 9.0 years
0 Lacs
Andhra Pradesh
On-site
Data Lead Engineer
Exp: 6 - 9 years
1. Design, develop, and maintain efficient and reliable data pipelines using Java, Scala, Apache Spark and Confluent Cloud (Kafka, KStreams, ksqlDB, Schema Registry).
2. Leverage Apache Spark (Java/Scala) for large-scale data processing and transformation.
3. Experience with building, maintaining and debugging applications and data pipelines using Confluent Cloud (Kafka, KStreams, ksqlDB, Schema Registry).
4. Build and optimize data storage solutions using NoSQL databases such as ScyllaDB and/or Cassandra.
5. Experience with AWS services required for data engineering such as EMR, EMR Serverless, AWS Glue, CodeCommit, EC2, S3, etc.
6. Familiarity with workflow orchestration tools such as Airflow.
7. Experience with building and deploying applications using Docker, AWS ECS or AWS EKS.
8. Well versed in code management using tools like GitHub and CI/CD pipelines, and deployment of data pipelines on the AWS cloud.
9. Implement and manage search and analytics capabilities using AWS OpenSearch and/or Elasticsearch.
10. Collaborate with data scientists, analysts, and other engineers to understand data requirements and deliver effective solutions.
11. Monitor and troubleshoot data pipelines to ensure data quality and performance.
12. Implement data governance and data quality best practices.
13. Automate data ingestion, processing, and deployment processes.
14. Stay up-to-date with the latest data engineering trends and technologies.
15. Contribute to the design and architecture of our data platform on AWS.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
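Illustrative sketch only: a streaming pipeline of the kind described above, shown in PySpark for brevity (the role itself calls for Java/Scala Spark). The broker, topic, schema, and sink paths are assumptions.

```python
# Hedged sketch: consume JSON events from Kafka with Spark Structured Streaming and land them as Parquet.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import DoubleType, StringType, StructField, StructType

spark = SparkSession.builder.appName("kafka_events_stream").getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("amount", DoubleType()),
])

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker1:9092")   # assumed broker
         .option("subscribe", "payments")                     # assumed topic
         .load()
         .select(F.from_json(F.col("value").cast("string"), schema).alias("e"))
         .select("e.*")
)

query = (
    events.writeStream.format("parquet")
          .option("path", "s3a://data-lake/payments/")                       # assumed sink
          .option("checkpointLocation", "s3a://data-lake/_checkpoints/payments/")
          .start()
)
query.awaitTermination()
```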
Posted 10 hours ago
0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Join our transformation within the RMG Data Engineering team in Hyderabad and you will have the opportunity to work with a collaborative and dynamic network of technologists. Our teams play a pivotal role in implementing data products, creating impactful visualizations, and delivering seamless data solutions to downstream systems. At Macquarie, our advantage is bringing together diverse people and empowering them to shape all kinds of possibilities. We are a global financial services group operating in 31 markets and with 56 years of unbroken profitability. You’ll be part of a friendly and supportive team where everyone - no matter what role - contributes ideas and drives outcomes.

What role will you play?
In this role, you will apply your expertise in big data technologies and DevOps practices to design, develop, deploy, and support data assets throughout their lifecycle. You’ll establish templates, methods, and standards while managing deadlines, solving technical challenges, and improving processes. A growth mindset, passion for learning, and adaptability to innovative technologies will be essential to your success.

What You Offer
- Hands-on experience building, implementing, and enhancing enterprise-scale data platforms.
- Proficiency in big data with expertise in Spark, Python, Hive, SQL, Presto, storage formats like Parquet, and orchestration tools such as Apache Airflow.
- Knowledgeable in cloud environments (preferably AWS), with an understanding of EC2, S3, Linux, Docker, and Kubernetes.
- ETL Tools: Proficient in Talend, Apache Airflow, DBT, Informatica, and AWS Glue.
- Data Warehousing: Experience with Amazon Redshift and Athena.
- Kafka Development Engineering: Experience with developing and managing streaming data pipelines using Apache Kafka.

We love hearing from anyone inspired to build a better future with us; if you're excited about the role or working at Macquarie, we encourage you to apply.

What We Offer
Benefits
At Macquarie, you’re empowered to shape a career that’s rewarding in all the ways that matter most to you. Macquarie employees can access a wide range of benefits which, depending on eligibility criteria, include:
- 1 wellbeing leave day per year
- 26 weeks’ paid maternity leave or 20 weeks’ paid parental leave for primary caregivers, along with 12 days of paid transition leave upon return to work and 6 weeks’ paid leave for secondary caregivers
- Company-subsidised childcare services
- 2 days of paid volunteer leave and donation matching
- Benefits to support your physical, mental and financial wellbeing, including comprehensive medical and life insurance cover, the option to join a parental medical insurance plan, and virtual medical consultations extended to family members
- Access to our Employee Assistance Program, a robust behavioural health network with counselling and coaching services
- Access to a wide range of learning and development opportunities, including reimbursement for professional membership or subscription
- Hybrid and flexible working arrangements, dependent on role
- Reimbursement for work from home equipment

About Technology
Technology enables every aspect of Macquarie, for our people, our customers and our communities. We’re a global team that is passionate about accelerating the digital enterprise, connecting people and data, building platforms and applications and designing tomorrow’s technology solutions.

Our commitment to diversity, equity and inclusion
We are committed to fostering a diverse, equitable and inclusive workplace.
We encourage people from all backgrounds to apply and welcome all identities, including race, ethnicity, cultural identity, nationality, gender (including gender identity or expression), age, sexual orientation, marital or partnership status, parental, caregiving or family status, neurodiversity, religion or belief, disability, or socio-economic background. We welcome further discussions on how you can feel included and belong at Macquarie as you progress through our recruitment process. Our aim is to provide reasonable adjustments to individuals who may need support during the recruitment process and through working arrangements. If you require additional assistance, please let us know in the application process.
Posted 10 hours ago
6.0 years
0 Lacs
India
On-site
About YipitData: YipitData is the leading market research and analytics firm for the disruptive economy and recently raised up to $475M from The Carlyle Group at a valuation over $1B. We analyze billions of alternative data points every day to provide accurate, detailed insights on ridesharing, e-commerce marketplaces, payments, and more. Our on-demand insights team uses proprietary technology to identify, license, clean, and analyze the data many of the world’s largest investment funds and corporations depend on. For three years and counting, we have been recognized as one of Inc’s Best Workplaces . We are a fast-growing technology company backed by The Carlyle Group and Norwest Venture Partners. Our offices are located in NYC, Austin, Miami, Denver, Mountain View, Seattle , Hong Kong, Shanghai, Beijing, Guangzhou, and Singapore. We cultivate a people-centric culture focused on mastery, ownership, and transparency. Why You Should Apply NOW: You’ll be working with many strategic engineering leaders within the company. You’ll report directly to the Director of Data Engineering. You will help build out our Data Engineering team presence in India. You will work with a Global team. You’ll be challenged with a lot of big data problems. About The Role: We are seeking a highly skilled Senior Data Engineer to join our dynamic Data Engineering team. The ideal candidate possesses 6-8 years of data engineering experience. An excellent candidate should have a solid understanding of Spark and SQL, and have data pipeline experience. Hired individuals will play a crucial role in helping to build out our data engineering team to support our strategic pipelines and optimize for reliability, efficiency, and performance. Additionally, Data Engineering serves as the gold standard for all other YipitData analyst teams, building and maintaining the core pipelines and tooling that power our products. This high-impact, high-visibility team is instrumental to the success of our rapidly growing business. This is a unique opportunity to be the first hire in this team, with the potential to build and lead the team as their responsibilities expand. This is a hybrid opportunity based in India. During training and onboarding, we will expect several hours of overlap with US working hours. Afterward, standard IST working hours are permitted with the exception of 1-2 days per week, when you will join meetings with the US team. As Our Senior Data Engineer You Will: Report directly to the Senior Manager of Data Engineering, who will provide significant, hands-on training on cutting-edge data tools and techniques. Build and maintain end-to-end data pipelines. Help with setting best practices for our data modeling and pipeline builds. Create documentation, architecture diagrams, and other training materials. Become an expert at solving complex data pipeline issues using PySpark and SQL. Collaborate with stakeholders to incorporate business logic into our central pipelines. Deeply learn Databricks, Spark, and other ETL toolings developed internally. You Are Likely To Succeed If: You hold a Bachelor’s or Master’s degree in Computer Science, STEM, or a related technical discipline. You have 6+ years of experience as a Data Engineer or in other technical functions. You are excited about solving data challenges and learning new skills. You have a great understanding of working with data or building data pipelines. You are comfortable working with large-scale datasets using PySpark, Delta, and Databricks. 
You understand business needs and the rationale behind data transformations to ensure alignment with organizational goals and data strategy. You are eager to constantly learn new technologies. You are a self-starter who enjoys working collaboratively with stakeholders. You have exceptional verbal and written communication skills. Nice to have: Experience with Airflow, dbt, Snowflake, or equivalent. What We Offer: Our compensation package includes comprehensive benefits, perks, and a competitive salary: We care about your personal life and we mean it. We offer vacation time, parental leave, team events, learning reimbursement, and more! Your growth at YipitData is determined by the impact that you are making, not by tenure, unnecessary facetime, or office politics. Everyone at YipitData is empowered to learn, self-improve, and master their skills in an environment focused on ownership, respect, and trust. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, marital status, disability, gender, gender identity or expression, or veteran status. We are proud to be an equal-opportunity employer. Job Applicant Privacy Notice
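Illustrative sketch (not YipitData's actual pipeline): an incremental upsert with PySpark and Delta Lake, the kind of large-scale dataset work this role involves. The table paths and join key are assumptions.

```python
# Hedged sketch: merge a staged increment into a Delta table (upsert).
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("orders_upsert").getOrCreate()

updates = spark.read.parquet("/mnt/staging/orders_increment/")   # assumed staging path
target = DeltaTable.forPath(spark, "/mnt/gold/orders")           # assumed Delta table path

(
    target.alias("t")
    .merge(updates.alias("s"), "t.order_id = s.order_id")        # assumed join key
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```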
Posted 10 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Experience: 7-9 yrs
Must have experience in AWS services such as S3, Lambda, Airflow, Glue, Athena, Lake Formation, Step Functions, etc.
Experience in programming in Java and Python.
Experience performing data analysis (NOT data science) on AWS platforms.
Nice To Have
Experience with Big Data technologies (Teradata, Snowflake, Spark, Redshift, Kafka, etc.).
Experience with data management processes on AWS is a huge plus.
Experience in implementing complex ETL transformations on AWS using Glue.
Familiarity with relational database environments (Oracle, Teradata, etc.), leveraging databases, tables/views, stored procedures, agent jobs, etc.
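Illustrative sketch only: running an analysis query over S3 data through Athena with boto3, one of the AWS services listed above. The database, query, and results bucket are assumptions.

```python
# Hedged sketch: submit an Athena query and fetch results once it succeeds.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

execution = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) AS cnt FROM events GROUP BY status",   # assumed table
    QueryExecutionContext={"Database": "analytics"},                            # assumed database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},          # assumed bucket
)

query_id = execution["QueryExecutionId"]
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    print(rows)
```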
Posted 10 hours ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Immediate opening for Data Engineer @ Pune/Chennai location!!!
Experience: 5+ years
CTC:
ECTC:
NP: Immediate to 10 days (currently serving notice period)
Work Location: Chennai (Taramani) / Pune (Hinjewadi)
Work mode: Work from office
Shift: UK shift
JD: Data engineer, Python, PySpark, Airflow, Hive, SQL, Trino
Interested candidates, kindly share your resume with srinivasan.jayaraman@servion.com
Posted 12 hours ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description:
Experience: 8 to 12 years
Location: Hyderabad
Immediate to 15 days preferred
Must have experience in Python & Apache Airflow.

We are seeking an Apache Airflow Senior Python Tech Lead with strong expertise in Python and hands-on experience in Azure cloud technologies. The role will focus on migrating processes from the current 3rd-party RPA modules to Apache Airflow modules, ensuring seamless orchestration and automation of workflows. The ideal candidate will bring technical proficiency, problem-solving skills, and a deep understanding of workflow automation, along with a strong grasp of North America insurance industry processes.

Responsibilities:
- Design, develop, and implement workflows using Apache Airflow to replace the current 3rd-party RPA modules.
- Build and optimize Python scripts to enable automation and integration with Apache Airflow pipelines.
- Leverage Azure cloud services for deployment, monitoring, and scaling of Airflow workflows.
- Collaborate with cross-functional teams to understand existing processes, dependencies, and business objectives.
- Lead the migration of critical processes such as Auto, Package, Work Order Processing, and Policy Renewals within CI, Major Accounts, and Middle Market LOBs.
- Ensure the accuracy, efficiency, and scalability of new workflows post-migration.
- Perform unit testing, troubleshooting, and performance tuning for workflows and scripts.
- Document workflows, configurations, and technical details to maintain clear and comprehensive project records.
- Mentor junior developers and share best practices for Apache Airflow and Python development.

Technical Skills:
- Strong expertise in Apache Airflow workflow orchestration.
- Proficiency in Python programming for scripting, data transformation, and process automation.
- Hands-on experience in Azure cloud technologies (e.g., Azure Data Factory, Azure DevOps, Azure Storage).
- Proven experience in migrating and automating processes from legacy systems or RPA modules.
- Strong analytical and problem-solving skills with attention to detail.
- Excellent communication and documentation skills.

Process Skills:
- Experience working with Auto, Package, Work Order Processing, and Policy Renewals.
- Familiarity with Commercial Insurance (CI), Major Accounts, and Middle Market LOBs in the North America insurance industry.
- Understanding of RPA processes and architecture.

Thanks & Regards,
Nandhini Ramesh
Technical Recruiter & Talent Acquisition
nandhini@vysystems.com
Contact: 8925894353
Posted 13 hours ago
7.0 years
0 Lacs
India
On-site
Company Description ThreatXIntel is a startup cyber security company focused on protecting businesses and organizations from cyber threats. We provide tailored, affordable solutions in cloud security, web and mobile security testing, cloud security assessment, and DevSecOps. Our proactive approach to security involves continuous monitoring and testing to identify vulnerabilities before they are exploited. Role Description We are seeking an experienced Snowflake Data Engineer/Architect to join our team on a contract basis. You will work closely with enterprise clients, delivering robust data engineering solutions, optimizing performance, and ensuring high-quality architecture on Snowflake. This is a client-facing role requiring excellent communication and technical depth. Key Responsibilities: Design, develop, and optimize Snowflake data warehouse solutions Work with DBT, Airflow, Fivetran, Python, and AWS in modern data stack environments Perform performance tuning and troubleshoot complex data issues Engage directly with clients to gather requirements and present solutions Collaborate with cross-functional teams on architecture and data pipelines Requirements: 7+ years of experience in data engineering 4+ years of hands-on experience in Snowflake projects Proficient in Snowflake architecture, best practices, and ecosystem tools Strong experience with Python and orchestration tools (Airflow, DBT) Excellent client communication and problem-solving skills Snowflake certification is a plus (we can sponsor if needed)
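Illustrative sketch (not the client's environment): querying Snowflake from Python with the official connector, the kind of hands-on work described above. All connection parameters are placeholders.

```python
# Hedged sketch: open a Snowflake session and run a simple query.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.us-east-1",     # placeholder account locator
    user="ANALYTICS_SVC",            # placeholder service user
    password="***",                  # use a secrets manager in practice
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_VERSION(), CURRENT_WAREHOUSE()")
    print(cur.fetchone())
finally:
    conn.close()
```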
Posted 15 hours ago
7.0 years
0 Lacs
India
Remote
Company Description ThreatXIntel is a startup cyber security company focused on protecting businesses and organizations from cyber threats. We offer customized, affordable solutions, including cloud security, web and mobile security testing, and DevSecOps, to meet the specific needs of our clients. Our proactive approach ensures continuous monitoring and testing to identify vulnerabilities before they can be exploited. Role Description We’re hiring freelance Snowflake experts to support ongoing enterprise projects. Ideal candidates will be technically strong, independent, and able to contribute to performance tuning, implementation, and solutioning work in a modern cloud data stack. This is a client-facing, freelance role with flexible work-from-home arrangements. Responsibilities: Build and maintain Snowflake data warehouses Optimize queries and warehouse performance Develop and automate pipelines using DBT, Airflow, Fivetran, and Python Communicate directly with US-based clients and deliver high-quality solutions Work closely with internal stakeholders and cross-functional developers Skills & Experience Required: 7+ years of total experience in data engineering Minimum 4 years on Snowflake implementations Deep understanding of Snowflake optimization and ecosystem tools Experience with AWS data services and automation scripting Self-driven and capable of working independently in a remote setup Certification is a plus (can be sponsored)
Posted 15 hours ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Role Overview
We are looking for a Senior Data Engineer who will play a key role in designing, building, and maintaining data ingestion frameworks and scalable data pipelines. The ideal candidate should have strong expertise in platform architecture, data modeling, and cloud-based data solutions to support real-time and batch processing needs.

What you'll be doing:
- Design, develop, and optimise DBT models to support scalable data transformations.
- Architect and implement modern ELT pipelines using DBT and orchestration tools like Apache Airflow and Prefect.
- Lead performance tuning and query optimization for DBT models running on Snowflake, Redshift, or Databricks.
- Integrate DBT workflows and pipelines with AWS services (S3, Lambda, Step Functions, RDS, Glue) and event-driven architectures.
- Implement robust data ingestion processes from multiple sources, including manufacturing execution systems (MES), manufacturing stations, and web applications.
- Manage and monitor orchestration tools (Airflow, Prefect) for automated DBT model execution.
- Implement CI/CD best practices for DBT, ensuring version control, automated testing, and deployment workflows.
- Troubleshoot data pipeline issues and provide solutions for optimizing cost and performance.

What you'll have:
- 5+ years of hands-on experience with DBT, including model design, testing, and performance tuning.
- 5+ years of strong SQL expertise with experience in analytical query optimization and database performance tuning.
- 5+ years of programming experience, especially in building custom DBT macros, scripts and APIs, and working with AWS services using boto3.
- 3+ years of experience with orchestration tools like Apache Airflow and Prefect for scheduling DBT jobs.
- Hands-on experience with modern cloud data platforms like Snowflake, Redshift, Databricks, or BigQuery.
- Experience with AWS data services (S3, Lambda, Step Functions, RDS, SQS, CloudWatch).
- Familiarity with serverless architectures and infrastructure as code (CloudFormation/Terraform).
- Ability to effectively communicate timelines and deliver MVPs set for the sprint.
- Strong analytical and problem-solving skills, with the ability to work across cross-functional teams.

Nice to haves:
- Experience in hardware manufacturing data processing.
- Contributions to open-source data engineering tools.
- Knowledge of Tableau or other BI tools for data visualization.
- Understanding of front-end development (React, JavaScript, or similar) to collaborate effectively with UI teams or build internal tools for data visualization.
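Illustrative sketch only: one way the DBT orchestration described above could be scheduled with Prefect, shelling out to the dbt CLI. The flow name, model selector, and retry settings are assumptions.

```python
# Hedged sketch: a Prefect flow that runs and tests dbt models on a schedule.
import subprocess

from prefect import flow, task


@task(retries=2, retry_delay_seconds=60)
def dbt_command(args: list[str]) -> None:
    # Shells out to the dbt CLI; assumes a dbt project and profile are already configured.
    subprocess.run(["dbt", *args], check=True)


@flow(name="nightly-dbt")
def nightly_dbt() -> None:
    dbt_command(["run", "--select", "staging+"])    # assumed selector
    dbt_command(["test", "--select", "staging+"])


if __name__ == "__main__":
    nightly_dbt()
```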
Posted 17 hours ago
7.0 years
0 Lacs
Greater Hyderabad Area
On-site
Role: Data Platform Engineer Location: Hyderabad (Hybrid) - 7+ years of experience - immediate joiner - For a Healthcare Data Analytics Client Hands-on skills required for this position: Proficient with Python Programming, Scala, SQL, and Redshift AWS ecosystem Infrastructure as a code using Terraform Experience leading and handling a team of Data Engineers Short Description: Technical Skills - Strong programming skills in Python or Scala. - Hands-on with SQL - Deep expertise in AWS cloud platforms and familiarity with a range of AWS Analytics Ecosystem components, including but not limited to Apache Airflow, Apache Spark, S3, Glue, Kafka, AWS Athena, Lambda, Redshift, Lake Formation, AWS Batch, ECS - Fargate, Kinesis, Flink, DynamoDB, and SageMaker. - Advanced proficiency with IaC tool Terraform. Apply with resume: Kriti.singh@sidinformation.com
Posted 17 hours ago
6.0 years
0 Lacs
Gurugram, Haryana, India
Remote
About The Role We’re looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You’ll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems. Key Responsibilities Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration. Architect and build production-grade Python applications using frameworks such as FastAPI or Flask. Implement gRPC services, event-driven systems (Kafka, PubSub), and CI/CD pipelines for scalable deployment. Collaborate with cross-functional teams to frame business problems as ML use-cases — regression, classification, ranking, forecasting, and anomaly detection. Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring. Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines. Ensure model evaluation, A/B testing, and hyperparameter tuning is done rigorously for production systems. Must-Have Skills Hands-on experience with GenAI/LLM-based applications – RAG, Evals, vector stores, embeddings. Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures. Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure). Proficient in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability. Proven experience in batch data pipelines and training/inference orchestration. Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
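Illustrative sketch (retrieval and generation are stubbed placeholders): a FastAPI service exposing a RAG-style endpoint of the kind described above. A real system would back retrieve_chunks with a vector store and generate_answer with an LLM call.

```python
# Minimal FastAPI sketch of a RAG-style question-answering endpoint.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="rag-service")


class Query(BaseModel):
    question: str
    top_k: int = 3


def retrieve_chunks(question: str, top_k: int) -> list[str]:
    # Placeholder: a real system would embed the question and query a vector store.
    return [f"context chunk {i} for: {question}" for i in range(top_k)]


def generate_answer(question: str, context: list[str]) -> str:
    # Placeholder: a real system would call an LLM with the retrieved context in the prompt.
    return f"Answer to '{question}' grounded in {len(context)} retrieved chunks."


@app.post("/answer")
def answer(query: Query) -> dict:
    chunks = retrieve_chunks(query.question, query.top_k)
    return {"answer": generate_answer(query.question, chunks), "sources": chunks}
```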
Posted 19 hours ago
6.0 years
0 Lacs
Greater Kolkata Area
On-site
Job Summary We are seeking a highly experienced and results-driven Senior ETL Developer with over 6 years of professional experience in data integration, transformation, and analytics across enterprise-grade data platforms. This role requires deep expertise in ETL development, strong familiarity with cloud-based data solutions, and the ability to manage large-scale data operations. The candidate should be capable of working across complex data environments, including structured and unstructured datasets, and demonstrate fluency in handling both traditional and modern cloud data ecosystems. The ideal candidate must have strong hands-on experience with ETL tools, advanced SQL and Python scripting, big data processing, and cloud-based data services, particularly within the AWS ecosystem. This position will play a key role in the design, development, and optimization of scalable data pipelines and contribute to enterprise-level data engineering solutions, while supporting analytical and reporting needs in both Application Development (AD) and Application Maintenance Support (AMS) environments. Key Responsibilities Design, develop, and maintain efficient and scalable ETL pipelines using modern data tools and platforms, focusing on extraction, transformation, and loading of large datasets from multiple sources. Work closely with data architects, analysts, and other stakeholders to understand business data requirements and translate them into robust technical ETL solutions. Implement and optimize data loading, transformation, cleansing, and integration strategies to ensure high performance and quality in downstream applications. Develop and manage cloud-based data platforms, particularly within the AWS ecosystem, including services such as Amazon S3, EMR, MSK, and SageMaker. Collaborate with cross-functional teams to integrate data from various databases such as Snowflake, Oracle, Amazon RDS (Aurora, Postgres), DB2, SQL Server, and Cassandra. Employ scripting languages like SQL, PL/SQL, Python, and Unix shell commands to automate data transformations and monitoring processes. Leverage big data technologies such as Apache Spark and Sqoop to handle large-scale data workloads and enhance data processing capabilities. Support and contribute to data modeling initiatives using tools like Erwin and Oracle Data Modeler; exposure to Archimate will be considered an advantage. Work with scheduling and orchestration tools including Autosys, SFTP, and preferably Apache Airflow to manage ETL workflows efficiently. Troubleshoot and resolve data inconsistencies, data load failures, and performance issues across the data pipeline and cloud infrastructure. Follow best practices in data governance, metadata management, version control, and data quality frameworks to ensure compliance and consistency. Maintain documentation of ETL processes, data flows, and integration points for knowledge sharing and auditing purposes. Participate in code reviews, knowledge transfer sessions, and mentoring junior developers in ETL practices and cloud integrations. Stay up to date with evolving technologies and trends in data engineering, cloud services, and big data to proactively propose Technical Skills : ETL Tools : Experience with Talend is preferred (especially in AD and AMS functions), although it may be phased out in the Databases : Expertise in Snowflake, Oracle, Amazon RDS (Aurora, Postgres), DB2, SQL Server, and Cassandra. 
Big Data & Cloud: Hands-on with Apache Sqoop, AWS S3, Hue, AWS CLI, Amazon EMR, Amazon MSK, Amazon SageMaker, and Apache Spark.
Scripting: Strong skills in SQL, PL/SQL, and Python; knowledge of the Unix command line is essential; R programming is optional but considered a plus.
Scheduling Tools: Working knowledge of Autosys, SFTP, and preferably Apache Airflow (training can be provided).
Data Modeling Tools: Proficiency in Erwin and Oracle Data Modeler; familiarity with Archimate is preferred.
Notes
Power BI knowledge is relevant only in shared AD roles and not required for dedicated ETL and AWS roles or AMS responsibilities.
The role requires strong communication skills to collaborate with technical and non-technical stakeholders, as well as a proactive mindset to identify and resolve data challenges.
Must demonstrate the ability to adapt in fast-paced and changing environments while maintaining attention to detail and delivery quality.
Exposure to enterprise data warehouse modernization, cloud migration projects, or real-time streaming data pipelines is considered highly advantageous. (ref:hirist.tech)
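To illustrate the kind of AWS-centric ETL work described above, here is a minimal PySpark batch-job sketch that reads raw CSV from S3, cleans it, and writes partitioned Parquet back to S3. The bucket names, paths, and column names are hypothetical, and the job assumes a cluster (for example EMR) with S3 connectivity already configured.

```python
# Illustrative PySpark batch ETL job: read raw CSV from S3, apply simple
# cleansing/typing, and write partitioned Parquet back to S3.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders_daily_etl")
    .getOrCreate()
)

# Hypothetical raw landing path.
raw = (
    spark.read
    .option("header", True)
    .csv("s3://example-raw-bucket/orders/2024-01-01/")
)

cleaned = (
    raw
    .dropDuplicates(["order_id"])                      # remove duplicate orders
    .withColumn("order_ts", F.to_timestamp("order_ts"))  # string -> timestamp
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount") > 0)                       # basic data-quality rule
    .withColumn("order_date", F.to_date("order_ts"))   # partition column
)

# Hypothetical curated zone, partitioned by date for downstream queries.
(
    cleaned.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3://example-curated-bucket/orders/")
)
```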
Posted 20 hours ago
11.0 years
0 Lacs
India
On-site
Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (17,500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!
Job Description
REQUIREMENTS:
Total experience of 11+ years.
Strong working experience in machine learning, with a proven track record of delivering impactful solutions in NLP, machine vision, and AI.
Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., Pandas, NumPy).
Strong understanding of statistical concepts and techniques, and experience applying them to real-world problems.
Strong working experience in AWS.
Strong programming skills in Python, and proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX, as well as machine learning libraries such as scikit-learn.
Familiarity with MLOps tools such as MLflow, Kubeflow, and Airflow.
Proficient experience with Generative AI approaches such as GANs, VAEs, prompt engineering, and retrieval-augmented generation (RAG), and the ability to apply them to real-world problems.
Hands-on skills in data engineering and building robust ML pipelines.
Excellent communication skills and the ability to collaborate effectively with cross-functional teams.
RESPONSIBILITIES:
Writing and reviewing great quality code.
Understanding the client's business use cases and technical requirements and converting them into a technical design that elegantly meets the requirements.
Mapping decisions to requirements and translating them for developers.
Identifying different solutions and narrowing down the best option that meets the client's requirements.
Defining guidelines and benchmarks for NFR considerations during project implementation.
Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers.
Reviewing architecture and design on aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed.
Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it.
Understanding and relating technology integration scenarios and applying these learnings in projects.
Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and being able to justify the decisions taken.
Carrying out POCs to make sure that the suggested design/technologies meet the requirements.
Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.
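As a small illustration of the MLOps tooling named above, the following sketch trains a scikit-learn model and records its parameters, a metric, and the model artifact with MLflow. The dataset and hyperparameter values are illustrative only.

```python
# Minimal experiment-tracking sketch with scikit-learn and MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

with mlflow.start_run(run_name="rf_baseline"):
    params = {"n_estimators": 200, "max_depth": 5}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    # Log hyperparameters, a test metric, and the fitted model artifact.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```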
Posted 20 hours ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
Join SASA!
We are committed to creating a welcoming and supportive environment for all employees. Our goal is to foster a culture of respect, open communication, and collaboration. In our dynamic and challenging work environment, you will have the opportunity to engage in cutting-edge projects for a diverse clientele. As part of our team of experts, you will work with individuals who are dedicated to excellence, innovation, and collaboration. We value employees who take ownership of their work, pursue continuous learning and development, and strive for both professional and personal growth. If you are seeking a challenging and rewarding career in software development and consulting, we invite you to explore our current job openings and apply today. We are excited to hear from you!
Current Openings:
- Senior PHP Developer
- JAVA Springboot Developer
- React Native App Developer
- Data Science & Architect - Urgent*
- PHP Developer (Symfony, Laravel, React JS, Angular JS)
- Team Leader (PHP & JS Framework)
- Business Development Executive
- Web Designer
- Mobile Apps Developers (Flutter & React Native)
- AR & VR Developer
Data Science & Architect
Experience: Minimum 3 years
Key skills/experience required:
- Hands-on experience in technologies such as Python, R, Tableau, SQL, Apache Hadoop, AWS, Microsoft Azure, Google Cloud Platform (GCP), Apache Kafka, and Airflow
- Knowledge of SaaS application architecture
- Proficiency with Git, JIRA, and CI/CD
- Fluent in English for effective communication and participation in client meetings
Posted 20 hours ago
34.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Skills: Python, PyTorch, TensorFlow, Time Series Analysis, Predictive Modeling, SQL
Location: Onsite - Bangalore, Karnataka
Company: VectorStack
Open Positions: 4 (2 Mid-Level with 3-4 years of experience, 2 Junior-Level with 1-2 years of experience)
Employment Type: Full-time
Joining: Immediate (Urgent Requirement)
About VectorStack
VectorStack is a technology solutions provider that drives digital transformation and enhances business performance. With domain expertise across Tech Advancement, Design Innovation, Product Evolution, and Business Transformation, we deliver tailored strategies that yield measurable results. We serve sectors such as Retail Tech, Ad Tech, FinTech, and EdTech, helping businesses unlock their full potential. Learn more at www.vectorstack.co.
Role Overview
We are urgently seeking dynamic and skilled AI/ML Engineers to join our growing team in Bangalore. You will work on a variety of AI and machine learning projects, ranging from building intelligent systems to deploying scalable models in real-time applications.
Responsibilities
Develop, train, and deploy machine learning models for real-world applications
Handle data preprocessing, feature engineering, and model optimisation
Collaborate with cross-functional teams (Data Science, Engineering, Product)
Maintain and monitor model performance in production environments
Document model workflows, experiments, and outcomes clearly
Requirements
Mid-Level AI/ML Engineer (3-4 years of experience):
Strong proficiency in Python and ML libraries (scikit-learn, TensorFlow, PyTorch, XGBoost)
Experience deploying ML models via APIs (Flask, FastAPI)
Working knowledge of cloud platforms (AWS, Azure, or GCP) and CI/CD for ML (MLOps)
Familiarity with version control (Git), Docker, and data pipeline frameworks
Prior experience handling end-to-end AI/ML projects
Junior AI/ML Engineer (1-2 years of experience):
Hands-on experience in building and training ML models
Good understanding of data handling, model evaluation, and algorithms
Familiar with Python, NumPy, pandas, and scikit-learn
Exposure to deep learning frameworks (TensorFlow or PyTorch) is a plus
Willingness to learn and grow within a fast-paced, collaborative team
Preferred Skills (Good to Have)
Experience with Computer Vision, NLP, or Recommendation Systems
Familiarity with tools like MLflow, Airflow, or Kubernetes
Public GitHub or Kaggle profile showing relevant projects
What We Offer
Competitive compensation
Opportunity to work on cutting-edge AI projects
Career growth in a tech-driven environment
Health and wellness benefits
A collaborative team culture based out of Bangalore
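As a taste of the Python/PyTorch and time-series skills listed above, here is a minimal sketch that trains a small feed-forward network to predict the next value of a synthetic series from a sliding window. The data, window size, and architecture are purely illustrative.

```python
# Minimal PyTorch sketch: one-step-ahead prediction on a synthetic series.
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic noisy sine wave; each target depends on the previous 8 values.
series = torch.sin(torch.linspace(0, 50, 1_000)) + 0.1 * torch.randn(1_000)
window = 8
X = torch.stack([series[i : i + window] for i in range(len(series) - window)])
y = series[window:].unsqueeze(1)

model = nn.Sequential(nn.Linear(window, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Simple full-batch training loop.
for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

print(f"final training MSE: {loss.item():.4f}")
```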
Posted 21 hours ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Overview
TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place – one that benefits lives, communities, and the planet.
Job Title: Specialty Development Practitioner
Location: Chennai
Work Type: Hybrid
Position Description
At the client's Credit Company, we are modernizing our enterprise data warehouse in Google Cloud to enhance data, analytics, and AI/ML capabilities, improve customer experience, ensure regulatory compliance, and boost operational efficiencies. As a GCP Data Engineer, you will integrate data from various sources into novel data products. You will build upon existing analytical data, including merging historical data from legacy platforms with data ingested from new platforms. You will also analyze and manipulate large datasets, activating data assets to enable enterprise platforms and analytics within GCP. You will design and implement the transformation and modernization on GCP, creating scalable data pipelines that land data from source applications, integrate it into subject areas, and build data marts and products for analytics solutions. You will also conduct deep-dive analysis of Current State Receivables and Originations data in our data warehouse, performing impact analysis related to the client's Credit North America modernization and providing implementation solutions. Moreover, you will partner closely with our AI, data science, and product teams, developing creative solutions that build the future for the client's Credit. Experience with large-scale solutions and operationalizing data warehouses, data lakes, and analytics platforms on Google Cloud Platform or other cloud environments is a must. We are looking for candidates with a broad set of analytical and technology skills across these areas who can demonstrate an ability to design the right solutions with the appropriate combination of GCP and third-party technologies for deployment on Google Cloud Platform.
Skills Required
BigQuery, Dataflow, Dataform, Data Fusion, Dataproc, Cloud Composer, Airflow, Cloud SQL, Compute Engine, Google Cloud Platform
Experience Required
GCP Data Engineer certified.
Successfully designed and implemented data warehouses and ETL processes for over five years, delivering high-quality data solutions.
5+ years of complex SQL development experience.
2+ years of experience with programming languages such as Python, Java, or Apache Beam.
Experienced cloud engineer with 3+ years of GCP expertise, specializing in managing cloud infrastructure and applications into production-scale solutions.
Skills Preferred
BigQuery, Dataflow, Dataproc, Data Fusion, Terraform, Tekton, Cloud SQL, Airflow, Postgres, PySpark, Python, APIs, Cloud Build, App Engine, Apache Kafka, Pub/Sub, AI/ML, Kubernetes
Experience Preferred
In-depth understanding of GCP's underlying architecture and hands-on experience with crucial GCP services, especially those related to data processing (batch/real time), leveraging Terraform, BigQuery, Dataflow, Pub/Sub, Dataform, Astronomer, Data Fusion, Dataproc, PySpark, Cloud Composer/Airflow, Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, and App Engine, alongside storage including Cloud Storage.
DevOps tools such as Tekton, GitHub, Terraform, and Docker.
Expert in designing, optimizing, and troubleshooting complex data pipelines.
Experience developing with microservice architecture from a container orchestration framework.
Experience in designing pipelines and architectures for data processing.
Passion and self-motivation to develop, experiment with, and implement state-of-the-art data engineering methods and techniques.
Self-directed, works independently with minimal supervision, and adapts to ambiguous environments.
Evidence of a proactive problem-solving mindset and willingness to take the initiative.
Strong prioritization, collaboration, and coordination skills, and the ability to simplify and communicate complex ideas with cross-functional teams and all levels of management.
Proven ability to juggle multiple responsibilities and competing demands while maintaining a high level of productivity.
Data engineering or development experience gained in a regulated financial environment.
Experience in coaching and mentoring Data Engineers.
Project management tools such as Atlassian JIRA.
Experience working in an implementation team from concept to operations, providing deep technical subject matter expertise for successful deployment.
Experience with data security, governance, and compliance best practices in the cloud.
Experience with AI solutions or platforms that support AI solutions.
Experience using data science concepts on production datasets to generate insights.
Experience Range
5+ years
Education Required
Bachelor's Degree
TekWissen® Group is an equal opportunity employer supporting workforce diversity.
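To make the Cloud Composer/Airflow-on-GCP stack listed above concrete, here is a minimal DAG sketch that runs a BigQuery transformation with BigQueryInsertJobOperator. It assumes Airflow 2.x with the Google provider installed; the project, dataset, and table names are hypothetical.

```python
# Sketch of a Cloud Composer (Airflow) DAG that builds a daily BigQuery mart.
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

with DAG(
    dag_id="receivables_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    build_mart = BigQueryInsertJobOperator(
        task_id="build_receivables_mart",
        configuration={
            "query": {
                # Hypothetical dataset/table names for illustration only.
                "query": """
                    CREATE OR REPLACE TABLE analytics.receivables_daily AS
                    SELECT account_id,
                           SUM(amount) AS total_due,
                           CURRENT_DATE() AS load_date
                    FROM raw.receivables
                    GROUP BY account_id
                """,
                "useLegacySql": False,
            }
        },
    )
```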
Posted 21 hours ago
The airflow job market in India is rapidly growing as more companies are adopting data pipelines and workflow automation. Airflow, an open-source platform, is widely used for orchestrating complex computational workflows and data processing pipelines. Job seekers with expertise in airflow can find lucrative opportunities in various industries such as technology, e-commerce, finance, and more.
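For job seekers new to the platform, a minimal Airflow DAG looks like the sketch below: three placeholder tasks wired into an extract >> transform >> load chain. It assumes Airflow 2.x; the task bodies are stand-ins for real pipeline logic.

```python
# A minimal Airflow DAG showing the classic extract >> transform >> load pattern.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull rows from a source system")

def transform():
    print("clean and reshape the extracted data")

def load():
    print("write results to the warehouse")

with DAG(
    dag_id="example_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    # Define task dependencies: extract runs first, load runs last.
    t_extract >> t_transform >> t_load
```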
The average salary range for airflow professionals in India varies based on experience level:
- Entry-level: INR 6-8 lakhs per annum
- Mid-level: INR 10-15 lakhs per annum
- Experienced: INR 18-25 lakhs per annum
In the field of airflow, a typical career path may progress as follows:
- Junior Airflow Developer
- Airflow Developer
- Senior Airflow Developer
- Airflow Tech Lead
In addition to airflow expertise, professionals in this field are often expected to have or develop skills in:
- Python programming
- ETL concepts
- Database management (SQL)
- Cloud platforms (AWS, GCP)
- Data warehousing
As you explore job opportunities in the airflow domain in India, remember to showcase your expertise, skills, and experience confidently during interviews. Prepare well, stay updated with the latest trends in airflow, and demonstrate your problem-solving abilities to stand out in the competitive job market. Good luck!