
14352 Orchestration Jobs - Page 5

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business.

Job Description
This role requires working from our local Hyderabad office 2-3x a week.

ABOUT THE ROLE:
We are seeking a talented individual to join our team as a Java Backend Developer. The Java Backend Developer is self-driven and has a holistic, big-picture mindset in developing enterprise solutions. In this role, you will be responsible for designing modern domain-driven, event-driven microservices architecture hosted on public cloud platforms (AWS) and integrated with modern technologies such as Kafka for event management/streaming and Docker & Kubernetes for containerization. You will also be responsible for developing and supporting applications in Billing, Collections, and Payment Gateway within the commerce and club management platform. This includes assisting with the support of existing services as well as designing and implementing new business solutions and application deployments, using a thorough understanding of applicable technology, tools, and existing designs. The work involves collaborating with product teams, technical leads, business analysts, DBAs, infrastructure, and other cross-department teams to evaluate business needs and provide end-to-end technical solutions.

WHAT YOU'LL DO:
- Act as a Java Backend Developer in a development team; collaborate with other team members and contribute in all phases of the Software Development Life Cycle (SDLC)
- Apply Domain-Driven Design, Object-Oriented Design, and proven design patterns
- Do hands-on coding and development following secure coding guidelines and Test-Driven Development
- Work with QA teams to conduct integrated (application and database) stress testing, performance analysis, and tuning
- Support systems testing and migration of platforms and applications to production
- Make enhancements to existing web applications built using Java and Spring frameworks
- Ensure quality, security, and compliance requirements are met
- Act as an escalation point for application support and troubleshooting
- Bring a passion for hands-on coding, putting the customer first, and delivering an exceptional and reliable product to ABC Fitness's customers
- Take up tooling, integrate with other applications, pilot new-technology Proofs of Concept, and leverage the outcomes in ongoing solution initiatives
- Stay curious about where technology and the industry are going, and constantly strive to keep up through personal projects
- Strong analytical skills with high attention to detail and accuracy; expert in debugging issues and root cause analysis
- Strong organizational, multi-tasking, and prioritizing skills

WHAT YOU'LL NEED:
- Computer Science degree or equivalent work experience
- Work experience as a senior developer in a team environment
- 3+ years of application development and implementation experience
- 3+ years of Java experience
- 3+ years of Spring experience
- Work experience in an Agile development scrum team
- Work experience creating or maintaining RESTful or SOAP web services
- Work experience creating and maintaining cloud-enabled/cloud-native distributed applications
- Knowledge of API gateways and integration frameworks, containers, and container orchestration
- Knowledge of and experience with system application troubleshooting and quality assurance application testing
- A focus on delivering outcomes to customers, encompassing designing, coding, ensuring quality, and delivering changes to our customers

AND IT'S GREAT TO HAVE:
- 2+ years of SQL experience
- Billing or payment processing industry experience
- Knowledge and understanding of DevOps principles
- Knowledge and understanding of cloud computing, PaaS design principles, microservices, and containers
- Knowledge and understanding of application or software security, such as web application penetration testing, secure code review, and secure static code analysis
- Ability to lead multiple projects simultaneously
- Good verbal, written, and interpersonal communication skills

WHAT'S IN IT FOR YOU:
- Purpose-led company with a values-focused culture – Best Life, One Team, Growth Mindset
- Time off – competitive PTO plans with 15 days of earned accrued leave, 12 days of sick leave, and 12 days of casual leave per year
- 11 holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam
- Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parents-in-law, including EAP counseling
- Life insurance and personal accident insurance
- Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement
- Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16
- Support for working women with financial aid towards crèche facilities, ensuring a safe and nurturing environment for their little ones while they focus on their careers

We're committed to diversity and passion, and encourage you to apply, even if you don't demonstrate all the listed skillsets!

ABC'S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION:
ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person's diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com

ABOUT ABC:
ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes, whether a multi-location chain, franchise, or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably, offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!
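The role centers on event streaming with Kafka in a microservices architecture. As a rough, language-neutral illustration of the consume loop (the role itself is Java/Spring, e.g. spring-kafka), here is a minimal Kafka consumer sketch in Python using kafka-python; the broker address, topic, and consumer group are hypothetical.

```python
# Minimal Kafka consumer sketch (hypothetical broker/topic/group).
# The role's stack is Java/Spring; Python is used here only to illustrate
# the event-consumption pattern compactly.
# Requires: pip install kafka-python
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "billing-events",                    # hypothetical topic
    bootstrap_servers="broker:9092",     # hypothetical broker
    group_id="billing-service",          # hypothetical consumer group
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    enable_auto_commit=True,
)

for message in consumer:
    event = message.value
    # A real service would apply the payment/collection event here,
    # then emit any downstream events.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```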

Posted 1 day ago

Apply

4.0 years

0 Lacs

India

Remote

About Us
We're an innovative startup at the forefront of voice AI technology, creating solutions that transform how brands, creators, and developers engage with their audiences. Our platform enables natural-sounding voice synthesis and generation for applications ranging from content creation to customer service and beyond.

The Role
We're seeking an experienced Fullstack Developer with strong expertise in both Ruby on Rails and React to join our growing team. In this role, you'll work across our entire technology stack to build and enhance features that power our voice AI platform. The ideal candidate combines backend reliability with frontend finesse and can work independently in a remote environment.

What You'll Do
- Feature Development: Build end-to-end features that span both backend services and user interfaces
- Backend Development: Design and implement scalable Rails services, APIs, and database structures
- Frontend Implementation: Create responsive, intuitive React interfaces that showcase our voice AI capabilities
- API Integration: Develop and consume RESTful APIs that connect our frontend and backend systems
- Database Management: Design efficient database schemas and optimize queries for performance
- Testing & Quality: Implement comprehensive test suites for both backend and frontend code
- DevOps Collaboration: Work with our infrastructure team on deployment, monitoring, and scaling
- Performance Optimization: Identify and resolve bottlenecks across the full application stack
- Security Implementation: Ensure our applications follow security best practices
- Technical Documentation: Create and maintain documentation for our codebase and processes

Who You Are
- Full-Stack Thinker: You understand how all components fit together from database to user interface
- Problem Solver: You enjoy tackling complex technical challenges across the entire stack
- Quality Focused: You write clean, maintainable code and value comprehensive testing
- User-Centered: You understand the impact of your work on end-users and optimize their experience
- Self-Motivated: You can work independently and take ownership of your projects
- Collaborative: You communicate effectively with team members across different disciplines
- Adaptable: You thrive in a fast-paced startup environment where priorities may shift
- Continuous Learner: You stay updated with development trends and are eager to expand your skills

Requirements
- 4+ years of professional software development experience
- 3+ years of experience with Ruby on Rails in production environments
- 2+ years of experience with React in production environments
- Strong understanding of RESTful API design and implementation
- Experience with SQL databases (PostgreSQL preferred) and query optimization
- Proficiency in JavaScript, including ES6+ features
- Experience with frontend state management (Redux, Context API, etc.)
- Proficiency in writing automated tests for both backend (RSpec, Minitest) and frontend (Jest, RTL)
- Experience with version control systems (Git) and CI/CD pipelines
- Strong understanding of software design patterns and principles
- Excellent communication skills for remote collaboration

Nice to Have
- Experience with the Next.js framework
- Knowledge of TypeScript
- Experience with GraphQL
- Background working with audio processing or media-rich applications
- Experience with AI/ML systems integration
- Understanding of containerization (Docker) and orchestration (Kubernetes)
- Experience with AWS or other cloud platforms
- Knowledge of web accessibility standards (WCAG)

What We Offer
- Competitive salary and equity package
- The opportunity to work with cutting-edge AI technology
- Flexible remote work environment
- Professional development opportunities
- Collaborative culture with a team of talented engineers

How to Apply
Please send your resume, GitHub profile or examples of your work, and a brief introduction to [email address]. We look forward to hearing from you!

We're an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
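The posting stresses RESTful API design and automated testing across the stack (RSpec/Jest in the listed Rails/React stack). As a language-neutral sketch of the testing discipline, here is a minimal contract-style API test in Python using requests under pytest; the base URL, endpoint, and payload shape are all hypothetical.

```python
# Minimal REST contract-test sketch (hypothetical endpoint and payload).
# The role's stack is Rails/React with RSpec/Jest; Python is used here
# only to illustrate the pattern in a compact way.
# Requires: pip install requests pytest
import requests

BASE_URL = "https://api.example.com"  # hypothetical


def test_create_voice_clip():
    payload = {"text": "Hello, world", "voice_id": "demo"}  # hypothetical shape
    resp = requests.post(f"{BASE_URL}/v1/clips", json=payload, timeout=10)
    # Contract: created resource echoes the input and returns an id.
    assert resp.status_code == 201
    body = resp.json()
    assert "id" in body
    assert body["text"] == payload["text"]
```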

Posted 1 day ago

Apply

5.0 years

0 Lacs

India

Remote

Company Description
At MindWise, we are dedicated to revolutionizing the US healthcare industry through cutting-edge IT solutions. We provide tailored technology services that empower healthcare organizations to deliver better patient outcomes, enhance operational efficiency, and drive innovation. Our offerings include custom software development, healthcare IT consulting, and advanced healthcare analytics. Our team comprises experienced professionals who intimately understand the challenges and nuances of the US healthcare sector, and we believe in forging strong partnerships with our clients to deliver solutions that exceed expectations.

Role Description
This is a full-time remote role for an MLOps Engineer. The MLOps Engineer will be responsible for implementing and managing machine learning pipelines and infrastructure. Day-to-day tasks will include developing and maintaining scalable architecture for data processing and model deployment, collaborating with data scientists to optimize model performance, and ensuring the reliability and efficiency of machine learning solutions. The role also involves managing cloud-based resources and ensuring compliance with security and data protection standards.

Key Responsibilities
Cloud Infrastructure Management
o Design, deploy, and maintain AWS resources (EC2, ECS, Elastic Beanstalk, Lambda, VPC, VPN)
o Implement infrastructure-as-code using Terraform and Docker for consistent and reproducible deployments
o Optimize cost, performance, and security of compute and storage solutions
Database & Server Architecture
o Manage production-grade RDS MySQL instances with high availability, security, and backups
o Design scalable server-side infrastructure and ensure tight integration with Django-based services
Job Scheduling & Data Pipelines
o Build and monitor asynchronous task workflows with Celery, SQS, and SNS
o Manage data processing pipelines, ensuring timely and accurate job execution and messaging
Monitoring & Logging
o Set up and maintain CloudWatch dashboards, alarms, and centralized logging for proactive incident detection and resolution
Machine Learning & NLP Infrastructure
o Support deployment of NLP models on SageMaker and Bedrock, and manage interaction with vector databases and LLMs
o Assist in productionizing model endpoints, workflows, and monitoring pipelines
CI/CD & Automation
o Maintain and improve CI/CD pipelines using CircleCI
o Ensure automated testing, deployment, and rollback strategies are reliable and efficient
Healthcare Data Integration
o Support ingestion and transformation of clinical data using HL7 standards, Mirth Connect, and Java-based parsing tools
o Enforce data security and compliance best practices in handling PHI and other sensitive healthcare data

Qualifications
• 5+ years of experience in cloud infrastructure (preferably AWS)
• Strong command of Python/Django and container orchestration using Docker
• Proficiency with Terraform and infrastructure-as-code best practices
• Experience in setting up and managing messaging systems (Celery, SQS, SNS)
• Understanding of NLP or ML model operations in production environments
• Familiarity with LLM frameworks, vector databases, and SageMaker workflows
• Strong CI/CD skills (CircleCI preferred)
• Ability to work independently and collaboratively across engineering and data science teams

Nice to Have
• Exposure to HIPAA compliance, SOC2, or healthcare regulatory requirements
• Experience scaling systems in a startup or early-growth environment
• Contributions to open-source or community infrastructure projects
• Hands-on experience with HL7, Mirth Connect, and Java for healthcare interoperability is a big plus
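Since the role centers on asynchronous task workflows with Celery backed by SQS, here is a minimal sketch of how such a worker might be wired up. This is an illustration under stated assumptions, not MindWise's actual code: the app name, queue name, and task body are hypothetical, and a real deployment needs AWS credentials plus kombu's SQS extras installed.

```python
# Minimal Celery worker sketch with an SQS broker (hypothetical names).
# Requires: pip install "celery[sqs]"  and AWS credentials in the environment.
from celery import Celery

app = Celery(
    "clinical_pipeline",   # hypothetical app name
    broker="sqs://",       # Celery's SQS transport; region/creds come from the env
)
app.conf.task_default_queue = "clinical-jobs"  # hypothetical queue


@app.task(bind=True, max_retries=3)
def process_hl7_message(self, raw_message: str) -> dict:
    """Parse an HL7 v2 message and return a tiny summary (toy parser)."""
    try:
        segments = [seg.split("|") for seg in raw_message.strip().split("\n")]
        msh = segments[0]
        # MSH-9 (message type) sits at split index 8 of the MSH segment.
        return {"segments": len(segments), "type": msh[8] if len(msh) > 8 else "?"}
    except Exception as exc:
        # Retry with exponential backoff on transient failures.
        raise self.retry(exc=exc, countdown=2 ** self.request.retries)
```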

Posted 1 day ago

Apply

13.0 years

0 Lacs

India

On-site

Company Description
👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues. That's where you come in!

Job Description
REQUIREMENTS:
- Experience: 13+ years
- General experience with manufacturing and automation technologies and processes
- General experience with IoT technologies and protocols
- Strong expertise in cloud development, particularly with platforms like Azure, AWS, or Google Cloud
- Proficiency in designing and implementing microservices architecture
- Extensive experience with .NET Core for developing cloud-based applications
- Familiarity with containerization technologies such as Docker and orchestration tools like Kubernetes
- Knowledge of database technologies, both SQL and NoSQL
- Problem-solving skills: excellent analytical and problem-solving skills with the ability to troubleshoot complex issues
- Communication skills: strong verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders
- Team player: ability to work effectively in a collaborative team environment
- Adaptability: ability to quickly learn and adapt to new technologies and industry trends

RESPONSIBILITIES:
- Understanding the client's business use cases and technical requirements, and converting them into a technical design that elegantly meets those requirements
- Mapping decisions to requirements and translating them for developers
- Identifying different solutions and narrowing down the best option that meets the client's requirements
- Defining guidelines and benchmarks for NFR considerations during project implementation
- Writing and reviewing design documents explaining the overall architecture, framework, and high-level design of the application for the developers
- Reviewing architecture and design on aspects such as extensibility, scalability, security, design patterns, user experience, and NFRs, and ensuring that all relevant best practices are followed
- Developing and designing the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to materialize it
- Understanding and relating technology integration scenarios and applying these learnings in projects
- Resolving issues raised during code review through exhaustive, systematic analysis of the root cause, and justifying the decisions taken
- Carrying out POCs to make sure the suggested design/technologies meet the requirements

Qualifications
Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Do you want to make an impact on patient health around the world? Do you thrive in a fast-paced environment that brings together scientific, clinical, and commercial domains through engineering, data science, and analytics? Then join Pfizer Digital's Artificial Intelligence, Data, and Advanced Analytics organization (AIDA), where you can leverage cutting-edge technology to inform critical business decisions and improve customer experiences for our patients and physicians. Our collection of engineering, data science, and analytics professionals are at the forefront of Pfizer's transformation into a digitally driven organization that leverages data science and advanced analytics to change patients' lives.

The Data Science Industrialization team within Data Science Solutions and Initiatives leads the scaling of data and insights capabilities - critical drivers and enablers of Pfizer's digital transformation. As the AI and Data Science Production Deployment Lead, you will be a leader within the Data Science Industrialization team charged with driving the deployment of AI use cases and reusable components into full production. You will lead a global team and partner with cross-functional business stakeholders and Digital leaders to catalyze identification, design, iterative development, and continuous improvement of deployment processes to support production data science workflows and AI applications. Your team will define and implement standard processes for quality assurance, testing, data ops, model ops, and dev ops while also providing SDLC, support, platform engineering, and cloud engineering guidance as needed. In addition, you will be responsible for providing critical input into the AI ecosystem and platform strategy to promote self-service, drive productization and collaboration, and foster innovation.

Your team will be accountable to key Pfizer business functions (including Pfizer Biopharma, R&D, PGS, Oncology, and Enabling Functions) for production deployments of data science workflows and AI solutions that support major business objectives across all of Pfizer's core business units.
Role Responsibilities
- Lead deployment of production AI solutions and reusable software components with automated self-monitoring QA/QC processes
- Implement QA and testing, data ops, model ops, and DevOps for data science workflow products, industrialized workflow accelerators, and best practices in the production deployment of scalable AI/ML analytic insights products
- Enforce best practices for QA, testing, and SDLC production support to ensure reliability and availability of deployed software
- Act as a subject matter expert on production deployment processes for data science workflows, AI solutions, and reusable software components on cross-functional teams in bespoke organizational initiatives, providing thought leadership and execution support
- Direct QA and testing, data ops and model ops, DevOps, and platform and cloud engineering research; advance data science workflow CI/CD orchestration capabilities; drive improvements in automation and self-service production deployment processes; implement best practices; and contribute to the broader talent-building framework by facilitating related trainings
- Set a vision, prioritize workstreams, and provide day-to-day leadership, supervision, and mentorship for a global team with technical and functional expertise spanning QA and testing, DevOps, data science, and operations
- Coach direct reports to adopt best practices, improve technical skills, develop an innovative mindset, and achieve professional growth through technical and organizational thought leadership
- Communicate the value delivered through reusable AI components to end-user functions (e.g., Chief Marketing Office, Biopharma Commercial and Medical Affairs) and evangelize innovative ideas for reusable and scalable development approaches/frameworks/methodologies to enable new ways of developing and deploying AI solutions
- Partner with other leaders within the Data Science Industrialization team to define the team roadmap and drive impact by providing strategic and technical input, including platform evolution, vendor scans, and new capability development
- Partner with AI use case development teams to ensure successful integration of reusable components into production AI solutions
- Partner with the AIDA Platforms team on end-to-end capability integration between enterprise platforms and internally developed reusable component accelerators (API registry, ML library/workflow management, enterprise connectors)
- Partner with the AIDA Platforms team to define best practices for production deployment of reusable components, identifying and mitigating potential risks related to component performance, security, responsible AI, and resource utilization

Basic Qualifications
- Bachelor's degree in an AI, data science, or engineering-related area (Computer Engineering, Computer Science, Information Systems, Engineering, or a related discipline)
- 10+ years of work experience in data science, engineering, or operations across a diverse range of projects
- 2-3 years of hands-on experience leading data science or AI/ML deployment and operations teams
- Track record of managing stakeholder groups and effecting change
- Recognized by peers as an expert in production deployment and AI/ML ops, with deep expertise in CI/CD and DevOps for monitoring and orchestration of data science workflows, and hands-on development
- Understands how to synthesize facts and information from varied data sources, both new and pre-existing, into clear insights and perspectives that can be understood by business stakeholders
- Clearly articulates expectations, capabilities, and action plans; actively listens with others' frame of reference in mind; appropriately shares information with the team; favorably influences people without direct authority
- Clearly articulates the scope and deliverables of projects; breaks complex initiatives into detailed component parts and sequences actions appropriately; develops action plans and monitors progress independently; designs success criteria and uses them to track outcomes; engages with stakeholders throughout to ensure buy-in
- Manages projects with and through others; shares responsibility and credit; develops self and others through teamwork; is comfortable providing guidance and sharing expertise with others to help them develop their skills and perform at their best; helps others take appropriate risks; communicates frequently with team members, earning the respect and trust of the team
- Experience translating business priorities and vision into product/platform thinking, setting clear directives for a group of team members with diverse skillsets while providing functional and technical guidance and SME support
- Ability to manage projects end-to-end, from requirements gathering through implementation, hypercare, and development of support processes to ensure the longevity of solutions
- Demonstrated experience interfacing with internal and external teams to develop innovative data science solutions
- Strong understanding of the data science development lifecycle (CRISP-DM)
- Deep experience with CI/CD integration (e.g., GitHub, GitHub Actions, or Jenkins)
- Deep understanding of MLOps principles and tech stack (e.g., MLflow)
- Experience working in a cloud-based analytics ecosystem (AWS, Snowflake, etc.)
- Highly self-motivated to deliver both independently and with strong team collaboration
- Ability to creatively take on new challenges and work outside your comfort zone
- Strong English communication skills (written and verbal)

Preferred Qualifications
- Advanced degree in Data Science, Computer Engineering, Computer Science, Information Systems, or a related discipline
- Experience in solution architecture and design
- Experience in software/product engineering
- Strong hands-on skills for data and machine learning pipeline orchestration via the Dataiku (DSS 10+) platform
- Hands-on experience working in Agile teams, processes, and practices
- Pharma & Life Science commercial functional knowledge
- Pharma & Life Science commercial data literacy
- Experience with Dataiku Data Science Studio
- Ability to work non-traditional hours interacting with global teams spanning different regions (e.g., North America, Europe, Asia)

Pfizer is an equal opportunity employer and complies with all applicable equal employment opportunity legislation in each jurisdiction in which it operates.

Information & Business Tech
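Given the emphasis on MLOps tooling such as MLflow for tracking and productionizing data science workflows, a minimal tracking sketch in Python is shown below; the experiment name, model, and logged values are hypothetical illustrations, not Pfizer's pipeline.

```python
# Minimal MLflow tracking sketch (hypothetical experiment/values).
# Requires: pip install mlflow scikit-learn
import mlflow
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

mlflow.set_experiment("deployment-readiness-demo")  # hypothetical name

X, y = make_classification(n_samples=500, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    # Params and metrics become queryable across runs in the tracking UI.
    mlflow.log_param("n_estimators", 100)
    mlflow.log_metric("auc", auc)
    # Log the model as a versioned artifact for later promotion/deployment.
    mlflow.sklearn.log_model(model, "model")
```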

Posted 1 day ago

Apply

20.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Company Overview
With 80,000 customers across 150 countries, UKG is the largest U.S.-based private software company in the world. And we're only getting started. Ready to bring your bold ideas and collaborative mindset to an organization that still has so much more to build and achieve? Read on.

At UKG, you get more than just a job. You get to work with purpose. Our team of U Krewers are on a mission to inspire every organization to become a great place to work through our award-winning HR technology built for all. Here, we know that you're more than your work. That's why our benefits help you thrive personally and professionally, from wellness programs and tuition reimbursement to U Choose — a customizable expense reimbursement program that can be used for more than 200 needs that best suit you and your family, from student loan repayment, to childcare, to pet insurance. Our inclusive culture, active and engaged employee resource groups, and caring leaders value every voice and support you in doing the best work of your career. If you're passionate about our purpose — people — then we can't wait to support whatever gives you purpose. We're united by purpose, inspired by you.

Job Description
We are seeking a seasoned Senior Director of Software Engineering with deep expertise in data platforms to lead and scale our data engineering organization. With deep industry experience, you will bring strategic vision, technical leadership, and operational excellence to drive innovation and deliver robust, scalable, and high-performing data solutions. You will partner closely with cross-functional teams to enable data-driven decision-making across the enterprise.

Key Responsibilities
- Define and execute the engineering strategy for modern, scalable data platforms.
- Lead, mentor, and grow a high-performing engineering organization.
- Partner with product, architecture, and infrastructure teams to deliver resilient data solutions.
- Drive technical excellence through best practices in software development, data modeling, security, and automation.
- Oversee the design, development, and deployment of data pipelines, lakehouses, and real-time analytics platforms.
- Ensure platform reliability, availability, and performance through proactive monitoring and continuous improvement.
- Foster a culture of innovation, ownership, and continuous learning.

Qualifications
- 20+ years of experience in software engineering with a strong focus on data platforms and infrastructure.
- Proven leadership of large-scale, distributed engineering teams.
- Deep understanding of modern data architectures (e.g., data lakes, lakehouses, streaming, warehousing).
- Proficiency in cloud-native data platforms (e.g., AWS, Azure, GCP), big data ecosystems (e.g., Spark, Kafka, Hive), and data orchestration tools.
- Strong software development background with expertise in one or more languages such as Python, Java, or Scala.
- Demonstrated success in driving strategic technical initiatives and cross-functional collaboration.
- Strong communication and stakeholder management skills at the executive level.
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field (Ph.D. a plus).

Where we're going
UKG is on the cusp of something truly special. Worldwide, we already hold the #1 market share position for workforce management and the #2 position for human capital management. Tens of millions of frontline workers start and end their days with our software, with billions of shifts managed annually through UKG solutions today. Yet it's our AI-powered product portfolio, designed to support customers of all sizes, industries, and geographies, that will propel us into an even brighter tomorrow!

UKG is proud to be an equal opportunity employer and is committed to promoting diversity and inclusion in the workplace, including the recruitment process.

Disability Accommodation in the Application and Interview Process
For individuals with disabilities that need additional assistance at any point in the application and interview process, please email UKGCareers@ukg.com

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

We deliver the world's most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role.

Building on our past. Ready for the future
Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we're bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects.

The Role
As an Applications Technical Specialist II with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience.
- Ensure project deployment as per conceptual design documentation and architecture
- Collaborate with various Information Technology and business stakeholder groups to ensure deployed solutions meet all agreed-upon criteria
- Act as the primary resource responsible for developing enhancements and fixes, and for ongoing support of ServiceNow HRSD
- Design, develop, and implement service portal related enhancements/fixes within the ServiceNow HRSD module
- Develop integrations on the ServiceNow platform to various modules (ITSM, HRSD, custom apps, etc.)
- Build and maintain Service Catalogues/Record Producers, inclusive of workflow and Orchestration
- Create and maintain client scripts, business rules, UI Policies, widgets, service portal, jobs, etc. (JavaScript/HTML/CSS)
- Troubleshoot and resolve any potential technical application issues
- Adhere to ServiceNow best practices (code best practices, update sets, table relationships, application customization, etc.)
- Adhere to Worley Change Management principles to ensure the stability of sub-production and production environments
- Be proactive, responsive, and focused on anticipating future requirements and/or issues
- Recover quickly after change, disruptions, or mistakes, and remain productive and focused; be adaptable and apply lessons learned in one situation to another
- Develop clear and concise technical/process documentation
- Create and administer global reports with platform analytics or performance analytics features
- Provide HRSD application training to business teams and help desks (train the trainer)

About You
To be considered for this role it is envisaged you will possess the following attributes:
- Excellent interpersonal and presentation skills
- Fluent in spoken and written English
- 5+ years experience as a ServiceNow Administrator
- 5+ years experience using JavaScript in ServiceNow
- 5+ years experience as an administrator for ServiceNow Service Catalogs and Service Portal
- 5+ years experience using web services in ServiceNow (REST and SOAP)
- 5+ years experience integrating ServiceNow with other platforms via all available options (automated flat file loads and transform maps, web services, connectors, etc.)
- Experience implementing and maintaining SLAs
- Experience using Integration Hub and Service Graph connectors
- Experience acting as an administrator for all ITSM modules
- Experience acting as the primary regression testing resource for a ServiceNow upgrade
- Strong understanding of the Users, Groups, Roles, and Security Groups implementation in ServiceNow and the automated methods used to maintain them
- Sound knowledge of industry standards and methodologies
- Broad understanding of software applications in use at Worley, including but not limited to Peoplelink, Oracle eBusiness Suite, Windows Operating Systems, Citrix, Systems Centre Suite of Products, Active Directory, Azure, Office 365, SharePoint, MS Teams
- Ability to work with globally dispersed virtual teams across a number of disciplines; experience with Finance Service Management, HAM, HRSD, and ITOM applications (Discovery, Event Management, Operational Intelligence, Orchestration, Service Mapping, CMDB) is highly desirable

Personal Qualities/Behaviours:
- Strong work ethic
- Detail oriented and able to solve problems with efficient troubleshooting
- Self-driven and takes responsibility

Moving forward together
We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We're building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there's a path for you here. And there's no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.

Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice here.

Please note: If you are being represented by a recruitment agency you will not be considered; to be considered you will need to apply directly to Worley.

Company: Worley
Primary Location: IND-MM-Mumbai
Other Locations: IND-KR-Bangalore, IND-AP-Hyderabad, IND-MM-Pune, IND-TN-Chennai, IND-MM-Navi Mumbai
Job: Applications
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jul 4, 2025
Unposting Date: Aug 3, 2025
Reporting Manager Title: Senior General Manager
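The role leans heavily on ServiceNow web services (REST and SOAP). As a rough illustration, the sketch below queries ServiceNow's Table API from Python; the instance URL and credentials are placeholders, and real integrations would typically use OAuth rather than basic auth.

```python
# Minimal sketch: query the ServiceNow Table API (placeholder instance/credentials).
# Requires: pip install requests
import requests

INSTANCE = "https://your-instance.service-now.com"  # placeholder
AUTH = ("integration.user", "********")             # placeholder; prefer OAuth in production

resp = requests.get(
    f"{INSTANCE}/api/now/table/sc_req_item",  # catalog request items
    auth=AUTH,
    headers={"Accept": "application/json"},
    params={
        "sysparm_limit": 5,
        "sysparm_query": "active=true^ORDERBYDESCsys_created_on",
    },
    timeout=30,
)
resp.raise_for_status()
for item in resp.json()["result"]:
    print(item["number"], item.get("short_description", ""))
```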

Posted 1 day ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description
Senior Manager, AI Insights Engineering

The Opportunity
Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview
Are you driven to create real-world impact through applied AI, modern analytics, and intelligent automation, and passionate about shaping the future of data and analytics in the AI era? We are looking for an experienced and forward-thinking Senior AI Engineer / Full-Stack Developer in Data & Analytics to join our global team to help with our transformation from BI to AI. This role combines deep expertise in data engineering, BI/visualization, and AI/GenAI with a hands-on builder mindset. You will lead the design and delivery of next-generation analytics solutions, integrating semantic layers, streaming data, embedded insights, and AI agents. The ideal candidate is equally comfortable writing code, shaping architecture, experimenting with GenAI models, and mentoring teams—bridging BI, ML, and engineering in a highly regulated, global pharma environment.
What Will You Do In This Role
- Architect and build data and analytics solutions that align with our transformation from BI to AI, focusing on:
  - Semantic models, governed self-service, and report cataloging
  - Embedded analytics, feedback loops, and personalized insights
  - AI/GenAI orchestration, including RAG, prompt chaining, and Copilot integrations
- Design and deliver data products and microservices combining:
  - BI tools (Power BI, Qlik, ThoughtSpot)
  - Vector databases and LLM-powered APIs
  - Real-time streaming and telemetry (Kafka, Fabric, Snowflake, Databricks)
- Build and deploy AI models for use cases like:
  - Semantic search and insight generation
  - Report summarization and auto-commentary
  - Decision-making agents and anomaly detection
- Collaborate across data engineering, business, product, and compliance teams to align on technical architecture, governance, and platform scalability
- Lead proof-of-concepts, technical pilots, and full-stack AI solutions from ideation to production
- Contribute to open discussions and communities of practice around MLOps, AI tooling, metadata modeling, and observability
- Continuously scan the AI/LLM landscape and identify innovative approaches that bring business value

What Should You Have
- Master's degree in Computer Science, Data Engineering, or a related field
- 10+ years of experience in data/analytics/AI roles, with demonstrated ability to build complex solutions end-to-end
- Deep expertise in Python, SQL, and modern data engineering stacks (e.g., dbt, Snowflake, Databricks, Azure/AWS)
- Proficiency in AI/ML fundamentals: neural networks, vector embeddings, prompt engineering, transformers, evaluation metrics
- Hands-on experience with LLMs and GenAI frameworks (GPT-3.5/4, Llama, Claude, RAG, LangChain, Haystack, etc.)
- Experience building and deploying REST APIs and integrating AI into BI tools or operational systems
- Strong background in BI platforms such as Power BI, Qlik, and ThoughtSpot, with experience in data modeling, DAX/set analysis, and performance tuning
- Excellent communication skills; able to explain technical designs and model decisions to technical and non-technical stakeholders

Nice to have
- Experience with agent orchestration platforms, Copilot development, or autonomous decision agents
- Familiarity with the pharma or life sciences domain and its regulatory requirements (HIPAA, GxP, GDPR)
- Contributions to open-source AI/analytics tools or frameworks
- Experience in multi-tenant SaaS architecture, metadata governance, or telemetry collection for usage-based optimization
- Certifications in cloud (AWS, Azure), data engineering, or GenAI technologies
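The responsibilities above call out RAG and prompt chaining. As a toy, framework-free illustration of the retrieve-then-generate pattern, the Python sketch below ranks an in-memory corpus with bag-of-words cosine similarity (standing in for real vector embeddings) and uses a placeholder generate() where an LLM call would go; all names and documents are hypothetical.

```python
# Toy RAG-style sketch: retrieve context, then hand it to a (placeholder) LLM.
# Bag-of-words similarity stands in for real vector embeddings.
from collections import Counter
import math

DOCS = {
    "doc1": "Power BI semantic models expose governed measures for self-service analytics.",
    "doc2": "Kafka streams telemetry events into Snowflake for near-real-time dashboards.",
}

def vectorize(text: str) -> Counter:
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def retrieve(query: str, k: int = 1) -> list[str]:
    qv = vectorize(query)
    ranked = sorted(DOCS.items(), key=lambda kv: cosine(qv, vectorize(kv[1])), reverse=True)
    return [text for _, text in ranked[:k]]

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., via an internal gateway); assumption only.
    return f"[LLM response grounded in: {prompt[:60]}...]"

context = "\n".join(retrieve("How do semantic models support self-service?"))
print(generate(f"Answer using only this context:\n{context}\nQuestion: ..."))
```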
Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are
We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For
Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us—and start making your impact today.

#HYDIT2025

Search Firm Representatives Please Read Carefully
Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Data Modeling, Design Applications, Release Management, Requirements Management, Solution Architecture, System Designs, Systems Integration
Preferred Skills:
Job Posting End Date: 08/15/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.

Requisition ID: R356862

Posted 1 day ago

Apply

3.0 - 5.0 years

0 Lacs

Kochi, Kerala, India

On-site

Job Title: Software DevOps Engineer (3-5 Years Experience) or Senior Software DevOps Engineer (5-10 Years Experience)

Responsibilities
- Design, implement, and maintain CI/CD pipelines to ensure efficient and reliable software delivery.
- Collaborate with Development, QA, and Operations teams to streamline the deployment and operation of applications.
- Monitor system performance, identify bottlenecks, and troubleshoot issues to ensure high availability and reliability.
- Automate repetitive tasks and processes to improve efficiency and reduce manual intervention.
- Participate in code reviews and contribute to the improvement of best practices and standards.
- Implement and manage infrastructure as code (IaC) using Terraform.
- Document processes, configurations, and procedures for future reference.
- Stay updated with the latest industry trends and technologies to continuously improve DevOps processes.
- Create POCs for the latest tools and technologies.

Requirements
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 1-3 years of experience in a DevOps or related role.
- Proficiency with version control systems (e.g., Git).
- Experience with scripting languages (e.g., Python, Bash).
- Strong understanding of CI/CD concepts and tools (e.g., Azure DevOps, Jenkins, GitLab CI).
- Experience with cloud platforms (e.g., AWS, Azure, GCP).
- Familiarity with containerization technologies (e.g., Docker, Kubernetes).
- Basic understanding of networking and security principles.
- Strong problem-solving skills and attention to detail.
- Excellent communication and teamwork skills.
- Ability to learn and adapt to new technologies and methodologies.
- Ready to work with clients directly.

Mandatory Skills
- Azure Cloud, Azure DevOps, CI/CD pipelines, version control (Git)
- Linux commands, Bash scripting
- Docker, Kubernetes, Helm charts
- Monitoring tools such as Grafana, Prometheus, ELK Stack, Azure Monitor
- Azure, AKS, Azure Storage, Virtual Machines
- Understanding of microservices architecture, orchestration, SQL Server

Optional Skills
- Ansible scripting, Kafka, MongoDB
- Key Vault, Azure CLI
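The mandatory skills include monitoring with Prometheus and Grafana. Below is a minimal sketch of instrumenting a Python service with the official prometheus_client library so Prometheus can scrape it; the metric names and port are arbitrary choices for illustration.

```python
# Minimal sketch: expose custom metrics for Prometheus to scrape.
# Requires: pip install prometheus-client
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("demo_requests_total", "Total requests handled")         # arbitrary name
LATENCY = Histogram("demo_request_latency_seconds", "Request latency (s)")  # arbitrary name


@LATENCY.time()  # records each call's duration into the histogram
def handle_request():
    time.sleep(random.uniform(0.01, 0.1))  # simulate work
    REQUESTS.inc()


if __name__ == "__main__":
    start_http_server(8000)  # metrics served at http://localhost:8000/metrics
    while True:
        handle_request()
```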

Posted 1 day ago

Apply

3.0 years

18 Lacs

Mohali

On-site

Key Responsibilities:
- Design and develop full-stack web applications using the MERN (MongoDB, Express, React, Node.js) stack.
- Build RESTful APIs and integrate front-end and back-end systems.
- Deploy and manage applications using AWS services such as EC2, S3, Lambda, API Gateway, DynamoDB, CloudFront, RDS, etc.
- Implement CI/CD pipelines using AWS CodePipeline, CodeBuild, or other DevOps tools.
- Monitor, optimize, and scale applications for performance and availability.
- Ensure security best practices in both code and AWS infrastructure.
- Write clean, modular, and maintainable code with proper documentation.
- Work closely with product managers, designers, and QA to deliver high-quality products on schedule.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).
- 3+ years of professional experience with MERN stack development.
- Strong knowledge of JavaScript (ES6+), React.js (Hooks, Redux), and Node.js.
- Hands-on experience with MongoDB, including writing complex queries and aggregations.
- Proficiency in deploying and managing applications on AWS.
- Experience with AWS services like EC2, S3, Lambda, API Gateway, RDS, CloudWatch, etc.
- Knowledge of Git, Docker, and CI/CD pipelines.
- Understanding of RESTful API design, microservices architecture, and serverless computing.
- Strong debugging and problem-solving skills.

Preferred Qualifications:
- AWS Certification (e.g., AWS Certified Developer – Associate).
- Experience with Infrastructure as Code (IaC) using Terraform or AWS CloudFormation.
- Experience with GraphQL and WebSockets.
- Familiarity with container orchestration tools like Kubernetes or AWS ECS/EKS.
- Exposure to Agile/Scrum methodologies.

Company overview:
smartData is a leader in the global software business space when it comes to business consulting and technology integrations, making business easier, accessible, secure, and meaningful for its target segment of startups to small & medium enterprises. As your technology partner, we provide both domain and technology consulting; our in-house products and unique productized-service approach help us act as business integrators, saving substantial time to market for our esteemed customers. With 8000+ projects and vast experience of 20+ years, backed by offices in the US, Australia, and India providing next-door assistance and round-the-clock connectivity, we ensure continual business growth for all our customers. Our business consulting and integrator services via software solutions focus on the important industries of healthcare, B2B, B2C, & B2B2C platforms, online delivery services, video platform services, and IT services. Strong expertise in Microsoft, LAMP stack, and MEAN/MERN stack with a mobility-first approach via native (iOS, Android, Tizen) or hybrid (React Native, Flutter, Ionic, Cordova, PhoneGap) mobility stacks, mixed with AI & ML, helps us deliver on the ongoing needs of customers continuously.

Job Type: Full-time
Pay: Up to ₹1,800,000.00 per year
Benefits:
- Health insurance
- Provident Fund
Schedule:
- Day shift
- Monday to Friday
Supplemental Pay:
- Performance bonus
Work Location: In person
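The role combines serverless AWS services such as Lambda, API Gateway, and DynamoDB. As a rough sketch (in Python rather than the role's Node.js, since the service APIs are parallel across SDKs), here is a Lambda handler that writes an item to a hypothetical DynamoDB table sitting behind API Gateway.

```python
# Minimal AWS Lambda handler sketch: API Gateway -> Lambda -> DynamoDB.
# Python is used for illustration; the role's stack would use Node.js + AWS SDK.
# Table name and payload shape are hypothetical.
import json
import uuid

import boto3

TABLE = boto3.resource("dynamodb").Table("orders")  # hypothetical table


def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    item = {
        "id": str(uuid.uuid4()),
        "customer": body.get("customer", "unknown"),
        # Stored as a string to sidestep boto3's float-vs-Decimal rules.
        "amount": str(body.get("amount", 0)),
    }
    TABLE.put_item(Item=item)
    return {
        "statusCode": 201,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"id": item["id"]}),
    }
```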

Posted 1 day ago

Apply

6.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Why This Role Matters:
Data is the foundation of our business, and your work will ensure that we continue to deliver high-quality competitive intelligence at scale. Web platforms are constantly evolving, deploying sophisticated anti-bot measures—your job is to stay ahead of them. If you thrive on solving complex technical challenges and enjoy working with real-world data at an immense scale, this role is for you.

We seek a Software Development Engineer with expertise in cloud infrastructure, big data, and web crawling technologies. This role bridges site reliability engineering with scalable data extraction solutions, ensuring our infrastructure remains robust and capable of handling high-volume data collection. You will design resilient systems, optimize automation pipelines, and tackle challenges posed by advanced bot-detection mechanisms.

Key Responsibilities:
- Architect, deploy, and manage scalable cloud environments (AWS/GCP/DO) to support distributed data processing solutions that handle terabyte-scale datasets and billions of records efficiently
- Automate infrastructure provisioning, monitoring, and disaster recovery using tools like Terraform, Kubernetes, and Prometheus
- Optimize CI/CD pipelines to ensure seamless deployment of web scraping workflows and infrastructure updates
- Develop and maintain stealthy web scrapers using Puppeteer, Playwright, and headless Chromium browsers
- Reverse-engineer bot-detection mechanisms (e.g., TLS fingerprinting, CAPTCHA solving) and implement evasion strategies
- Monitor system health, troubleshoot bottlenecks, and ensure 99.99% uptime for data collection and processing pipelines
- Implement security best practices for cloud infrastructure, including intrusion detection, data encryption, and compliance audits
- Partner with data collection, ML, and SaaS teams to align infrastructure scalability with evolving data needs
- Research emerging technologies to stay ahead of anti-bot trends, including technologies like Kasada, PerimeterX, Akamai, Cloudflare, and more

Required Skills:
- 4-6 years of experience in site reliability engineering and cloud infrastructure management
- Proficiency in Python and JavaScript for scripting and automation
- Hands-on experience with Puppeteer/Playwright, headless browsers, and anti-bot evasion techniques
- Knowledge of networking protocols, TLS fingerprinting, and CAPTCHA-solving frameworks
- Experience with monitoring and observability tools such as Grafana, Prometheus, and Elasticsearch, and familiarity with monitoring and optimizing resource utilization in distributed systems
- Experience with data lake architectures and optimizing storage using formats such as Parquet, Avro, or ORC
- Strong proficiency in cloud platforms (AWS, GCP, or Azure) and containerization/orchestration (Docker, Kubernetes)
- Deep understanding of infrastructure-as-code tools (Terraform, Ansible)
- Deep experience in designing resilient data systems with a focus on fault tolerance, data replication, and disaster recovery strategies in distributed environments
- Experience implementing observability frameworks, distributed tracing, and real-time monitoring tools
- Excellent problem-solving abilities, with a collaborative mindset and strong communication skills
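Given the focus on headless-browser scraping with Playwright, here is a minimal Python sketch using Playwright's sync API; the target URL and CSS selector are placeholders, and production crawlers would layer on proxy rotation, fingerprint management, and politeness controls.

```python
# Minimal Playwright scraping sketch (placeholder URL/selector).
# Requires: pip install playwright && playwright install chromium
from playwright.sync_api import sync_playwright

with sync_playwright() as p:
    browser = p.chromium.launch(headless=True)
    page = browser.new_page(user_agent="Mozilla/5.0 (compatible; demo-crawler)")
    # Wait for network to go idle so client-rendered content is present.
    page.goto("https://example.com/products", wait_until="networkidle")  # placeholder
    titles = page.locator("h2.product-title").all_text_contents()        # placeholder selector
    for title in titles:
        print(title)
    browser.close()
```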

Posted 1 day ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We deliver the world’s most complex projects. Work as part of a collaborative and inclusive team. Enjoy a varied & challenging role. Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role As a Applications Technical Specialist II with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience etc…. Ensure project deployment as per conceptual design documentation and architecture Collaborate with various Information Technology and business stakeholder groups to ensure deployed solutions meet all agreed upon criteria Primary resource responsible for development of enhancements & fixes, and ongoing support of ServiceNow HRSD Design, develop and implement service portal related enhancements/fixes with the ServiceNow HRSD module Develop integrations on ServiceNow platform to various modules ITSM, HRSD, Custom Apps, Etc., Build and maintain Service Catalogues/ Record Producers inclusive of workflow and Orchestration Create and maintain client scripts, business rules, UI Policies, widgets, service portal, jobs, etc. (JavaScript/HTML/CSS) Troubleshoot and resolve any potential technical application issues. Adhere to ServiceNow best practices (code best practices, update sets, table relationships, application customization, etc.) Adhere to Worley Change Management principles to ensure the stability of sub-production and production environments Proactive, responsive and focused on anticipating future requirements and/or issues Recover quickly after change, disruptions, or mistakes and can remain productive and focused. Is adaptable and can apply lessons learned in one situation to another situation. Develop clear and concise technical/process documentation Global Reports creation and administration with platform analytics or performance analytics features Provide HRSD application training to business teams and help desks (train the trainer) About You To be considered for this role it is envisaged you will possess the following attributes: Excellent interpersonal and presentation skills Fluent in spoken and written English 5+ years experience as a ServiceNow Administrator 5+ years experience using JavaScript in ServiceNow 5+ years experience as an administrator for ServiceNow Service Catalogs and Service Portal 5+ years experience using web services in ServiceNow (REST and SOAP) 5+ years experience integrating ServiceNow with other platforms via all available options (automated flat file loads and transform maps, web services, connectors, etc) Experience implementing and maintaining SLAs Experience using Integration Hub and Service Graph connectors Experience acting as an administrator for all ITSM modules Experience acting as the primary regression testing resource for a ServiceNow upgrade. 
Strong understanding of the Users, Groups, Roles, and Security Groups implementation in ServiceNow and the automated methods used to maintain them
Sound knowledge of industry standards and methodologies
Broad understanding of software applications in use at Worley, including but not limited to Peoplelink, Oracle eBusiness Suite, Windows operating systems, Citrix, the System Center suite of products, Active Directory, Azure, Office 365, SharePoint and MS Teams
Ability to work with globally dispersed virtual teams across a number of disciplines; experience with Finance Service Management, HAM, HRSD and ITOM applications (Discovery, Event Management, Operational Intelligence, Orchestration, Service Mapping, CMDB) is highly desirable

Personal Qualities/Behaviours:
Strong work ethic
Detail oriented and able to solve problems with efficient troubleshooting
Self-driven and takes responsibility

Moving forward together
We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we’re not just talking about it; we’re doing it. We’re reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today’s low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice here. Please note: if you are being represented by a recruitment agency you will not be considered; to be considered, you will need to apply directly to Worley.

Company: Worley
Primary Location: IND-MM-Mumbai
Other Locations: IND-KR-Bangalore, IND-AP-Hyderabad, IND-MM-Pune, IND-TN-Chennai, IND-MM-Navi Mumbai
Job: Applications
Schedule: Full-time
Employment Type: Employee
Job Level: Experienced
Job Posting: Jul 4, 2025
Unposting Date: Aug 3, 2025
Reporting Manager Title: Senior General Manager
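For illustration (not part of Worley’s posting): a minimal Python sketch of the kind of REST integration this role involves, reading open HRSD cases via the ServiceNow Table API. The instance name, credentials, and query are placeholders; verify table names against your own instance.

import requests

INSTANCE = "example"  # placeholder: your-instance.service-now.com subdomain
BASE_URL = f"https://{INSTANCE}.service-now.com/api/now/table"

def fetch_open_hr_cases(limit: int = 10) -> list[dict]:
    """Query the ServiceNow Table API for active HRSD cases."""
    response = requests.get(
        f"{BASE_URL}/sn_hr_core_case",  # standard HRSD case table
        params={"sysparm_query": "active=true", "sysparm_limit": limit},
        auth=("api_user", "api_password"),  # placeholders; use OAuth in production
        headers={"Accept": "application/json"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["result"]

if __name__ == "__main__":
    for case in fetch_open_hr_cases(5):
        print(case.get("number"), case.get("short_description"))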

Posted 1 day ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Reference # 325279BR
Job Type: Full Time

Your role
We are seeking a skilled and motivated Azure DevOps Engineer with intermediate experience to join our dynamic team. The ideal candidate will have a strong background in Azure cloud services, CI/CD pipeline management, automation, and Java development. As an Azure DevOps Engineer, you will play a crucial role in ensuring seamless deployment and operation of our cloud-based applications and services.

Your team
We are responsible for WMA (Wealth Management Americas) client-facing technology applications. You’ll be working in the Manage my Relationships stream, focusing on projects which are used by financial advisors. Your role will be accountable for design/implementation of technical solutions within WMA and timely delivery of projects following an agile/scrum SDLC. Our team is dedicated to creating innovative solutions that drive our organization's success. We foster a collaborative and supportive environment, where you can grow and excel in your role.

Your expertise
Experience with Dynamics 365: hands-on knowledge building complex customizations and integrations
3+ years of experience in Azure DevOps, cloud infrastructure, and automation, preferably in the financial services industry
Proficiency in Azure services such as Azure App Services, Azure Functions, Azure Kubernetes Service (AKS), and Azure SQL Database
Strong knowledge of CI/CD concepts and tools, including Azure Pipelines, Jenkins, or GitHub Actions
Experience with infrastructure-as-code (IaC) tools like ARM templates, Terraform, or Ansible
Solid understanding of source control systems, particularly Git
Familiarity with containerization and orchestration tools such as Docker and Kubernetes
Proficiency in Java development and understanding of Java-based applications

About Us
UBS is the world’s largest and the only truly global wealth manager. We operate through four business divisions: Global Wealth Management, Personal & Corporate Banking, Asset Management and the Investment Bank. Our global reach and the breadth of our expertise set us apart from our competitors. We have a presence in all major financial centers in more than 50 countries.

How We Hire
We may request you to complete one or more assessments during the application process. Learn more.

Join us
At UBS, we embrace flexible ways of working when the role permits. We offer different working arrangements like part-time, job-sharing and hybrid (office and home) working. Our purpose-led culture and global infrastructure help us connect, collaborate, and work together in agile ways to meet all our business needs. From gaining new experiences in different roles to acquiring fresh knowledge and skills, we know that great work is never done alone. We know that it's our people, with their unique backgrounds, skills, experience levels and interests, who drive our ongoing success. Together we’re more than ourselves. Ready to be part of #teamUBS and make an impact?

Disclaimer / Policy Statements
UBS is an Equal Opportunity Employer. We respect and seek to empower each individual and support the diverse cultures, perspectives, skills and experiences within our workforce.

Your Career Comeback
We are open to applications from career returners. Find out more about our program on ubs.com/careercomeback.
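For illustration (not part of UBS’s posting): a minimal HTTP-triggered Azure Function in Python, the kind of cloud service this role would deploy and automate through CI/CD. All names and the backing data are placeholders.

import json
import logging

import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    """Minimal HTTP-triggered Azure Function (v1 programming model)."""
    logging.info("Processing relationship lookup request.")
    advisor_id = req.params.get("advisor_id")
    if not advisor_id:
        return func.HttpResponse("advisor_id is required", status_code=400)
    # In a real service this would query a backing store such as Azure SQL.
    payload = {"advisor_id": advisor_id, "status": "ok"}
    return func.HttpResponse(json.dumps(payload), mimetype="application/json")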

Posted 1 day ago

Apply

2.0 years

0 - 1 Lacs

India

Remote

About The Role
Masai, in academic collaboration with a premier institute, is seeking a Teaching Assistant (TA) for its New Age Software Engineering program. This advanced 90-hour course equips learners with Generative AI foundations, production-grade AI engineering, serverless deployments, agentic workflows, and vision-enabled AI applications. The TA will play a key role in mentoring learners, resolving queries, sharing real-world practices, and guiding hands-on AI engineering projects. This role is perfect for professionals who want to contribute to next-generation AI-driven software engineering education while keeping their technical skills sharp.

Key Responsibilities (KRAs)
Doubt-Solving Sessions: Conduct or moderate weekly sessions to clarify concepts across Generative AI & prompt engineering; AI lifecycle management & observability; serverless & edge AI deployments; agentic workflows and vision-language models (VLMs). Share industry insights and practical examples to reinforce learning.
Q&A and Discussion Forum Support: Respond to student questions through forums, chat, or email with detailed explanations and actionable solutions. Facilitate peer-to-peer discussions on emerging tools, frameworks, and best practices in AI engineering.
Research & Project Support: Assist learners in capstone project design and integration, including vector databases, agent orchestration, and performance tuning. Collaborate with the academic team to research emerging AI frameworks like LangGraph, CrewAI, Hugging Face models, and WebGPU deployments.
Learner Engagement: Drive engagement via assignment feedback, interactive problem-solving, and personalized nudges to keep learners motivated. Encourage learners to adopt best practices for responsible and scalable AI engineering.
Content Feedback Loop: Collect learner feedback and recommend updates to curriculum modules for continuous course improvement.

Candidate Requirements
2+ years of experience in Software Engineering, AI Engineering, or Full-Stack Development.
Strong knowledge of Python/Node.js, cloud platforms (AWS Lambda, Vercel, Cloudflare Workers), and modern AI tools.
Hands-on experience with LLMs, vector databases (Pinecone, Weaviate), agentic frameworks (LangGraph, ReAct), and AI observability tools.
Understanding of AI deployment, prompt engineering, model fine-tuning, and RAG pipelines.
Excellent communication and problem-solving skills; mentoring experience is a plus.
Familiarity with online learning platforms or LMS tools is advantageous.

Engagement Details
Time Commitment: 6 to 8 hours per week
Location: Remote (online)
Compensation: ₹8,000 to ₹10,000 per month

Why Join Us? Benefits and Perks
Contribute to a cutting-edge AI & software engineering program with a leading ed-tech platform.
Mentor learners on next-generation AI applications and engineering best practices.
Engage in flexible remote working while influencing future technological innovations.
Access to continuous professional development and faculty enrichment programs.
Network with industry experts and professionals in the AI and software engineering domain.

Skills: LLMs, RAG pipelines, prompt engineering, model fine-tuning, vector databases, agentic frameworks, AI observability tools, AWS Lambda, Vercel, Cloudflare Workers, Python, Node.js, mentoring, communication, problem-solving
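For illustration (not part of Masai’s posting): a toy retrieval step of the kind that underpins RAG pipelines, using TF-IDF vectors in place of learned embeddings and an in-memory list in place of a vector database such as Pinecone.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# A tiny stand-in corpus; a real pipeline would chunk and embed documents.
docs = [
    "Serverless deployments run inference behind AWS Lambda or Cloudflare Workers.",
    "Agentic workflows chain LLM calls with tools and memory.",
    "Vector databases such as Pinecone store embeddings for retrieval.",
]

vectorizer = TfidfVectorizer()
doc_matrix = vectorizer.fit_transform(docs)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Return the k most similar documents to the query."""
    scores = cosine_similarity(vectorizer.transform([query]), doc_matrix)[0]
    return [docs[i] for i in scores.argsort()[::-1][:k]]

print(retrieve("How do I store embeddings for retrieval?"))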

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderābād

Remote

Our fast-paced and collaborative environment inspires us to create, think, and challenge each other in ways that make our solutions and our teams better. Whether you’re interested in engineering or development, marketing or sales, or something else – if this sounds like you, then we’d love to hear from you! Vertafore is a leading technology company whose innovative software solutions are advancing the insurance industry. Our suite of products provides solutions to our customers that help them better manage their business, boost their productivity and efficiencies, and lower costs while strengthening relationships. Our mission is to move InsurTech forward by putting people at the heart of the industry. We are leading the way with product innovation, technology partnerships, and a focus on customer success. We are headquartered in Denver, Colorado, with offices across the U.S., Canada, and India.

JOB DESCRIPTION
We are seeking a highly motivated ServiceNow expert to be part of our IT Service Management team. The ideal candidate should possess relevant experience and be ready to hit the ground running on daily administration, support, and automation of workflows utilizing the platform and its integration functionalities. Besides being the ServiceNow Subject Matter Expert, the Technical Lead ServiceNow Administrator will own and see through resolutions for requests and issues related to the platform.

Core Requirements and Responsibilities
Essential job functions include but are not limited to the following:
Develop workflows, business rules, UI rules, form updates, and other platform features in a proficient manner to tailor ServiceNow to the needs of the organization
Continuously improve workflow orchestrations with ServiceNow, based on ITIL, to support efficiency of incident, problem, change, task, project, and resource management
A minimum of 3 years’ experience programming/scripting to integrate ServiceNow with different systems and perform routine automation
Collaborate with ServiceNow contacts on a regular basis, staying up to date on platform updates, upcoming new features, and pertinent security issues
Create, maintain, and monitor the health of integrations with other systems, including but not limited to Salesforce and Rally
Keep the Service Catalog current and easily accessible
Build and maintain an up-to-date configuration management database (CMDB) using asset management in ServiceNow
Monitor platform performance daily, assist end users, fix problems, and provide training when needed
Ensure security and compliance requirements are met for user roles, permissions, and data protection in ServiceNow; adopt security best practices when designing workflow orchestration and relevant automations
Drive continuous platform improvements, including finding and fixing configuration gaps, data inconsistencies, and unused features, and follow through on all improvement-related action items
Experience with or understanding of ServiceNow Portfolio Management; implement, configure and support Strategic Portfolio Management in ServiceNow
Plan and carry out platform upgrades
This includes preparing for end-user-impacting platform modifications or improvements
Create and keep up-to-date, thorough documentation for runbooks, processes, and configurations
Conduct testing of all ServiceNow modifications in the lower environment prior to rollout in production to enforce low-risk platform changes
Periodically audit user licenses to ensure usage is under control
Partner with other teams to take advantage of any ServiceNow automation opportunity
Adhere to Vertafore Change Management policies for code deployments

Why Vertafore is the place for you: *Canada Only
The opportunity to work in a space where modern technology meets a stable and vital industry
Medical, vision & dental plans
Life, AD&D
Short Term and Long Term Disability
Pension Plan & Employer Match
Maternity, Paternity and Parental Leave
Employee and Family Assistance Program (EFAP)
Education Assistance
Additional programs - Employee Referral and Internal Recognition

Why Vertafore is the place for you: *US Only
The opportunity to work in a space where modern technology meets a stable and vital industry
We have a Flexible First work environment! Our North America team members use our offices for collaboration, community and team-building, with members asked to sometimes come into an office and/or travel depending on job responsibilities. Other times, our teams work from home or a similar environment.
Medical, vision & dental plans
PPO & high-deductible options
Health Savings Account & Flexible Spending Account options: Health Care FSA, Dental & Vision FSA, Dependent Care FSA, Commuter FSA
Life, AD&D (Basic & Supplemental), and Disability
401(k) Retirement Savings Plan & Employer Match
Supplemental Plans - Pet insurance, Hospital Indemnity, and Accident Insurance
Parental Leave & Adoption Assistance
Employee Assistance Program (EAP)
Education & Legal Assistance
Additional programs - Tuition Reimbursement, Employee Referral, Internal Recognition, and Wellness
Commuter Benefits (Denver)

The selected candidate must be legally authorized to work in the United States. The above statements are intended to describe the general nature and level of work being performed by people assigned to this job. They are not intended to be an exhaustive list of all the job responsibilities, duties, skills, or working conditions. In addition, this document does not create an employment contract, implied or otherwise, other than an "at will" relationship. Vertafore strongly supports equal employment opportunity for all applicants regardless of race, color, religion, sex, gender identity, pregnancy, national origin, ancestry, citizenship, age, marital status, physical disability, mental disability, medical condition, sexual orientation, genetic information, or any other characteristic protected by state or federal law. The Professional Services (PS) and Customer Success (CX) bonus plans are quarterly monetary bonus plans based upon individual and practice performance against specific business metrics. Eligibility is determined by several factors including start date, good standing in the company, and active status at time of payout. The Vertafore Incentive Plan (VIP) is an annual monetary bonus for eligible employees based on both individual and company performance.
Eligibility is determined by several factors including start date, good standing in the company, and active status at time of payout. Commission plans are tailored to each sales role, but common components include quota, MBOs and ABPMs. Salespeople receive their formal compensation plan within 30 days of hire. Vertafore is a drug-free workplace and conducts pre-employment drug and background screenings. We do not accept resumes from agencies, headhunters or other suppliers who have not signed a formal agreement with us. We want to make sure our recruiting process is accessible for everyone. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact recruiting@vertafore.com. Just a note: this contact information is for accommodation requests only.

Knowledge, Skills and Abilities
Around 6 years of outstanding practical experience orchestrating workflows using ServiceNow App Engine, and establishing and maintaining Integration Hub connections with other systems
Advanced knowledge of forms and features for Service Catalog, Incident, Problem, Change, and Projects in ServiceNow
Strong knowledge of ServiceNow Asset Management to manage the CMDB
Great grasp of the ITIL framework and best practices
Exceptional problem solver, solution-focused when handling simple to complex issues
Strong understanding of, and exposure to, enforcing the development lifecycle when working on ServiceNow enhancements
Expert knowledge of the latest ServiceNow features
Other scripting experience such as JavaScript is a plus
Excellent communication and interpersonal skills, with the ability to work with others from diverse backgrounds
Established time management skills and the ability to juggle multiple tasks with an enthusiastic sense of urgency and the capability to meet deadlines
Able to maintain professional composure in any situation
Strong organizational and planning skills; ability to work independently to deliver consistent results

Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or an equivalent combination of education and working ServiceNow Administrator experience required
ServiceNow Certified System Administrator or higher certification
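For illustration (not part of Vertafore’s posting): a minimal Python sketch of creating an incident through the ServiceNow Table API, the building block behind integrations with systems such as Salesforce and Rally. The instance, credentials, and field values are placeholders.

import requests

INSTANCE = "example"  # placeholder subdomain
URL = f"https://{INSTANCE}.service-now.com/api/now/table/incident"

payload = {
    "short_description": "Salesforce integration job failed",
    "urgency": "2",
    "assignment_group": "ITSM Integrations",  # placeholder group name
}

resp = requests.post(
    URL,
    json=payload,
    auth=("api_user", "api_password"),  # placeholders; use OAuth in production
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
# The Table API wraps the created record in a "result" object.
print("Created", resp.json()["result"]["number"])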

Posted 1 day ago

Apply

5.0 years

1 - 3 Lacs

Hyderābād

On-site

Job Description

Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency.
Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
Support the development and automation of operational policies and procedures, improving efficiency and resilience.
Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
5+ years of technology work experience in a large-scale global organization, with CPG industry experience preferred.
5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
Understanding of operational excellence in complex, high-availability data environments.
Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
Basic understanding of data management concepts, including master data management, data governance, and analytics.
Knowledge of data acquisition, data catalogs, data standards, and data management tools.
Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
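For illustration (not part of PepsiCo’s posting): a minimal PySpark data-quality gate of the kind a DataOps pipeline might run inside an Azure Databricks job before promoting data. The storage path and key column are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Placeholder ADLS Gen2 path to a curated dataset.
df = spark.read.parquet("abfss://curated@account.dfs.core.windows.net/orders/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
dupes = total - df.dropDuplicates(["order_id"]).count()

# Fail the job (and the pipeline run) if the dataset breaks basic invariants.
if null_keys or dupes:
    raise ValueError(
        f"DQ failure: {null_keys} null keys, {dupes} duplicates out of {total} rows"
    )
print(f"DQ passed: {total} rows clean")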

Posted 1 day ago

Apply

12.0 - 16.0 years

2 - 9 Lacs

Hyderābād

On-site

Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 12-16 years of experience and the following requirements and skills:
Advanced SQL Development: Write complex SQL queries for data extraction, transformation, and analysis. Optimize SQL queries for performance and scalability.
SQL Tuning and Joins: Analyze and improve query performance. Deep understanding of joins, indexing, and query execution plans.
GCP BigQuery and GCS: Work with Google BigQuery for data warehousing and analytics. Manage and integrate data using Google Cloud Storage (GCS).
Airflow DAG Development: Design, develop, and maintain workflows using Apache Airflow. Write custom DAGs to automate data pipelines and processes.
Python Programming: Develop and maintain Python scripts for data processing and automation. Debug and optimize Python code for performance and reliability.
Shell Scripting: Write and debug basic shell scripts for automation and system tasks.
Continuous Learning: Stay updated with the latest tools and technologies in data engineering. Demonstrate a strong ability and attitude to learn and adapt quickly.
Communication: Collaborate effectively with cross-functional teams. Clearly communicate technical concepts to both technical and non-technical stakeholders.

Requirements
To be successful in this role, you should meet the following requirements:
Advanced SQL writing and query optimization.
Strong understanding of SQL tuning, joins, and indexing.
Hands-on experience with GCP services, especially BigQuery and GCS.
Proficiency in Python programming and debugging.
Experience with Apache Airflow and DAG development.
Basic knowledge of shell scripting.
Excellent problem-solving skills and a growth mindset.
Strong verbal and written communication skills.
Experience with data pipeline orchestration and ETL processes.
Familiarity with other GCP services like Dataflow or Pub/Sub.
Knowledge of CI/CD pipelines and version control (e.g., Git).

You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by HSBC Software Development India.
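For illustration (not part of HSBC’s posting): a minimal example of querying BigQuery from Python with the official google-cloud-bigquery client. Project, dataset, and table names are placeholders.

from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Illustrative query; `my_project.sales.orders` is a placeholder table.
sql = """
    SELECT status, COUNT(*) AS orders
    FROM `my_project.sales.orders`
    WHERE order_date >= DATE_SUB(CURRENT_DATE(), INTERVAL 7 DAY)
    GROUP BY status
    ORDER BY orders DESC
"""

for row in client.query(sql).result():
    print(f"{row.status}: {row.orders}")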

Posted 1 day ago

Apply

9.0 years

3 - 8 Lacs

Hyderābād

On-site

Job Description

Overview
We are looking for a self-driven SRE support engineer with a software engineering mindset, enabling SRE-driven orchestration of all components of the end-to-end ecosystem and preemptively diagnosing anomalies and remediating them through automation. The SRE support engineer is an integral part of the global team; its main purpose is to provide a delightful customer experience for users of the global consumer, commercial, supply chain and enablement functions in the PepsiCo digital products application portfolio of 260+ applications, enabling a full SRE-practice incident prevention / proactive resolution model. The scope of this role is focused on the modern-architected application portfolio: B2B pepsiconnect, Direct to Customer, and other S&T roadmap applications. The role ensures that PepsiCo DPA applications deliver the service performance, reliability and availability expected by our customers and internal groups. It requires a blend of technical expertise on SRE tools, modern application architecture, IT operations experience, and analytics and influence skills.

Responsibilities
Reporting directly to the SRE & Modern Operations Associate Director, you will be responsible for enabling and executing the pre-emptive diagnosis of PepsiCo applications towards the service performance, reliability and availability expected by our customers and internal groups.
Act as a proactive support engineer, diagnosing any anomalies before users do and driving the necessary remediations across the teams involved.
Develop and leverage aggregation/correlation solutions that integrate events across all ecosystem components of the modern architecture solution, surfacing insights to continuously improve the user journey and order flow experience in collaboration with software engineering teams.
Drive incident response, root cause analysis (RCA), and post-mortem processes to ensure continuous improvement.
Develop and maintain robust monitoring, alerting, and observability frameworks using tools like Grafana, ELK, etc.
Collaborate with product and engineering teams during the design and development phases to embed reliability and operability into new services.
Participate in architecture reviews and provide SRE input on scalability, fault tolerance, and deployment strategies.
Define and implement SLOs/SLIs for new services before they go live, ensuring alignment with business objectives.
Work closely with customer-facing support teams to evolve and empower them with SRE insights.
Participate in on-call support, orchestrate blameless post-mortems, and encourage the practice within the organization.
Provide input to the definition, collection and analysis of data on relevant products, systems and their interactions, in support of business process resiliency, especially where customer satisfaction is impacted.
Actively engage in and drive AIOps adoption across teams.

Qualifications
9-11 years of work experience evolving into an SRE engineer role, with 3-5 years of experience in continuously improving and transforming IT operations ways of working.
Bachelor’s degree in Computer Science, Information Technology or a related field.
The ideal engineer will be highly quantitative, have great judgment, be able to connect dots across ecosystems, and work efficiently and cross-functionally across teams to ensure SRE orchestration solutions meet customer/end-user expectations.
The candidate will take a pragmatic approach to resolving incidents, including the ability to systematically triangulate root causes and work effectively with external and internal teams to meet objectives.
A firm understanding of SRE (Site Reliability Engineering) and IT Service Management (ITSM) processes, with a track record of improving service offerings: proactively resolving incidents, providing a seamless customer/end-user experience, and proactively identifying and mitigating areas of risk.
Proven experience as an SRE in designing event diagnostics, performance measures and alert solutions to meet SLAs/SLOs/SLIs.
Hands-on experience in Python, SQL, relational or non-relational DBs, AppDynamics, Grafana, Splunk, Dynatrace, or other SRE Ops toolsets.
Deep hands-on technical expertise, excellent verbal and written communication skills.

Differentiating Competencies
Driving for Results: Demonstrates perseverance and resilience in the pursuit of goals. Confronts and works to resolve tough issues. Exhibits a “can-do” attitude and a willingness to take on significant challenges.
Decision Making: Quickly analyses complex problems to find actionable, pragmatic solutions. Sees connections in data, events, trends, etc. Consistently works against the right priorities.
Collaborating: Collaborates well with others to deliver results. Keeps others informed so there are no unnecessary surprises. Effectively listens to and understands what other people are saying.
Communicating and Influencing: Ability to build convincing, persuasive, and logical storyboards. Strong executive presence. Able to communicate effectively and succinctly, both verbally and on paper.
Motivating and Inspiring Others: Demonstrates a sense of passion, enjoyment, and pride about their work. Demonstrates a positive attitude in the workplace. Embraces and adapts well to change. Creates a work environment that makes work rewarding and enjoyable.
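For illustration (not part of PepsiCo’s posting): a small, self-contained sketch of the SLO arithmetic behind error budgets, which SRE teams use to decide when reliability work outranks feature work.

def error_budget_remaining(slo: float, total: int, failed: int) -> float:
    """Fraction of the error budget still unspent for a given SLO.

    slo: availability target as a fraction, e.g. 0.999 for 99.9%.
    total: total requests observed in the window.
    failed: requests that violated the SLI in the same window.
    """
    allowed_failures = total * (1.0 - slo)
    if allowed_failures == 0:
        return 0.0
    return max(0.0, 1.0 - failed / allowed_failures)

# Example: a 99.9% SLO over 1,000,000 requests allows 1,000 failures;
# 400 observed failures leave 60% of the budget unspent.
print(error_budget_remaining(0.999, 1_000_000, 400))  # -> 0.6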

Posted 1 day ago

Apply

5.0 - 10.0 years

0 Lacs

Hyderābād

On-site

Job Description

Overview
DataOps L3: The role will leverage and enhance existing technologies in the area of data and analytics solutions, such as Power BI, Azure data engineering technologies, ADLS, ADB, Synapse, and other Azure services. The role will be responsible for developing and supporting IT products and solutions using these technologies and deploying them for business users.

Responsibilities
5 to 10 years of IT and Azure data engineering technologies experience.
Prior experience in ETL, data pipelines, and data flow techniques using Azure Data Services.
Working experience in Python, PySpark, Azure Data Factory, Azure Data Lake Gen2, Databricks, Azure Synapse, and file formats like JSON and Parquet.
Experience in creating ADF pipelines to source and process data sets.
Experience in creating Databricks notebooks to cleanse, transform and enrich data sets.
Development experience in orchestration of pipelines.
Good understanding of SQL, databases, and data warehouse systems, preferably Teradata.
Experience in deployment and monitoring techniques.
Working experience with Azure DevOps CI/CD pipelines to deploy Azure resources.
Experience in handling operations/integration with the source repository.
Must have good knowledge of data warehouse concepts and data warehouse modelling.
Working knowledge of SNOW, including resolving incidents, handling change requests/service requests, and reporting on metrics to provide insights.
Collaborate with the project team to understand tasks, model tables using data warehouse best practices, and develop data pipelines to ensure the efficient delivery of data.
Strong expertise in performance tuning and optimization of data processing systems.
Proficient in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data services.
Develop and enforce best practices for data management, including data governance and security.
Work closely with cross-functional teams to understand data requirements and deliver solutions that meet business needs.
Proficient in implementing a DataOps framework.

Qualifications
Azure Data Factory, Azure Databricks, Azure Synapse, PySpark/SQL, ADLS, and Azure DevOps with CI/CD implementation.
Nice-to-have skill sets: Business Intelligence tools (Power BI preferred); DP-203 certified.
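For illustration (not part of PepsiCo’s posting): a minimal PySpark cleanse-and-enrich step of the kind a Databricks notebook inside an ADF pipeline might run, landing raw JSON as partitioned Parquet. Paths and column names are placeholders.

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("json_to_parquet").getOrCreate()

# Placeholder landing-zone path on ADLS Gen2.
raw = spark.read.json("abfss://landing@account.dfs.core.windows.net/events/")

cleansed = (
    raw.dropDuplicates(["event_id"])                    # de-duplicate on the key
       .filter(F.col("event_id").isNotNull())           # drop rows missing the key
       .withColumn("event_date", F.to_date("event_ts")) # derive a partition column
)

(cleansed.write
         .mode("overwrite")
         .partitionBy("event_date")
         .parquet("abfss://curated@account.dfs.core.windows.net/events/"))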

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

On-site

About Us:
MatchMove is a leading embedded finance platform that empowers businesses to embed financial services into their applications. We provide innovative solutions across payments, banking-as-a-service, and spend/send management, enabling our clients to drive growth and enhance customer experiences.

Are You The One?
As a Technical Lead Engineer - Data, you will architect, implement, and scale our end-to-end data platform built on AWS S3, Glue, Lake Formation, and DMS. You will lead a small team of engineers while working cross-functionally with stakeholders from fraud, finance, product, and engineering to enable reliable, timely, and secure data access across the business. You will champion best practices in data design, governance, and observability, while leveraging GenAI tools to improve engineering productivity and accelerate time to insight.

You will contribute to:
Owning the design and scalability of the data lake architecture for both streaming and batch workloads, leveraging AWS-native services.
Leading the development of ingestion, transformation, and storage pipelines using AWS Glue, DMS, Kinesis/Kafka, and PySpark.
Structuring and evolving data into OTF formats (Apache Iceberg, Delta Lake) to support real-time and time-travel queries for downstream services.
Driving data productization, enabling API-first and self-service access to curated datasets for fraud detection, reconciliation, and reporting use cases.
Defining and tracking SLAs and SLOs for critical data pipelines, ensuring high availability and data accuracy in a regulated fintech environment.
Collaborating with InfoSec, SRE, and Data Governance teams to enforce data security, lineage tracking, access control, and compliance (GDPR, MAS TRM).
Using Generative AI tools to enhance developer productivity, including auto-generating test harnesses, schema documentation, transformation scaffolds, and performance insights.
Mentoring data engineers, setting technical direction, and ensuring delivery of high-quality, observable data pipelines.

Responsibilities:
Architect scalable, cost-optimized pipelines across real-time and batch paradigms, using tools such as AWS Glue, Step Functions, Airflow, or EMR.
Manage ingestion from transactional sources using AWS DMS, with a focus on schema drift handling and low-latency replication.
Design efficient partitioning, compression, and metadata strategies for Iceberg or Hudi tables stored in S3 and cataloged with Glue and Lake Formation.
Build data marts, audit views, and analytics layers that support both machine-driven processes (e.g., fraud engines) and human-readable interfaces (e.g., dashboards).
Ensure robust data observability with metrics, alerting, and lineage tracking via OpenLineage or Great Expectations.
Lead quarterly reviews of data cost, performance, schema evolution, and architecture design with stakeholders and senior leadership.
Enforce version control, CI/CD, and infrastructure-as-code practices using GitOps and tools like Terraform.

Requirements:
At least 7 years of experience in data engineering.
Deep hands-on experience with the AWS data stack: Glue (Jobs & Crawlers), S3, Athena, Lake Formation, DMS, and Redshift Spectrum.
Expertise in designing data pipelines for real-time, streaming, and batch systems, including schema design, format optimization, and SLAs.
Strong programming skills in Python (PySpark) and advanced SQL for analytical processing and transformation.
Proven experience managing data architectures using open table formats (Iceberg, Delta Lake, Hudi) at scale.
Understanding of stream processing with Kinesis/Kafka and orchestration via Airflow or Step Functions.
Experience implementing data access controls, encryption policies, and compliance workflows in regulated environments.
Ability to integrate GenAI tools into data engineering processes to drive measurable productivity and quality gains, with strong engineering hygiene.
Demonstrated ability to lead teams, drive architectural decisions, and collaborate with cross-functional stakeholders.

Brownie Points:
Experience working in a PCI DSS or other central-bank-regulated environment with audit logging and data retention requirements.
Experience in the payments or banking domain, with use cases around reconciliation, chargeback analysis, or fraud detection.
Familiarity with data contracts, data mesh patterns, and data-as-a-product principles.
Experience using GenAI to automate data documentation, generate data tests, or support reconciliation use cases.
Exposure to performance tuning and cost optimization strategies in AWS Glue, Athena, and S3.
Experience building data platforms for ML/AI teams or integrating with model feature stores.

MatchMove Culture:
We cultivate a dynamic and innovative culture that fuels growth, creativity, and collaboration. Our fast-paced fintech environment thrives on adaptability, agility, and open communication. We focus on employee development, supporting continuous learning and growth through training programs, learning on the job, and mentorship. We encourage speaking up, sharing ideas, and taking ownership. Embracing diversity, our team spans across Asia, fostering a rich exchange of perspectives and experiences. Together, we harness the power of fintech and e-commerce to make a meaningful impact on people's lives.

Personal Data Protection Act:
By submitting your application for this job, you are authorizing MatchMove to: collect and use your personal data, and to disclose such data to any third party with whom MatchMove or any of its related corporations has service arrangements, in each case for all purposes in connection with your job application and employment with MatchMove; and retain your personal data for one year for consideration of future job opportunities (where applicable).
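For illustration (not part of MatchMove’s posting): a minimal boto3 sketch of triggering and checking an AWS Glue job, the kind of step an Airflow or Step Functions workflow would wrap. The job name, region, and arguments are placeholders.

import boto3

glue = boto3.client("glue", region_name="ap-southeast-1")  # placeholder region

# Placeholder job name and arguments for an ingestion-to-Iceberg run.
run = glue.start_job_run(
    JobName="curate-transactions",
    Arguments={
        "--source_path": "s3://raw-bucket/transactions/",
        "--target_path": "s3://lake-bucket/iceberg/transactions/",
    },
)

# Poll once for the run state; an orchestrator would retry with backoff.
status = glue.get_job_run(JobName="curate-transactions", RunId=run["JobRunId"])
print(status["JobRun"]["JobRunState"])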

Posted 1 day ago

Apply

5.0 years

2 - 7 Lacs

Hyderābād

Remote

At Meazure Learning, we believe in transforming learning and assessment experiences to unlock human potential. As a global leader in online testing and exam services, we support credentialing, licensure, workforce education, and higher education through purpose-built solutions that are secure, accessible, and deeply human-centered. With a global footprint across the U.S., Canada, India, and the U.K., our team is united by a passion for innovation and a commitment to integrity, quality, and learner success.

About the Role
We are looking for a seasoned Sr. DevOps Engineer to help us scale, secure, and optimize our infrastructure and deployment processes. This role is critical to enabling fast, reliable, and high-quality software delivery across our global engineering teams. You’ll be responsible for designing and maintaining cloud-based systems, automating operational workflows, and collaborating across teams to improve performance, observability, and uptime. The ideal candidate is hands-on, proactive, and passionate about creating resilient systems that support product innovation and business growth.

Join Us and You’ll…
Help define and elevate the user experience for learners and professionals around the world
Collaborate with talented, mission-driven colleagues across regions
Work in a culture that values trust, innovation, and transparency
Have the opportunity to grow, lead, and make your mark in a high-impact, global organization

Key Responsibilities
Design, implement, and maintain scalable, secure, and reliable CI/CD pipelines
Manage and optimize cloud infrastructure (e.g., AWS, Azure) and container orchestration (e.g., Kubernetes)
Drive automation across infrastructure and development workflows
Build and maintain monitoring, alerting, and logging systems to ensure reliability and observability
Collaborate with Engineering, QA, and Security teams to deliver high-performing, compliant solutions
Troubleshoot complex system issues in staging and production environments
Guide and mentor junior engineers and contribute to DevOps best practices

Desired Attributes: Key Skills
5+ years of experience in a DevOps or Site Reliability Engineering role
Deep knowledge of cloud infrastructure (AWS, Azure, or GCP)
Proficiency with containerization (Docker, Kubernetes) and Infrastructure as Code tools (Terraform, CloudFormation)
Hands-on experience writing code
Hands-on experience with CI/CD platforms (Jenkins, GitHub Actions, or similar)
Strong scripting capabilities (Bash, Python, or PowerShell)
Familiarity with monitoring and logging tools (Prometheus, Grafana, ELK, or Datadog)
A problem-solver with excellent communication and collaboration skills

The Total Rewards - The Benefits
Company Sponsored Health Insurance
Competitive Pay
Healthy Work Culture
Career Growth Opportunities
Learning and Development Opportunities
Referral Award Program
Company Provided IT Equipment (for remote team members)
Transportation Program (on-site team members)
Company Provided Meals (on-site team members)
14 Company Provided Holidays
Generous Leave Program

Learn more at www.meazurelearning.com

Meazure Learning is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind. Meazure Learning is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment.
All employment decisions at Meazure Learning are based on business needs, job requirements and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV Status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. Meazure Learning will not tolerate discrimination or harassment based on any of these characteristics.
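For illustration (not part of Meazure Learning’s posting): a minimal Python service instrumented with prometheus_client, exposing the request counters and latency histograms that monitoring stacks like Prometheus and Grafana scrape. The metric names and simulated workload are placeholders.

import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["status"])
LATENCY = Histogram("app_request_seconds", "Request latency in seconds")

def handle_request() -> None:
    """Simulated request handler that records latency and outcome."""
    with LATENCY.time():
        time.sleep(random.uniform(0.01, 0.1))  # stand-in for real work
        status = "200" if random.random() > 0.05 else "500"
        REQUESTS.labels(status=status).inc()

if __name__ == "__main__":
    start_http_server(8000)  # metrics exposed at :8000/metrics for scraping
    while True:
        handle_request()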

Posted 1 day ago

Apply

15.0 years

3 - 7 Lacs

Hyderābād

On-site

Job Description

Overview
PepsiCo is seeking a strategic and visionary Generative AI Solutions leader to lead transformative AI initiatives across Consumer, Commercial, and Reporting functions. This role will focus on designing scalable AI-driven business solutions, driving global change management, and aligning AI initiatives to enterprise goals. The ideal candidate brings deep domain experience, cross-functional leadership, and the ability to translate AI capabilities into measurable business outcomes, without managing the underlying AI platforms.

Responsibilities
AI Transformation Strategy & Roadmapping
Lead the definition and execution of enterprise-wide strategies for Consumer AI, Commercial AI, and Reporting AI use cases.
Identify, prioritize, and solution complex AI-powered business opportunities aligned with PepsiCo's digital agenda.
Translate market trends, AI capabilities, and business needs into an actionable Generative AI roadmap.

Solution Design & Cross-Functional Orchestration
Drive cross-functional solutioning using PepsiCo’s Gen-AI and agentic AI capabilities and platforms.
Collaborate with business, data, and engineering teams to craft impactful AI-agent-based solutions for commercial and consumer-facing functions, including Marketing & R&D.
Architect and design future AI solutions leveraging agentic frameworks.
Collaborate with engineering teams to provide the features necessary for building these solutions.
Work closely with Enterprise Architecture and Cloud Architecture teams to build scalable architecture.

Leadership, Influence, and Governance
Act as the face of Generative AI solutioning for senior executives and transformation leaders.
Drive alignment across global and regional teams for solution design, prioritization, and scale-up.
Provide technical leadership and mentorship to the AI engineering team.
Stay up to date with the latest advancements in AI and related technologies.
Drive innovation and continuous improvement in AI platform development.
Ensure solutions meet enterprise standards for Responsible AI, data privacy, and business continuity.

Qualifications
15+ years of experience in enterprise AI, digital transformation, or solution architecture, with a track record of leading AI-powered business programs.
Candidates must hold a BE/B.Tech/M.Tech/MS degree (full-time) in Engineering or a related technical field.
Strong understanding of consumer/commercial business functions and how to apply AI to transform them (sales, marketing, supply chain, insights, reporting).
Demonstrated experience designing Gen-AI or multi-agent solutions using orchestration frameworks like LangGraph, CrewAI, AutoGen, or Temporal.
Deep capability in AI-powered reporting, scenario modeling, insight generation, and intelligent automation.
Proven success in change management, stakeholder engagement, and global rollout of strategic programs.
Excellent communication and influencing skills.
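For illustration (not part of PepsiCo’s posting): a deliberately naive, standard-library-only sketch of the routing idea behind multi-agent orchestration. Real frameworks such as LangGraph or CrewAI replace this keyword router with an LLM-based planner, tool schemas, and state graphs; everything here is hypothetical.

from typing import Callable

# Hypothetical "agents": plain functions standing in for LLM-backed workers.
TOOLS: dict[str, Callable[[str], str]] = {
    "reporting": lambda q: f"[reporting agent] generated summary for: {q}",
    "scenario": lambda q: f"[scenario agent] modeled outcomes for: {q}",
}

def route(query: str) -> str:
    """Naive keyword router standing in for an LLM-based planner."""
    tool = "scenario" if "what if" in query.lower() else "reporting"
    return TOOLS[tool](query)

print(route("What if demand rises 10% in Q3?"))
print(route("Summarize weekly sales by region"))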

Posted 1 day ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

Remote

Every day, tens of millions of people come to Roblox to explore, create, play, learn, and connect with friends in 3D immersive digital experiences, all created by our global community of developers and creators. At Roblox, we’re building the tools and platform that empower our community to bring any experience that they can imagine to life. Our vision is to reimagine the way people come together, from anywhere in the world, and on any device. We’re on a mission to connect a billion people with optimism and civility, and we are looking for amazing talent to help us get there. A career at Roblox means you’ll be working to shape the future of human interaction, solving unique technical challenges at scale, and helping to create safer, more civil shared experiences for everyone.

Roblox Operating System (ROS) is our internal productivity platform that governs how Roblox operates as a company. Through an integrated suite of tools, ROS shapes how we make talent and personnel decisions, plan and organize work, discover knowledge, and scale efficiently. We are seeking a Senior Data Engineer to enhance our data posture and architecture, synchronizing data across vital third-party systems like Workday, Greenhouse, GSuite, and JIRA, as well as our internal Roblox OS application database. Our Roblox OS app suite encompasses internal tools and third-party applications for People Operations, Talent Acquisition, Budgeting, Roadmapping, and Business Analytics. We envision an integrated platform that streamlines processes while providing employees and leaders with the information they need to support the business. This is a new team in our Roblox India location, working closely with data scientists & analysts, product & engineering, and other stakeholders in India & the US. You will report to the Engineering Manager of the Roblox OS Team in your local location and collaborate with Roblox internal teams globally.

Work Model: This role is based in Gurugram and follows a hybrid structure: 3 days from the office (Tuesday, Wednesday & Thursday) and 2 days work from home.
Shift Time: 2:00pm - 10:30pm IST (cabs will be provided)

You Will
Design and Build Scalable Data Pipelines: Architect, develop, and maintain robust, scalable data pipelines using orchestration frameworks like Airflow to synchronize data between internal systems.
Implement and Optimize ETL Processes: Apply a strong understanding of ETL (Extract, Transform, Load) processes and best practices for seamless data integration and transformation.
Develop Data Solutions with SQL: Utilize your proficiency in SQL and relational databases (e.g., PostgreSQL) for advanced querying, data modeling, and optimizing data solutions.
Contribute to Data Architecture: Actively participate in data architecture and implementation discussions, ensuring data integrity and efficient data transposition. Manage and optimize data infrastructure, including databases, cloud storage solutions, and API endpoints.
Write High-Quality Code: Focus on developing clear, readable, testable, modular, and well-monitored code for data manipulation, automation, and software development, with a strong emphasis on data integrity.
Troubleshoot and Optimize Performance: Apply excellent analytical and problem-solving skills to diagnose data issues and optimize pipeline performance.
Collaborate Cross-Functionally: Work effectively with cross-functional teams, including data scientists, analysts, and business stakeholders, to translate business needs into technical data solutions.
Ensure Data Governance and Security: Implement data anonymization and pseudonymization techniques to protect sensitive data, and contribute to master data management (MDM) concepts including data quality, lineage, and governance frameworks.

You Have
Data Engineering Expertise: At least 6+ years of proven experience designing, building, and maintaining scalable data pipelines, coupled with a strong understanding of ETL processes and best practices for data integration.
Database and Data Warehousing Proficiency: Deep proficiency in SQL and relational databases (e.g., PostgreSQL), and familiarity with at least one cloud-based data warehouse solution (e.g., Snowflake, Redshift, BigQuery).
Technical Acumen: Strong scripting skills for data manipulation and automation. Familiarity with data streaming platforms (e.g., Kafka, Kinesis), and knowledge of containerization (e.g., Docker) and cloud infrastructure (e.g., AWS, Azure, GCP) for deploying and managing data solutions.
Data & Cloud Infrastructure Management: Experience managing and optimizing data infrastructure, including databases, cloud storage solutions, and configuring API endpoints.
Software Development Experience: Experience in software development with a focus on data integrity and transposition, and a commitment to writing clear, readable, testable, modular, and well-monitored code.
Problem-Solving & Collaboration Skills: Excellent analytical and problem-solving abilities to troubleshoot complex data issues, combined with strong communication and collaboration skills to work effectively across teams.
Passion for Data: A genuine passion for working with large amounts of data from various sources, understanding the critical impact of data quality on company strategy at an executive level.
Adaptability: Ability to thrive and deliver results in a fast-paced environment with competing priorities.

Roles that are based in an office are onsite Tuesday, Wednesday, and Thursday, with optional presence on Monday and Friday (unless otherwise noted). Roblox provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state or local laws. Roblox also provides reasonable accommodations for all candidates during the interview process.
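For illustration (not part of Roblox’s posting): a minimal pseudonymization helper using a keyed hash from the Python standard library. It yields a stable join key without storing recoverable PII; key handling is simplified for the sketch and belongs in a secret store in practice.

import hashlib
import hmac
import os

# Placeholder key; production code would fetch this from a secrets manager.
KEY = os.environ.get("PSEUDONYM_KEY", "dev-only-key").encode()

def pseudonymize(value: str) -> str:
    """Deterministic keyed hash: same input, same token, no recoverable PII."""
    normalized = value.lower().strip().encode()
    return hmac.new(KEY, normalized, hashlib.sha256).hexdigest()

print(pseudonymize("jane.doe@example.com"))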

Posted 1 day ago

Apply

8.0 - 12.0 years

2 - 9 Lacs

Hyderābād

On-site

Job description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC, and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organizations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realize their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, with 8-12 years of experience and the following requirements and skills:
Advanced SQL Development: Write complex SQL queries for data extraction, transformation, and analysis. Optimize SQL queries for performance and scalability.
SQL Tuning and Joins: Analyze and improve query performance. Deep understanding of joins, indexing, and query execution plans.
GCP BigQuery and GCS: Work with Google BigQuery for data warehousing and analytics. Manage and integrate data using Google Cloud Storage (GCS).
Airflow DAG Development: Design, develop, and maintain workflows using Apache Airflow. Write custom DAGs to automate data pipelines and processes.
Python Programming: Develop and maintain Python scripts for data processing and automation. Debug and optimize Python code for performance and reliability.
Shell Scripting: Write and debug basic shell scripts for automation and system tasks.
Continuous Learning: Stay updated with the latest tools and technologies in data engineering. Demonstrate a strong ability and attitude to learn and adapt quickly.
Communication: Collaborate effectively with cross-functional teams. Clearly communicate technical concepts to both technical and non-technical stakeholders.

Requirements
To be successful in this role, you should meet the following requirements:
Advanced SQL writing and query optimization.
Strong understanding of SQL tuning, joins, and indexing.
Hands-on experience with GCP services, especially BigQuery and GCS.
Proficiency in Python programming and debugging.
Experience with Apache Airflow and DAG development.
Basic knowledge of shell scripting.
Excellent problem-solving skills and a growth mindset.
Strong verbal and written communication skills.
Experience with data pipeline orchestration and ETL processes.
Familiarity with other GCP services like Dataflow or Pub/Sub.
Knowledge of CI/CD pipelines and version control (e.g., Git).

You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by HSBC Software Development India.
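For illustration (not part of HSBC’s posting): a minimal Apache Airflow DAG skeleton of the kind this role would write to automate a GCS-to-BigQuery pipeline. The DAG id, schedule, and task bodies are placeholders, and the `schedule` argument assumes Airflow 2.4 or newer.

from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract_to_gcs():
    # Placeholder: pull from the source system and land files in GCS.
    ...

def load_to_bigquery():
    # Placeholder: load the landed files into a BigQuery staging table.
    ...

default_args = {"retries": 2, "retry_delay": timedelta(minutes=5)}

with DAG(
    dag_id="daily_gcs_to_bigquery",  # illustrative name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args=default_args,
) as dag:
    extract = PythonOperator(task_id="extract_to_gcs", python_callable=extract_to_gcs)
    load = PythonOperator(task_id="load_to_bigquery", python_callable=load_to_bigquery)
    extract >> load  # load runs only after extraction succeeds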

Posted 1 day ago

Apply

7.0 years

0 Lacs

Hyderābād

On-site

Job Description

Overview
As a key member of the team, you will be responsible for designing, building, and maintaining the data pipelines and platforms that support analytics, machine learning, and business intelligence. You will lead a team of data engineers and collaborate closely with cross-functional stakeholders to ensure that data is accessible, reliable, secure, and optimized for AI-driven applications.

Responsibilities
Architect and implement scalable data solutions to support LLM training, fine-tuning, and inference workflows.
Lead the development of ETL/ELT pipelines for structured and unstructured data across diverse sources.
Ensure data quality, governance, and compliance with industry standards and regulations.
Collaborate with Data Scientists, MLOps, and product teams to align data infrastructure with GenAI product goals.
Mentor and guide a team of data engineers, promoting best practices in data engineering and DevOps.
Optimize data workflows for performance, cost-efficiency, and scalability in cloud environments.
Drive innovation by evaluating and integrating modern data tools and platforms (e.g., Databricks, Azure).

Qualifications
Bachelor’s or Master’s degree in Computer Science, Data Engineering, Information Systems, or a related technical field.
7+ years of experience in data engineering, with at least 2+ years in a leadership or senior role.
Proven experience designing and managing data platforms and pipelines in cloud environments (Azure, AWS, or GCP).
Experience supporting AI/ML workloads, especially involving Large Language Models (LLMs).
Strong proficiency in SQL and Python.
Hands-on experience with data orchestration tools.
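For illustration (not part of PepsiCo’s posting): a small sketch of a document-chunking step used when preparing text corpora for LLM training or RAG ingestion. Word counts stand in for tokenizer tokens, and the window sizes are placeholders.

def chunk_text(text: str, max_tokens: int = 200, overlap: int = 20) -> list[str]:
    """Split a document into overlapping word-window chunks for LLM ingestion.

    Word counts approximate tokenizer tokens in this sketch.
    """
    words = text.split()
    step = max_tokens - overlap  # slide forward, keeping some shared context
    return [" ".join(words[i:i + max_tokens]) for i in range(0, len(words), step)]

# 450 words with a 200-word window and 20-word overlap yield 3 chunks.
sample = ("word " * 450).strip()
chunks = chunk_text(sample)
print(len(chunks), [len(c.split()) for c in chunks])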

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
