
14428 Orchestration Jobs - Page 21

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We seek an experienced Principal Data Scientist to lead our data science team and drive innovation in machine learning, advanced analytics, and Generative AI. This role blends strategic leadership with deep technical expertise across ML engineering, LLMs, deep learning, and multi-agent systems. You will be at the forefront of deploying AI-driven solutions, including agentic frameworks and LLM orchestration (see the sketch after this listing), to tackle complex, real-world problems at scale.

Primary Stack:
- Languages: Python, SQL
- Cloud Platforms: AWS or GCP preferred
- ML & Deep Learning: PyTorch, TensorFlow, Scikit-learn
- GenAI & LLM Toolkits: Hugging Face, LangChain, OpenAI APIs, Cohere, Anthropic
- Agentic & Orchestration Frameworks: LangGraph, CrewAI, Agno, AutoGen, AutoGPT
- Vector Stores & Retrieval: FAISS, Pinecone, Weaviate
- MLOps & Deployment: MLflow, SageMaker, Vertex AI, Kubeflow, Docker, Kubernetes, FastAPI

Key Responsibilities:
- Lead and mentor a team of 10+ data scientists and ML engineers, promoting a culture of innovation, ownership, and cross-functional collaboration.
- Drive the development, deployment, and scaling of advanced machine learning, deep learning, and GenAI applications across the business.
- Build and implement agentic architectures and multi-agent systems using tools like LangGraph, CrewAI, and Agno to solve dynamic workflows and enhance LLM reasoning capabilities.
- Architect intelligent agents capable of autonomous planning, decision-making, tool use, and collaboration.
- Leverage LLMs and transformer-based models to power solutions in NLP, conversational AI, information retrieval, and decision support.
- Develop and scale ML pipelines on cloud platforms, ensuring performance, reliability, and reproducibility.
- Establish and maintain MLOps processes (CI/CD for ML, monitoring, governance) and ensure best practices in responsible AI.
- Collaborate with product, engineering, and business teams to align AI initiatives with strategic goals.
- Stay ahead of the curve on AI/ML trends, particularly in the multi-agent and agentic systems landscape, and advocate for their responsible adoption.
- Present results, insights, and roadmaps to senior leadership and non-technical stakeholders in a clear, concise manner.

Qualifications:
- 9+ years of experience in data science, business analytics, or ML engineering, with 3+ years in a leadership or principal role.
- Demonstrated experience in architecting and deploying LLM-based solutions in production environments.
- Deep understanding of deep learning, transformers, and modern NLP.
- Proven hands-on experience building multi-agent systems using LangGraph, CrewAI, Agno, or related tools.
- Strong grasp of agent design principles, including memory management, planning, tool selection, and self-reflection loops.
- Expertise in cloud-based ML platforms (e.g., AWS SageMaker, GCP Vertex AI) and MLOps best practices.
- Familiarity with retrieval-augmented generation (RAG) and vector databases (e.g., FAISS, Pinecone).
- Excellent communication, stakeholder engagement, and cross-functional leadership skills.
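Illustrative aside, not part of the posting: a minimal sketch of the kind of agentic workflow this role describes, using LangGraph's StateGraph API. The node logic, state fields, and routing rule are stub assumptions; a real system would call an LLM and tools where noted.

```python
# Minimal LangGraph-style agent loop: plan -> act, with a routing rule
# deciding when the agent is done. The "LLM" and "tool" here are stubs.
from typing import TypedDict
from langgraph.graph import StateGraph, END

class AgentState(TypedDict):
    question: str
    plan: str
    answer: str

def plan_node(state: AgentState) -> AgentState:
    # A real planner would prompt an LLM to decompose the question.
    return {**state, "plan": f"look up: {state['question']}"}

def act_node(state: AgentState) -> AgentState:
    # A real actor would execute a tool (search, SQL, RAG retrieval).
    return {**state, "answer": f"stubbed result for '{state['plan']}'"}

def route(state: AgentState) -> str:
    # Hypothetical routing rule: finish once an answer is present.
    return END if state.get("answer") else "act"

graph = StateGraph(AgentState)
graph.add_node("plan", plan_node)
graph.add_node("act", act_node)
graph.set_entry_point("plan")
graph.add_conditional_edges("plan", route, {"act": "act", END: END})
graph.add_edge("act", END)
app = graph.compile()

print(app.invoke({"question": "latest churn drivers", "plan": "", "answer": ""}))
```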

Posted 4 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together.

Description
United's Digital Technology team is comprised of many talented individuals all working together with cutting-edge technology to build the best airline in the history of aviation. Our team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Job Overview And Responsibilities
United Airlines is seeking talented people to join the Data Engineering Operations team. Key responsibilities include configuring and managing infrastructure, implementing continuous integration/continuous deployment (CI/CD) pipelines, and optimizing system performance. You will work to improve efficiency, enhance scalability, and ensure the reliability of systems through monitoring and proactive measures. We are seeking creative, driven, detail-oriented individuals who enjoy tackling tough problems with data and insights. Individuals who have a natural curiosity and desire to solve problems are encouraged to apply. Collaboration, scripting, and proficiency in tools for version control and automation are critical skills for success in this role.
- Translate product strategy and requirements into suitable, maintainable, and scalable solution design according to existing architecture guardrails
- Collaborate with development and operations teams to understand project requirements and design effective DevOps solutions
- Implement and maintain CI/CD pipelines for automated software builds, testing, and deployment
- Manage and optimize cloud-based infrastructure to ensure scalability, security, and performance
- Implement and maintain monitoring and alerting systems for proactive issue resolution (see the sketch after this listing)
- Work closely with cross-functional teams to troubleshoot and resolve infrastructure-related issues
- Automate repetitive tasks and processes to improve efficiency and reduce manual intervention

Key Responsibilities
- Design, deploy, and maintain cloud infrastructure on AWS.
- Set up and manage Kubernetes clusters for container orchestration.
- Design, implement, and manage scalable, secure, and highly available AWS infrastructure using Terraform.
- Develop and manage Infrastructure as Code (IaC) modules and reusable components.
- Collaborate with developers, architects, and other DevOps engineers to design cloud-native applications and deployment strategies.
- Manage and optimize CI/CD pipelines using tools like GitHub Actions, GitLab CI, Jenkins, or similar.
- Manage and optimize the Databricks platform.
- Monitor infrastructure health and performance using AWS CloudWatch, Prometheus, Grafana, etc.
- Ensure cloud security best practices, including IAM policies, VPC configurations, data encryption, and secrets management.
- Create and manage networking infrastructure such as VPCs, subnets, security groups, route tables, NAT gateways, etc.
- Handle deployment and configuration of services such as EC2, RDS, Glue, S3, ECS/EKS, Lambda, API Gateway, Kinesis, MWAA, DynamoDB, CloudFront, Route 53, SQS, SNS, Athena, ELB/ALB.
- Maintain logging, alerting, and monitoring systems to ensure reliability and performance.

This position is offered on local terms and conditions. Expatriate assignments and sponsorship for employment visas, even on a time-limited visa status, will not be awarded. This position is for United Airlines Business Services Pvt. Ltd - a wholly owned subsidiary of United Airlines Inc.

Qualifications
What’s needed to succeed (Minimum Qualifications):
- Bachelor's degree in Computer Science, Engineering, or related field
- 5+ years of IT experience as a DevOps Engineer or in a similar role
- Experience with AWS infrastructure designs, implementation, and support
- Proficiency in scripting languages (e.g., Bash, Python) and configuration management tools
- Experience with database systems such as PostgreSQL, Redshift, and MySQL
- Must be legally authorized to work in India for any employer without sponsorship
- Must be fluent in English (written and spoken)
- Successful completion of interview required to meet job qualification
- Reliable, punctual attendance is an essential function of the position

What will help you propel from the pack (Preferred Qualifications):
- Master’s in Computer Science or related STEM field
- Strong experience with continuous integration & delivery using Agile methodologies
- DevOps experience in the transportation/airline industry
- Knowledge of security best practices in a DevOps environment
- Experience with logging and monitoring tools (e.g., Dynatrace, Datadog)
- Strong problem-solving and communication skills
- Experience with Harness tools
- Experience with microservices architecture and serverless applications
- Knowledge of database technologies (PostgreSQL, Redshift, MySQL)
- AWS certifications (e.g., AWS Certified Solutions Architect, AWS Certified Developer)
- Databricks Platform certifications
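Illustrative aside, not part of the posting: a minimal boto3 sketch of the proactive monitoring/alerting piece of this role, creating a CloudWatch CPU alarm on an EC2 instance. The region, instance ID, alarm name, and SNS topic ARN are placeholders.

```python
# Sketch: create a CloudWatch alarm that notifies an SNS topic when an
# EC2 instance runs hot, one small piece of "monitoring and alerting for
# proactive issue resolution". All identifiers below are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="example-high-cpu",          # placeholder alarm name
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    Statistic="Average",
    Period=300,                # evaluate 5-minute averages
    EvaluationPeriods=2,       # two consecutive breaches before alarming
    Threshold=80.0,            # percent CPU
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:example-alerts"],
)
```

In practice an alarm like this would be defined in Terraform alongside the rest of the IaC, with the SNS action fanning out to the on-call tooling.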

Posted 4 days ago

Apply

5.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Achieving our goals starts with supporting yours. Grow your career, access top-tier health and wellness benefits, build lasting connections with your team and our customers, and travel the world using our extensive route network. Come join us to create what’s next. Let’s define tomorrow, together.

Description
United's Digital Technology team designs, develops, and maintains massively scaling technology solutions brought to life with innovative architectures, data analytics, and digital solutions.

Our Values: At United Airlines, we believe that inclusion propels innovation and is the foundation of all that we do. Our Shared Purpose: "Connecting people. Uniting the world." drives us to be the best airline for our employees, customers, and everyone we serve, and we can only do that with a truly diverse and inclusive workforce. Our team spans the globe and is made up of diverse individuals all working together with cutting-edge technology to build the best airline in the history of aviation. With multiple employee-run "Business Resource Group" communities and world-class benefits like health insurance, parental leave, and space available travel, United is truly a one-of-a-kind place to work that will make you feel welcome and accepted. Come join our team and help us make a positive impact on the world.

Job Overview And Responsibilities
Our Digital Operations Center is constantly working to enhance the experience of our customers across our Digital Channels, based on data-driven analytics and timely and accurate reports. We are seeking a Senior Developer with deep expertise in building and maintaining cloud-native data platforms and pipelines using AWS and modern development practices. The ideal candidate is a hands-on engineer with experience in serverless compute, streaming data architectures, and DevOps automation, who thrives in a collaborative, fast-paced environment. This role will be instrumental in designing high-performance, scalable systems leveraging AWS services and the Well-Architected Framework.
- Design, build, and maintain scalable and efficient ETL/ELT pipelines using tools such as AWS Fargate, S3, Kinesis, and Flink or custom scripts (see the sketch after this listing).
- Integrate data from various sources including APIs, cloud services, databases, and flat files into a centralized data warehouse (e.g., Postgres, BigQuery, Redshift).
- Design and optimize SQL database schemas and queries, primarily on Amazon Aurora (MySQL/PostgreSQL), ensuring high performance and data integrity across workloads.
- Monitor, troubleshoot, and resolve data pipeline and infrastructure issues.
- Build CI/CD pipelines using GitHub Actions, driving automation in deployment and testing.
- Apply best practices for monitoring, alerting, cost optimization, and security, in line with AWS’s Well-Architected Framework.
- Collaborate with cross-functional teams including product, analytics, and DevOps to design end-to-end solutions.

United Airlines is an equal opportunity employer. United Airlines recruits, employs, trains, compensates, and promotes regardless of race, religion, color, national origin, gender identity, sexual orientation, physical ability, age, veteran status, and other protected status as required by applicable law.

Required Qualifications
- Bachelor's degree in Computer Science, Computer Engineering, Electrical Engineering, Management Information Systems, or related field
- 2–5 years of experience in data engineering or a similar role
- Strong SQL skills and experience working with relational databases (e.g., PostgreSQL, MySQL)
- Experience with cloud platforms like AWS, GCP, or Azure (e.g., Fargate, S3, Lambda, Aurora, Redis)
- Familiarity with data modeling and building data warehouses/lakes
- Experience designing real-time or near-real-time data streaming pipelines
- Proficiency in programming languages like Python or Scala
- CI/CD knowledge, particularly using GitHub Actions or similar tools
- Solid understanding of performance tuning, cost-effective cloud resource management, and data architecture principles
- Understanding of data governance, quality, and security best practices
- Must be legally authorized to work in India for any employer without sponsorship

Preferred Qualifications
- MBA preferred
- Data Engineer: AWS certifications (e.g., Developer Associate, Solutions Architect)
- Knowledge of modern data stack tools like dbt, Fivetran, Snowflake, or Looker
- Exposure to containerization and orchestration tools (e.g., Docker, Kubernetes)
- Experience with data lake architectures or hybrid transactional/analytical processing systems
- Familiarity with Agile development practices and cloud-native observability tools
- Experience working with Quantum Metric and/or Dynatrace a plus
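Illustrative aside, not part of the posting: a minimal boto3 sketch of the intake side of the streaming pipelines described above, publishing JSON events to a Kinesis stream. The stream name and event shape are hypothetical.

```python
# Sketch: push JSON clickstream events into a Kinesis data stream.
# PartitionKey controls which shard an event lands on, so keying by
# session keeps a session's events ordered within a shard.
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

def publish_event(event: dict) -> None:
    """Write one event to the (placeholder) stream."""
    kinesis.put_record(
        StreamName="example-clickstream",           # placeholder stream
        Data=json.dumps(event).encode("utf-8"),
        PartitionKey=event["session_id"],
    )

publish_event({"session_id": "abc123", "page": "/checkout", "ts": 1700000000})
```

A consumer (Flink, Lambda, or a Fargate task) would then read the shards and land the records in Aurora or the warehouse.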

Posted 4 days ago

Apply

0 years

0 Lacs

Bhubaneswar, Odisha, India

Remote

Company Description
PractoMind Technologies specializes in building robust fintech solutions and streamlining integrations between fintechs and banking infrastructure. We collaborate extensively with fintech companies to overcome integration challenges with banks, accelerating their product development cycles. Our solutions, including API orchestration and compliance-ready architectures, ensure seamless connectivity and faster go-to-market strategies. PractoMind offers financial inclusion infrastructure development, rural banking solutions, transactional banking services, and API integration. We are dedicated to creating scalable, secure, and agile infrastructure to empower fintechs and enhance value for our clients.

Role Description
This is a full-time hybrid role for an Enterprise Sales Consultant, based in Bhubaneswar with some work from home acceptable. The Enterprise Sales Consultant will engage in day-to-day sales activities, consulting clients on suitable fintech solutions and addressing their integration challenges. The role involves maintaining excellent customer service and communication while analyzing market needs to provide tailored solutions. You will be responsible for building and maintaining client relationships, presenting product demonstrations, and driving sales growth.

Qualifications
- Analytical skills and consulting experience
- Strong communication and customer service abilities
- Proven sales skills and experience
- Excellent interpersonal and relationship-building skills
- Ability to work in a hybrid work environment
- Experience in fintech or related industries is a plus
- Strong understanding of banking infrastructure and API integrations
- Bachelor's degree in Business, Finance, or related field

Posted 4 days ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Software Engineer Consultant/Expert – GCP Data Engineer
Location: Chennai (Onsite)
Job ID: 34350
Employment Type: Contract
Budget: Up to ₹18 LPA
Assessment: Google Cloud Platform Engineer (HackerRank or equivalent)
Notice Period: Immediate Joiners Preferred

Role Summary
We are seeking a highly skilled GCP Data Engineer to support the modernization of enterprise data platforms. The ideal candidate will be responsible for designing and implementing scalable, high-performance data pipelines and solutions on Google Cloud Platform (GCP). You will work with large-scale datasets, integrating legacy and modern systems to enable advanced analytics and AI/ML capabilities. The role requires a deep understanding of GCP services, strong data engineering skills, and the ability to collaborate across teams to deliver robust data solutions.

Key Responsibilities
- Design and develop production-grade data engineering solutions using GCP services such as BigQuery, Dataflow, Dataform, Dataproc, Cloud Composer, Cloud SQL, Airflow, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, Pub/Sub, and App Engine.
- Develop batch and real-time streaming pipelines for data ingestion, transformation, and processing (see the sketch after this listing).
- Integrate data from multiple sources including legacy and cloud-based systems.
- Collaborate with stakeholders and product teams to gather data requirements and align technical solutions to business needs.
- Conduct in-depth data analysis and impact assessments for data migrations and transformations.
- Implement CI/CD pipelines using tools like Tekton, Terraform, and GitHub.
- Optimize data workflows for performance, scalability, and cost-effectiveness.
- Lead and mentor junior engineers; contribute to knowledge sharing and documentation.
- Champion data governance, data quality, security, and compliance best practices.
- Utilize monitoring/logging tools to proactively address system issues.
- Deliver high-quality code using Agile methodologies including TDD and pair programming.

Required Skills & Experience
- GCP Data Engineer Certification.
- Minimum 5+ years of experience designing and implementing complex data pipelines.
- 3+ years of hands-on experience with GCP.
- Strong expertise in SQL, Python, Java, or Apache Beam; Airflow, Dataflow, Dataproc, Dataform, Data Fusion, BigQuery, Cloud SQL, and Pub/Sub.
- Infrastructure-as-Code tools such as Terraform.
- DevOps tools: GitHub, Tekton, Docker.
- Solid understanding of microservice architecture, CI/CD integration, and container orchestration.
- Experience with data security, governance, and compliance in cloud environments.

Preferred Qualifications
- Experience with real-time data streaming using Apache Kafka or Pub/Sub.
- Exposure to AI/ML tools or integration with AI/ML pipelines.
- Working knowledge of data science principles applied on large datasets.
- Experience in a regulated domain (e.g., financial services or insurance).
- Experience with project management and agile tools (e.g., JIRA, Confluence).
- Strong analytical and problem-solving mindset.
- Effective communication skills and ability to collaborate with cross-functional teams.

Education
Required: Bachelor's degree in Computer Science, Engineering, or a related technical field.
Preferred: Master's degree or certifications in relevant domains.

Skills: GitHub, BigQuery, Airflow, ML, Pub/Sub, Terraform, Python, Apache Beam, Dataflow, GCP, GCP Data Engineer Certification, Tekton, Java, Dataform, Docker, Data Fusion, SQL, Dataproc, Cloud SQL, Cloud
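Illustrative aside, not part of the posting: a minimal Apache Beam sketch of the streaming pattern this role names, reading Pub/Sub messages and appending parsed rows to BigQuery. The project, topic, table, and message shape are placeholders, and running it requires a streaming-capable runner such as Dataflow.

```python
# Sketch: streaming Pub/Sub -> parse JSON -> BigQuery append.
# All resource names below are placeholders.
import json
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(streaming=True)  # plus runner/project flags in practice

with beam.Pipeline(options=options) as p:
    (
        p
        | "ReadPubSub" >> beam.io.ReadFromPubSub(
            topic="projects/example-project/topics/example-events")
        | "Parse" >> beam.Map(lambda msg: json.loads(msg.decode("utf-8")))
        | "WriteBQ" >> beam.io.WriteToBigQuery(
            "example-project:analytics.events",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND)
    )
```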

Posted 4 days ago

Apply

0.0 - 2.0 years

3 - 10 Lacs

Niranjanpur, Indore, Madhya Pradesh

Remote

Job Title - Sr. Data Engineer
Experience - 2+ Years
Location - Indore (onsite)
Industry - IT
Job Type - Full time

Roles and Responsibilities-
1. Design and develop scalable data pipelines and workflows for data ingestion, transformation, and integration.
2. Build and maintain data storage systems, including data warehouses, data lakes, and relational databases.
3. Ensure data accuracy, integrity, and consistency through validation and quality assurance processes.
4. Collaborate with data scientists, analysts, and business teams to understand data needs and deliver tailored solutions.
5. Optimize database performance and manage large-scale datasets for efficient processing.
6. Leverage cloud platforms (AWS, Azure, or GCP) and big data technologies (Hadoop, Spark, Kafka) for building robust data solutions.
7. Automate and monitor data workflows using orchestration frameworks such as Apache Airflow (see the sketch after this listing).
8. Implement and enforce data governance policies to ensure compliance and data security.
9. Troubleshoot and resolve data-related issues to maintain seamless operations.
10. Stay updated on emerging tools, technologies, and trends in data engineering.

Skills and Knowledge-
1. Core Skills:
● Proficient in Python (libraries: Pandas, NumPy) and SQL.
● Knowledge of data modeling techniques, including:
○ Entity-Relationship (ER) Diagrams
○ Dimensional Modeling
○ Data Normalization
● Familiarity with ETL processes and tools like:
○ Azure Data Factory (ADF)
○ SSIS (SQL Server Integration Services)
2. Cloud Expertise:
● AWS Services: Glue, Redshift, Lambda, EKS, RDS, Athena
● Azure Services: Databricks, Key Vault, ADLS Gen2, ADF, Azure SQL
● Snowflake
3. Big Data and Workflow Automation:
● Hands-on experience with big data technologies like Hadoop, Spark, and Kafka.
● Experience with workflow automation tools like Apache Airflow (or similar).

Qualifications and Requirements-
● Education: Bachelor’s degree (or equivalent) in Computer Science, Information Technology, Engineering, or a related field.
● Experience: Freshers with strong understanding, internships, and relevant academic projects are welcome; 2+ years of experience working with Python, SQL, and data integration or visualization tools is preferred.
● Other Skills: Strong communication skills, especially the ability to explain technical concepts to non-technical stakeholders; ability to work in a dynamic, research-oriented team with concurrent projects.

Job Types: Full-time, Permanent
Pay: ₹300,000.00 - ₹1,000,000.00 per year
Benefits: Paid sick time, Provident Fund, Work from home
Schedule: Day shift, Monday to Friday, Weekend availability
Supplemental Pay: Performance bonus
Ability to commute/relocate: Niranjanpur, Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred)
Experience: Data Engineer: 2 years (Preferred)
Work Location: In person
Application Deadline: 31/08/2025
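Illustrative aside, not part of the posting: a minimal Airflow DAG sketch of the orchestration pattern in point 7, wiring extract, transform, and load tasks in sequence. Task bodies are stubs; the DAG id and schedule are placeholders (the `schedule` argument assumes Airflow 2.4+).

```python
# Sketch: a daily ETL DAG with three dependent tasks.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull from source API / database")

def transform():
    print("clean and normalize with Pandas")

def load():
    print("write to the warehouse (e.g., Redshift or Snowflake)")

with DAG(
    dag_id="example_daily_etl",       # placeholder DAG id
    start_date=datetime(2025, 1, 1),
    schedule="@daily",                # Airflow 2.4+ keyword
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # extract, then transform, then load
```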

Posted 4 days ago

Apply

4.5 - 6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Roles & Responsibilities

Key Responsibilities:
- Develop robust, scalable Python-based applications aligned with company requirements.
- Integrate and implement Generative AI models into business applications (see the sketch after this listing).
- Design, build, and maintain data pipelines and data engineering solutions on Azure.
- Collaborate closely with cross-functional teams (data scientists, product managers, data engineers, and cloud architects) to define, design, and deploy innovative AI and data solutions.
- Build, test, and optimize AI pipelines, ensuring seamless integration with Azure-based data systems.
- Continuously research and evaluate new AI and Azure data technologies and trends to enhance system capabilities.
- Participate actively in code reviews, troubleshooting, debugging, and documentation.
- Ensure high standards of code quality, performance, security, and reliability.

Required Skills
- Advanced proficiency in Python programming, including libraries and frameworks like Django, Flask, and FastAPI.
- Experience in Generative AI technologies (e.g., GPT models, LangChain, Hugging Face).
- Solid expertise in Azure data engineering tools such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure Data Lake Storage.
- Familiarity with AI/ML libraries such as TensorFlow, PyTorch, or the OpenAI API.
- Experience with RESTful APIs, microservices architecture, and web application development.
- Strong understanding of databases (SQL, NoSQL) and ETL processes.
- Good knowledge of containerization and orchestration technologies like Docker and Kubernetes.
- Strong problem-solving, analytical, and debugging skills.

Preferred Qualifications
- Bachelor's/Master's degree in Computer Science, Engineering, or related fields.
- Prior experience developing AI-enabled products or implementing AI into applications.
- Azure certifications (AZ-204, DP-203, AI-102) or equivalent.
- Exposure to DevOps practices and CI/CD pipelines, especially in Azure DevOps.

Soft Skills
- Strong communication and teamwork skills.
- Ability to work independently and proactively.
- Passion for continuous learning and professional growth.

Location: Gurgaon, Noida, Pune, Bengaluru, Kochi
Experience: 4.5-6 Years
Skills: Primary Skill: Data Engineering; Sub Skill(s): Data Engineering; Additional Skill(s): Python, Azure Data Factory, Python-Django

About The Company
Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
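Illustrative aside, not part of the posting: a minimal FastAPI sketch of "integrating a GenAI model into a business application". The model call is a stub; a real build would swap in an OpenAI or Azure OpenAI client. Route and payload names are hypothetical.

```python
# Sketch: a FastAPI microservice fronting a generative model.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Prompt(BaseModel):
    text: str

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (e.g., an Azure OpenAI deployment).
    return f"echo: {prompt}"

@app.post("/generate")
def generate_endpoint(prompt: Prompt) -> dict:
    return {"completion": generate(prompt.text)}

# Run locally with: uvicorn main:app --reload
```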

Posted 4 days ago

Apply

6.0 - 8.0 years

0 Lacs

Gurugram, Haryana, India

Remote

Experience: 6-8 years
Location: Remote
Domain: Creator Economy, AI Video Tools

🔧 Responsibilities:
- Develop, test, and deploy LLM-driven agents to automate content strategy, scriptwriting, and moderation
- Implement agentic workflows using frameworks like LangChain, LangGraph, or CrewAI
- Design real-time script analyzers to flag and intercept deepfake/misuse risks
- Integrate voice cloning, avatar generation APIs, and safety intercept logic
- Build structured conversation memory, action plans, and tool-calling agents for video planning (see the sketch after this listing)
- Collaborate with the backend team to expose agent actions via API

✅ Requirements:
- Strong Python skills with experience in AI orchestration frameworks
- Hands-on with OpenAI, Anthropic, Llama, or Mistral APIs
- Experience with vector DBs (e.g., FAISS, Weaviate) for content memory
- Deep understanding of prompt engineering, function calling, RAG, or agent autonomy
- Awareness of deepfake, safety, or fairness risks in generative AI

Good to have: Experience with Hugging Face, LangGraph, or Guardrails AI
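Illustrative aside, not part of the posting: a minimal FAISS sketch of "content memory", indexing script snippets and retrieving the nearest ones for a query. Random vectors stand in for real embeddings (which would come from a sentence-transformer or an embeddings API) so the example runs standalone; the snippets are hypothetical.

```python
# Sketch: FAISS as a tiny content memory for an agent.
import faiss
import numpy as np

dim = 384                                  # typical sentence-embedding width
rng = np.random.default_rng(0)

snippets = ["intro hook", "product demo beat", "call to action"]
# Placeholder embeddings; a real system would embed the snippets.
embeddings = rng.standard_normal((len(snippets), dim)).astype("float32")

index = faiss.IndexFlatL2(dim)             # exact L2 search, fine at small scale
index.add(embeddings)

query = rng.standard_normal((1, dim)).astype("float32")
distances, ids = index.search(query, 2)    # two nearest snippets
print([snippets[i] for i in ids[0]])
```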

Posted 4 days ago

Apply

4.0 years

0 Lacs

Gurugram, Haryana, India

On-site

About the Company:
● Founded in 2019, zingbus is building the most trusted brand for intercity travel. Keeping reliability and safety at the core, we are building a two-sided platform that delivers a standardized journey experience for travelers and increased earnings for our supply partners.
● We connect 300+ cities across the country through our daily services and have served 2.5 Mn+ unique customers so far, and we are aggressively working towards fleet electrification and the establishment of charging technology and infrastructure.
● Raised Series A from Y Combinator, InfoEdge, AdvantEdge, and other prominent investors from India and Silicon Valley. Additionally, secured a significant investment of $9 million from bp Ventures.

https://yourstory.com/2021/03/gurugram-startup-zingbus-takes-tech-route-intercity-bus-travel-smoother/amp
https://www.bp.com/en_in/india/home/news/press/bp-ventures-invests-9-million-in-indias-leading-intercity-bus-platform-zingbus.html

We’re looking for a CRM Lead with 4-5 years of hands-on experience in building and managing CRM systems. This role will be critical in driving user acquisition, engagement, retention, and reactivation through personalized user journeys and high-impact CRM campaigns.

CRM Strategy & Lifecycle Ownership
- Build and execute CRM strategies across the entire user lifecycle, from onboarding to churn prevention and reactivation.
- Develop frameworks for journey orchestration across web, app, and messaging channels.

Lifecycle Campaigns & Journeys
- Design and manage high-impact automated journeys (e.g., welcome, inactivity, upsell, win-back, feedback).
- Continuously optimize flows using A/B testing and cohort-level analysis.

Multi-Channel Campaign Management
- Plan and execute personalized, data-driven campaigns across push notifications, email, SMS, and WhatsApp.
- Own campaign calendars, creative briefing, execution, and post-campaign analysis.

Segmentation & Personalization
- Leverage behavioral and transactional data to build micro-segments.
- Drive personalization at scale based on user actions, platform preferences, and engagement patterns.

Tool & Tech Ownership
- Manage and optimize usage of CRM tools like CleverTap, MoEngage, and WebEngage.
- Define events, triggers, attributes, and user cohorts for automation and analysis.

Cross-Functional Collaboration
- Work closely with Product, Design, Growth, Analytics, and Engineering teams to align CRM strategies with overall business objectives.

Reporting & Analytics
- Monitor key lifecycle metrics like DAUs, retention, CTR, and conversions.
- Translate data into actionable insights to improve campaign performance and user experience.

In-Product Messaging
- Collaborate with Product and Design to enhance in-app touchpoints and convert them into CRM opportunities (e.g., nudges, modals, banners).

What we're looking for:
- 4-5 years of hands-on CRM or lifecycle marketing experience in a fast-paced consumer tech environment (preferably travel, mobility, or e-commerce).
- Deep understanding of user lifecycle stages, funnel analytics, and retention metrics.
- Strong command over CRM platforms like CleverTap, MoEngage, WebEngage, or similar.
- A data-driven approach: able to extract insights and translate them into impactful actions.
- Creative thinking with a growth mindset: someone who enjoys ideating and experimenting with campaigns.
- Strong communication and collaboration skills, with the ability to manage stakeholders across teams.

Posted 4 days ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Software Engineer – Senior (Full Stack Backend – Java)
Location: Chennai (Onsite)
Employment Type: Contract
Budget: Up to ₹22 LPA
Job ID: 34347
Assessment: Full Stack Backend – Java (via HackerRank or equivalent platform)
Notice Period: Immediate Joiners Preferred

Role Overview
We are seeking a highly skilled Senior Software Engineer with expertise in backend development, microservices architecture, and cloud-native technologies. The selected candidate will be part of a collaborative product team responsible for developing and deploying REST APIs and microservices for digital platforms. The role involves working in a fast-paced agile environment, contributing to both engineering excellence and product innovation.

Key Responsibilities
- Design, develop, test, and deploy high-quality, scalable backend systems and APIs.
- Collaborate with cross-functional teams including product managers, designers, and QA engineers to deliver customer-centric solutions.
- Write clean, maintainable, and well-documented code following industry best practices.
- Participate in pair programming, code reviews, and test-driven development.
- Contribute to defining architecture and service-level objectives.
- Conduct proof-of-concepts for new capabilities and features.
- Drive continuous improvement in code quality, testing, and deployment processes.

Required Skills
- 7+ years of hands-on experience in software engineering with a focus on backend development or full-stack engineering.
- Strong expertise in Java and microservices architecture.
- Solid understanding and working knowledge of:
  - Google Cloud Platform (GCP) services including BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL, and Airflow.
  - Infrastructure as Code (IaC) tools like Terraform.
  - CI/CD tools such as Tekton.
  - Databases: PostgreSQL, Cloud SQL.
  - Programming/scripting: Python, PySpark (see the sketch after this listing).
  - Building and consuming RESTful APIs.

Preferred Qualifications
- Experience with containerization and orchestration tools.
- Familiarity with monitoring tools and service-level indicators (SLIs/SLAs).
- Exposure to agile frameworks like Extreme Programming (XP), Scrum, or Kanban.

Education
Required: Bachelor's degree in Computer Science, Engineering, or a related technical discipline.

Skills: RESTful APIs, PySpark, Tekton, Data Fusion, BigQuery, Cloud SQL, microservices architecture, microservices, software, Terraform, PostgreSQL, Dataflow, code, cloud, Dataproc, Google Cloud Platform (GCP), CI/CD, Airflow, Python, Java
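Illustrative aside, not part of the posting: a minimal PySpark sketch of the batch-processing side of this stack, a small daily rollup of order events. The bucket path and column names are hypothetical.

```python
# Sketch: read raw order events, aggregate per day, write a parquet mart.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("example-orders-rollup").getOrCreate()

# Placeholder input path and schema-by-convention JSON files.
orders = spark.read.json("gs://example-bucket/orders/*.json")

daily = (
    orders
    .withColumn("order_date", F.to_date("created_at"))
    .groupBy("order_date")
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

daily.write.mode("overwrite").parquet("gs://example-bucket/marts/daily_orders")
```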

Posted 4 days ago

Apply

2.0 years

0 Lacs

Greater Kolkata Area

On-site

Job Description
Capgemini’s Connected Marketing Operations practice offers and delivers Marketing Operations services to its top Fortune 500 clients. Our portfolio of services is focused on delivering the latest and best in Content Operations, Campaign Services, and Performance Marketing solutions to drive marketing and sales outcomes for clients. We are looking for a results-oriented senior leader to lead global delivery and client relationship management for multiple projects. If you are driven by the hyper-growth challenge and love to wow clients with your innovative solutions, then this is just the right leadership role for you!

Primary Skills
The role responsibilities include:
- Responsible for delivery excellence of all programs and accounts rolling up to the practice through strong governance and review mechanisms.
- Continual innovation aimed at creating future-proof solutions for marketing functions, with a focus on industrialization, delivery process standardization, and reuse across the marketing operations and digital marketing scope.
- Develop use cases in generative AI and other technologies prevalent for marketing process optimization.
- Accurately forecast revenue, head count, profitability, margins, bill rates, and utilization.
- Ensure attention to demand prediction and fulfilment across the MU.
- Represent Capgemini in client steering committee meetings.
- Build strong executive connects to enable management of client expectations and foster lasting client relationships.
- Continually seek opportunities to increase customer satisfaction and deepen client relationships.
- Work closely to ensure that the operational parameters are green.
- Work closely and collaborate with Practice/Global Account Managers/AE/BDE to grow the business across various industry verticals and the market units, and ensure the delivery function runs efficiently.
- Identify business development and "add-on" sales opportunities in existing programs.
- While the primary function will be development and delivery of programs within the MU, the role also carries responsibility for looking ahead into the next 2-3 years and ensuring that a strategic road map is in place for the future, in conjunction with AE/BDEs/Sales Leaders.

Secondary Skills
Our ideal candidate will have:
- 18+ years' experience with a large marketing shared services or marketing service provider, with a strong project track record.
- Minimum 18 years' experience in delivery management comprising engagements for global clients in Marketing Operations areas: Artwork Management, Media and Creative, Advertising Operations, Marketing Asset Management, Product Data Orchestration, and Innovation Project Management.
- Experience in managing big P&Ls for operations/delivery for international clients.
- Demonstrated ability to influence without formal authority within cross-functional teams on adopting new ways of working.
- Previous experience successfully leading large delivery teams (400+) of marketing specialists with a strong focus on talent management.
- Good understanding of the latest tech and platforms in marketing domains, including GenAI.
- Previous experience leading delivery in a recognized agency will be an added advantage.
- Exceptional communication skills.
- Experience with international clients is mandatory.
- Working experience with cross-cultural teams spread across India, Latin America, and European centers is required.

Posted 4 days ago

Apply

15.0 - 20.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description:

About Us
At Bank of America, we are guided by a common purpose to help make financial lives better through the power of every connection. Responsible Growth is how we run our company and how we deliver for our clients, teammates, communities, and shareholders every day. One of the keys to driving Responsible Growth is being a great place to work for our teammates around the world. We’re devoted to being a diverse and inclusive workplace for everyone. We hire individuals with a broad range of backgrounds and experiences and invest heavily in our teammates and their families by offering competitive benefits to support their physical, emotional, and financial well-being. Bank of America believes both in the importance of working together and offering flexibility to our employees. We use a multi-faceted approach for flexibility, depending on the various roles in our organization. Working at Bank of America will give you a great career with opportunities to learn, grow and make an impact, along with the power to make a difference. Join us!

Global Business Services
Global Business Services delivers Technology and Operations capabilities to Lines of Business and Staff Support Functions of Bank of America through a centrally managed, globally integrated delivery model and globally resilient operations. Global Business Services is recognized for flawless execution, sound risk management, operational resiliency, operational excellence, and innovation. In India, we are present in five locations and operate as BA Continuum India Private Limited (BACI), a non-banking subsidiary of Bank of America Corporation and the operating company for India operations of Global Business Services.

Process Overview
Reference Data Technology is responsible for the strategy, sourcing, maintenance, and distribution of Reference Data across the Bank. It is also responsible for Global Markets Client Onboarding, Reg W, and FMU Reporting. Reference Data comprises three main categories: Client, Instrument, and Book. Reference Data Technology is a provider of data for Front to Back Flows, Enterprise Supply Chain, Risk, Banking, GWIM, and Compliance. PME and Bookmap are the firm's Authorized Data Sources for Instrument and Book data. Cesium is the System of Record for Client data in Global Markets. The data domains include but are not limited to:
- Client Counterparty: Organizations, Individuals, Prospects, Contacts
- Client Accounts & SSIs: Cash, Derivatives, Processing
- Book: Trading Books, Subledgers, Volcker classifications
- Instruments: Listed Products, Cleared Products, EOD Pricing, Holiday Calendars

Job Description
Generative AI (GenAI) presents an exciting opportunity to derive valuable insights from data and drive revenue growth, efficiencies, and improved business processes. Technology will collaborate with Global Markets Sales & Trading, the Quantitative Strategies & Data Group (QSDG), and Platform teams on the design and buildout of its global GenAI platform. The platform will cater to a rapidly growing number of use cases that harness the power of GenAI. Both proprietary and open-source Large Language Models, and large structured and unstructured data sets, will be leveraged to produce insights for Global Markets and its clients. We are seeking a Software Engineer to build this platform.

In this role, you will ensure that software is developed to meet functional, non-functional, and compliance requirements, and that solutions are well designed, with maintainability, ease of integration, and testing built in from the outset. Hands-on engagement in the full software lifecycle is expected, including requirements analysis, architecture design, coding, testing, and deployment. Job expectations include strong knowledge of development and testing practices common to the industry as well as design and architectural patterns.

Responsibilities:
- Hands-on people and project manager role, responsible for managing a mid-sized team and overseeing their deliverables on a day-to-day basis.
- Expected to contribute in an individual capacity to software development, design, and code reviews.
- Design, develop, and modify architecture components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained.
- Mentor other software engineers and coach the team on Continuous Integration and Continuous Deployment (CI/CD) practices and the automation tool stack.
- Code solutions and implement automated unit tests to deliver a requirement/story per the defined acceptance criteria and compliance requirements (see the sketch after this listing).
- Help evaluate and execute proofs of concept as necessary to implement new ideas or mitigate risk.
- Design, develop, and maintain automated test suites (integration, regression, performance).
- Ensure the solution meets product acceptance criteria with minimal technical debt.
- Troubleshoot build and setup failures and facilitate resolution.
- Ensure execution and delivery meet technology's expectations in terms of functionality, quality, performance, reliability, and timeline.
- Communicate status frequently to technology partners.

Requirements:
Education: BE / BTech / MTech / MCA / MSc
Certifications (if any): NA
Experience Range: 15 to 20 years in similar roles, preferably in the financial industry

Foundational Skills:
- Some exposure to managing mid-sized teams.
- Hands-on experience in AI/GenAI system design, implementation, and scaling, with expertise in large language models (LLMs) and AI frameworks.
- Advanced expertise in Python development and full-stack technologies.
- Proven ability to architect enterprise-scale solutions.
- Hands-on experience in application development in one or more of: MongoDB, Redis, React Framework, Impala, Autosys, FastAPI services, containerization.
- Experience working in large teams that collaboratively develop a shared multi-repo codebase using IDEs (e.g., VS Code rather than Jupyter Notebooks), Continuous Integration (CI), Continuous Deployment (CD), and Continuous Testing.
- Hands-on DevOps experience with one or more of the following enterprise development tools: Version Control (Git/Bitbucket), Build Orchestration (Jenkins), Code Quality (SonarQube and pytest unit testing), Artifact Management (Artifactory), and Deployment (Ansible).
- Experience with agile development methodologies and building supportability into applications.
- Excellent analytical and problem-solving skills.
- Experience with developing frameworks and tools specific to AI/ML applications.
- Familiarity with cloud platforms and development in cloud environments.
- Ability to communicate clearly and effectively to a wide range of audiences (business stakeholders, developer and support teams).
- Self-starter, able to break down complex problems into smaller problems, manage dependencies, and efficiently drive through to a solution.
- Detail-oriented and highly organized.
- Adaptable to shifting and competing priorities.

Desired Skills:
- Experience in the Global Markets domain.

Work Timings: 11:30am to 8:30pm IST
Job Location: Chennai
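Illustrative aside, not part of the posting: a minimal sketch of the pytest-style unit tests this role's CI/CD stack (Jenkins, SonarQube, pytest) would run on every commit. The function under test is hypothetical.

```python
# Sketch: tests delivered alongside a story, per acceptance criteria.
import pytest

def normalize_ticker(raw: str) -> str:
    """Uppercase and strip an instrument ticker; reject empty input."""
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("empty ticker")
    return cleaned

def test_normalize_ticker_strips_and_uppercases():
    assert normalize_ticker("  aapl ") == "AAPL"

def test_normalize_ticker_rejects_empty():
    with pytest.raises(ValueError):
        normalize_ticker("   ")
```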

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Description
As a Data Engineer on the Data and AI team, you will design and implement robust data pipelines and infrastructure that power our organization's data-driven decisions and AI capabilities. This role is critical in developing and maintaining our enterprise-scale data processing systems that handle high-volume transactions while ensuring data security, privacy compliance, and optimal performance. You'll be part of a dynamic team that designs and implements comprehensive data solutions, from real-time processing architectures to secure storage solutions and privacy-compliant data access layers. The role involves close collaboration with cross-functional teams, including software development engineers, product managers, and scientists, to create data products that power critical business capabilities. You'll have the opportunity to work with leading technologies in cloud computing, big data processing, and machine learning infrastructure, while contributing to the development of robust data governance frameworks. If you're passionate about solving complex technical challenges in high-scale environments, thrive in a collaborative team setting, and want to make a lasting impact on our organization's data infrastructure, this role offers an exciting opportunity to shape the future of our data and AI capabilities.

Key job responsibilities
- Design and implement ETL/ELT frameworks that handle large-scale data operations, while building reusable components for data ingestion, transformation, and orchestration while ensuring data quality and reliability.
- Establish and maintain robust data governance standards by implementing comprehensive security controls, access management frameworks, and privacy-compliant architectures that safeguard sensitive information.
- Drive the implementation of data solutions, both real-time and batch, optimizing them for both analytical workloads and AI/ML applications.
- Lead technical design reviews and provide mentorship on data engineering best practices, identifying opportunities for architectural improvements and guiding the implementation of enhanced solutions.
- Build data quality frameworks with robust monitoring systems and validation processes to ensure data accuracy and reliability throughout the data lifecycle.
- Drive continuous improvement initiatives by evaluating and implementing new technologies and methodologies that enhance data infrastructure capabilities and operational efficiency.

A day in the life
The day often begins with a team stand-up to align priorities, followed by a review of data pipeline monitoring alarms to address any processing issues and ensure data quality standards are maintained across systems. Throughout the day, you'll find yourself immersed in various technical tasks, including developing and optimizing ETL/ELT processes, implementing data governance controls, and reviewing code for data processing systems. You'll work closely with software engineers, scientists, and product managers, participating in technical design discussions and sharing your expertise in data architecture and engineering best practices.

Your responsibilities extend to communicating with non-technical stakeholders, explaining data-related projects and their business impact. You'll also mentor junior engineers and contribute to maintaining comprehensive technical documentation. You'll troubleshoot issues that arise in the data infrastructure, optimize the performance of data pipelines, and ensure data security and compliance with relevant regulations. Staying updated on the latest data engineering technologies and best practices is crucial, as you'll be expected to incorporate new learnings into your work.

By the end of a typical day, you'll have advanced key data infrastructure initiatives, solved complex technical challenges, and improved the reliability, efficiency, and security of data systems. Whether it's implementing new data governance controls, optimizing data processing workflows, or enhancing data platforms to support new AI models, your work directly impacts the organization's ability to leverage data for critical business decisions and AI capabilities.

If you are not sure that every qualification on the list above describes you exactly, we'd still love to hear from you! At Amazon, we value people with unique backgrounds, experiences, and skillsets. If you’re passionate about this role and want to make an impact on a global scale, please apply!

About The Team
The Data and Artificial Intelligence (AI) team is a new function within Customer Engagement Technology. We own the end-to-end process of defining, building, implementing, and monitoring a comprehensive data strategy. We also develop and apply Generative Artificial Intelligence (GenAI), Machine Learning (ML), Ontology, and Natural Language Processing (NLP) to customer and associate experiences.

Basic Qualifications
- 3+ years of data engineering experience
- Bachelor’s degree in Computer Science, Engineering, or a related technical discipline

Preferred Qualifications
- Experience with AWS data services (Redshift, S3, Glue, EMR, Kinesis, Lambda, RDS) and understanding of IAM security frameworks
- Proficiency in designing and implementing logical data models that drive physical designs
- Hands-on experience working with large language models, including understanding of data infrastructure requirements for AI model training

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company: Amazon Dev Center India - Hyderabad
Job ID: A2996966
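Illustrative aside, not part of the posting: a minimal sketch of the "data quality frameworks with validation processes" named in the responsibilities above, a small pandas gate a pipeline could run before publishing a table. Column names and rules are hypothetical.

```python
# Sketch: null, uniqueness, and range checks before a table is published.
import pandas as pd

def run_quality_checks(df: pd.DataFrame) -> list[str]:
    failures = []
    if df["customer_id"].isna().any():
        failures.append("customer_id contains nulls")
    if df["customer_id"].duplicated().any():
        failures.append("customer_id is not unique")
    if not df["order_total"].between(0, 1_000_000).all():
        failures.append("order_total outside expected range")
    return failures

df = pd.DataFrame({"customer_id": [1, 2, 2], "order_total": [10.0, 25.5, -3.0]})
print(run_quality_checks(df))  # both the uniqueness and range checks fail
```

A production framework would emit these failures to monitoring (e.g., CloudWatch metrics) and block the publish step instead of printing.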

Posted 4 days ago

Apply

5.0 years

0 Lacs

India

On-site

At 32Co, we’re on a mission to revolutionise healthcare by empowering generalist clinicians to deliver specialist care, improving the standard of care for millions of people. We are transforming the clear aligner industry, enabling generalist dentists to offer specialist orthodontic treatments with confidence. With a proven model and strong traction, we have ambitious plans to expand into other healthcare verticals, bringing the same level of innovation and accessibility to new areas of medicine.

We’re an award-winning HealthTech company tackling a multi-billion dollar problem in the healthcare industry. Backed by top-tier investors behind Revolut, Citymapper, and Depop, we’re rapidly expanding our operations across the UK and beyond. With a strong foundation and growing momentum, now is the perfect time to join us and make a meaningful impact on global healthcare.

The Role
We are looking for a Senior DevOps Engineer with 5+ years of hands-on experience to join our growing, fast-paced engineering team. You’ll play a key role in designing, implementing, and maintaining scalable, secure, and highly available infrastructure. You’ll work closely with developers, QA, and other stakeholders to streamline CI/CD processes, improve system reliability, and lead infrastructure automation. If you are passionate about improving healthcare through technology and thrive in a fast-paced environment, we would love to hear from you.

Key Responsibilities
- Design, implement, and manage cloud infrastructure (AWS/Azure/GCP).
- Automate infrastructure provisioning using tools like Terraform, Ansible, or CloudFormation.
- Develop and maintain CI/CD pipelines (GitLab CI, GitHub Actions, Jenkins, etc.).
- Monitor and troubleshoot system reliability, performance, and security.
- Set up and manage container orchestration platforms (Kubernetes, ECS, etc.); see the sketch after this listing.
- Ensure high availability, scalability, and disaster recovery planning.
- Implement security best practices across environments.
- Mentor junior team members and lead DevOps initiatives.
- Collaborate with development and QA teams to optimise workflows and deployment cycles.
- Work closely with the support team to help on any technical issues.

Required Skills & Qualifications
- 5+ years of experience in a DevOps or Site Reliability Engineering role.
- Strong experience with cloud providers (AWS/GCP/Azure).
- Proficiency with IaC tools like Terraform, Ansible, or CloudFormation.
- Experience with Docker and container orchestration platforms like Kubernetes.
- Solid understanding of CI/CD tools such as GitLab CI, GitHub Actions, Jenkins, or ArgoCD.
- Hands-on experience with Linux-based systems.
- Strong scripting skills (Bash, Python, or Go).
- Experience with monitoring and logging tools (Prometheus, Grafana, ELK, Datadog, etc.).
- Knowledge of networking, DNS, CDN, firewalls, load balancers.
- Understanding of security, compliance, and governance in cloud environments.

Nice to Have
- Certifications: AWS Certified DevOps Engineer, CKA/CKAD, etc.
- Experience with serverless architectures.
- Exposure to GitOps principles and tools like Flux or ArgoCD.
- Experience in fast-paced startup or product environments.

Why Join Us
- Opportunity to work on a health technology product solving a multi-billion dollar problem.
- Be part of a collaborative and inclusive work environment with a culture of teamwork, innovation, and creativity.
- Have access to the latest tools and technologies.
- Opportunities for professional growth and career advancement.
- Join a company seed-funded by top VCs with a multicultural, ambitious, high-calibre team.
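Illustrative aside, not part of the posting: a minimal sketch of a Kubernetes reliability check using the official Python client, listing pods that are not in a healthy phase. It assumes a local kubeconfig; the alerting destination is left as a comment.

```python
# Sketch: flag pods that are not Running/Succeeded across all namespaces,
# the kind of signal an on-call DevOps engineer would page on.
from kubernetes import client, config

config.load_kube_config()            # or config.load_incluster_config() in-cluster
v1 = client.CoreV1Api()

unhealthy = [
    (pod.metadata.namespace, pod.metadata.name, pod.status.phase)
    for pod in v1.list_pod_for_all_namespaces().items
    if pod.status.phase not in ("Running", "Succeeded")
]

for ns, name, phase in unhealthy:
    print(f"{ns}/{name}: {phase}")   # route to Slack/PagerDuty in practice
```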

Posted 4 days ago

Apply

5.0 years

0 Lacs

Goa

On-site

This position is responsible for leading the development and maintenance of client-facing and internal applications primarily built using Python. The ideal candidate should have strong experience with backend development, a solid understanding of front-end technologies, and hands-on experience with deployment, environment configuration, testing, and debugging processes. You should be confident working across the full software development lifecycle and capable of mentoring other team members when needed.

Responsibilities
- Design, develop, and maintain web and console applications and RESTful APIs using Python (see the sketch after this listing)
- Participate in requirements gathering and contribute to technical design discussions
- Write clean, efficient, reusable, and scalable code using Python frameworks such as Django or Flask
- Refactor and debug code to improve application performance and maintainability
- Identify bottlenecks and bugs and implement effective solutions
- Write and update unit tests
- Deploy applications in development, staging, and production environments
- Create and maintain technical documentation throughout the software development lifecycle (SDLC)
- Collaborate with QA teams to ensure high performance, quality, and responsiveness of applications
- Mentor and guide junior team members on best practices, domain knowledge, and technology
- Review code to ensure quality, maintainability, performance, and compliance with requirements
- Stay up to date with client tech stacks and continuously explore new technologies relevant to the product or domain
- Use AI tools where applicable to improve the development lifecycle
- Monitor production applications for consistency and performance

The ideal candidate should have the following skills and experience:

Technical Qualifications
- Experience with Python and frameworks like Django and/or Flask
- Experience in designing and developing RESTful APIs
- Knowledge of RDBMS (e.g., PostgreSQL, MySQL) and operating system concepts
- Experience with API testing tools such as Postman or JMeter
- Understanding of cloud platforms (AWS, Azure, or GCP) and commonly used cloud services in application development
- Familiarity with containerization tools such as Docker and orchestration platforms like Kubernetes
- Familiarity with front-end technologies such as HTML, CSS, and modern JavaScript frameworks like Angular or React
- Familiarity with GenAI implementation, LLMs, and RAG systems

Personal Skills
- Ability to analyze problems and develop practical, scalable solutions
- Ability to communicate clearly and effectively with technical and non-technical stakeholders
- Ability to work independently with minimal supervision and collaborate as part of a team
- Ability to quickly learn and adapt to new technologies, tools, and frameworks
- Ability to manage time effectively across multiple tasks and meet deadlines
- Ability to maintain strong attention to detail in both coding and documentation
- Ability to demonstrate a professional attitude, strong work ethic, and a desire for continuous improvement

Education and Work Experience
- Bachelor's degree in Computer Science, Information Technology, or a related technical discipline is preferred
- Minimum of 5 years of relevant experience in Python-based development
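Illustrative aside, not part of the posting: a minimal Flask sketch of the API-plus-unit-test loop described above, one endpoint exercised through Flask's built-in test client. The route and payload are hypothetical.

```python
# Sketch: a tiny REST endpoint and a test that drives it in-process.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.post("/api/v1/echo")
def echo():
    payload = request.get_json(silent=True) or {}
    return jsonify({"received": payload.get("message", "")}), 200

def test_echo_roundtrip():
    # Flask's test client lets unit tests hit the route without a server.
    resp = app.test_client().post("/api/v1/echo", json={"message": "hi"})
    assert resp.status_code == 200
    assert resp.get_json() == {"received": "hi"}
```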

Posted 4 days ago

Apply

3.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY GDS – Data and Analytics D&A – SSIS- Senior We’re looking for Informatica or SSIS Engineers with Cloud Background (AWS, Azure) Primary skills: Has played key roles in multiple large global transformation programs on business process management Experience in database query using SQL Should have experience working on building/integrating data into a data warehouse. Experience in data profiling and reconciliation Informatica PowerCenter/IBM-DataStage/ SSIS development Strong proficiency in SQL/PLSQL Good experience in performance tuning ETL workflows and suggest improvements. Developed expertise in complex data management or Application integration solution and deployment in areas of data migration, data integration, application integration or data quality. Experience in data processing, orchestration, parallelization, transformations and ETL Fundamentals. Leverages on variety of programming languages & data crawling/processing tools to ensure data reliability, quality & efficiency (optional) Experience in Cloud Data-related tool (Microsoft Azure, Amazon S3 or Data lake) Knowledge on Cloud infrastructure and knowledge on Talend cloud is an added advantage Knowledge of data modelling principles. Knowledge in Autosys scheduling Good experience in database technologies. Good knowledge in Unix system Responsibilities: Need to work as a team member to contribute in various technical streams of Data integration projects. Provide product and design level technical best practices Interface and communicate with the onsite coordinators Completion of assigned tasks on time and regular status reporting to the lead Building a quality culture Use an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates Strong communication, presentation and team building skills and experience in producing high quality reports, papers, and presentations. Experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint. Qualification: BE/BTech/MCA (must) with an industry experience of 3 -7 years. Experience in Talend jobs, joblets and customer components. Should have knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as sqoop, Impala, hive, Yarn, Spark etc. Informatica PowerCenter/IBM-DataStage/ SSIS development Strong proficiency in SQL/PLSQL Good experience in performance tuning ETL workflows and suggest improvements. Atleast experience of minimum 3-4 clients for short duration projects ranging between 6-8 + months OR Experience of minimum 2+ clients for duration of projects ranging between 1-2 years or more than that People with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. 
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
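For context on the data profiling and reconciliation skill this posting asks for, here is a minimal illustrative Python sketch of a source-to-target reconciliation check. It is not EY tooling; the table contents and column names are hypothetical placeholders, and in practice the two frames would come from SQL queries against each system.

```python
import pandas as pd

# Hypothetical extracts from a source system and a target warehouse table.
source = pd.DataFrame({"order_id": [1, 2, 3, 4], "amount": [10.0, 20.0, 30.0, 40.0]})
target = pd.DataFrame({"order_id": [1, 2, 3], "amount": [10.0, 20.0, 31.0]})

# Row-count reconciliation: totals should match after a load.
print(f"source rows={len(source)}, target rows={len(target)}")

# Key-level reconciliation: find rows missing from the target.
merged = source.merge(target, on="order_id", how="left", suffixes=("_src", "_tgt"))
missing = merged[merged["amount_tgt"].isna()]
print("missing keys:", missing["order_id"].tolist())

# Value reconciliation: flag amounts that drifted during the load.
loaded = merged.dropna(subset=["amount_tgt"])
mismatched = loaded[loaded["amount_src"] != loaded["amount_tgt"]]
print("mismatched keys:", mismatched["order_id"].tolist())
```

Real reconciliation jobs apply the same three checks (counts, keys, values) at scale, usually via set-based SQL rather than in-memory frames.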

Posted 4 days ago

Apply

6.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Data Integration Specialist – Senior

The opportunity: We are seeking a talented and experienced Integration Specialist with 3-6 years of experience to join our growing Digital Integration team. The ideal candidate will play a pivotal role in designing, building, and deploying scalable and secure solutions that support business transformation, system integration, and automation initiatives across the enterprise.

Your Key Responsibilities: Work with clients to assess existing integration landscapes and recommend modernization strategies using MuleSoft. Translate business requirements into technical designs, reusable APIs, and integration patterns. Develop, deploy, and manage MuleSoft APIs and integrations on Anypoint Platform (CloudHub, Runtime Fabric, Hybrid). Collaborate with business and IT stakeholders to define integration standards, SLAs, and governance models. Implement error handling, logging, monitoring, and alerting using Anypoint Monitoring and third-party tools. Maintain integration artifacts and documentation, including RAML specifications, flow diagrams, and interface contracts. Ensure performance tuning, scalability, and security best practices are followed across integration solutions. Support CI/CD pipelines, version control, and DevOps processes for MuleSoft assets using platforms like Azure DevOps or GitLab. Collaborate with cross-functional teams (Salesforce, SAP, Data, Cloud, etc.) to deliver end-to-end connected solutions. Stay current with MuleSoft platform capabilities and industry integration trends to recommend improvements and innovations. Troubleshoot integration issues and perform root cause analysis in production and non-production environments. Contribute to internal knowledge-sharing, technical mentoring, and process optimization. Strong SQL, data integration and data handling skills. Exposure to AI models and Python, and to using them in data cleaning/standardization (a short sketch follows this posting).

To qualify for the role, you must have: 3-6 years of hands-on experience with MuleSoft Anypoint Platform and Anypoint Studio. Strong experience with API-led connectivity and reusable API design (System, Process, Experience layers). Proficiency in DataWeave transformations, flow orchestration, and integration best practices. Experience with API lifecycle management including design, development, publishing, governance, and monitoring. Solid understanding of integration patterns (synchronous, asynchronous, event-driven, batch). Hands-on experience with security policies, OAuth, JWT, client ID enforcement, and TLS. Experience working with cloud platforms (Azure, AWS, or GCP) in the context of integration projects. Knowledge of performance tuning, capacity planning, and error handling in MuleSoft integrations. Experience in DevOps practices including CI/CD pipelines, Git branching strategies, and automated deployments. Experience with data intelligence cloud platforms like Snowflake, Azure, and Databricks.

Ideally, you’ll also have: MuleSoft Certified Developer or Integration Architect certification. Exposure to monitoring and logging tools (e.g., Splunk, Elastic, Anypoint Monitoring). Strong communication and interpersonal skills to work with technical and non-technical stakeholders. Ability to document integration requirements, user stories, and API contracts clearly and concisely. Experience in agile environments and comfort working across multiple concurrent projects. Ability to mentor junior developers and contribute to reusable component libraries and coding standards.

What Working at EY Offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
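To ground the data cleaning/standardization point above, a minimal pandas sketch under assumed data: the column names and normalization rules are hypothetical illustrations, not a prescribed EY or MuleSoft approach.

```python
import pandas as pd

# Hypothetical customer records arriving from two integrated systems.
raw = pd.DataFrame({
    "email": [" Alice@Example.COM ", "bob@example.com", None],
    "country": ["IN", "India", "india"],
    "phone": ["+91 98765-43210", "9876543210", ""],
})

# Standardize emails: trim whitespace and lowercase.
raw["email"] = raw["email"].str.strip().str.lower()

# Map country variants onto one canonical code.
country_map = {"india": "IN", "in": "IN"}
raw["country"] = raw["country"].str.lower().map(country_map).fillna(raw["country"])

# Keep digits only in phone numbers; blank out empty values.
raw["phone"] = raw["phone"].str.replace(r"\D", "", regex=True).replace("", pd.NA)

print(raw)
```

In an integration context the same normalization rules would typically live in DataWeave or a reusable transformation API so every system receives canonical records.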

Posted 4 days ago

Apply

8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.

EY Consulting - Data and Analytics – Manager - Data Integration Architect – Medidata Platform Integration

EY’s Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services, leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY’s financial services practice provides integrated Consulting services to financial institutions and other capital markets participants, including commercial banks, retail banks, investment banks, broker-dealers and asset management firms, and insurance firms from leading Fortune 500 companies. Within EY’s Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way we help create a compelling business case for embedding the right analytical practice at the heart of clients’ decision-making.

The opportunity: We’re looking for an experienced Data Integration Architect with 8+ years in clinical or life sciences domains to lead the integration of Medidata platforms into enterprise clinical trial systems. This role offers the chance to design scalable, compliant data integration solutions, collaborate across global R&D systems, and contribute to data-driven innovation in the healthcare and life sciences space. You will play a key role in aligning integration efforts with organizational architecture and compliance standards while engaging with stakeholders to ensure successful project delivery.

Your Key Responsibilities: Design and implement scalable integration solutions for large-scale clinical trial systems involving Medidata platforms. Ensure integration solutions comply with regulatory standards such as GxP and CSV. Establish and maintain seamless system-to-system data exchange using middleware platforms (e.g., Apache Kafka, Informatica) or direct API interactions (a minimal middleware sketch follows this posting). Collaborate with cross-functional business and IT teams to gather integration requirements and translate them into technical specifications. Align integration strategies with enterprise architecture and data governance frameworks. Provide support to program management through data analysis, integration status reporting, and risk assessment contributions. Interface with global stakeholders to ensure smooth integration delivery and resolve technical challenges. Mentor junior team members and contribute to knowledge sharing and internal learning initiatives. Participate in architectural reviews and provide recommendations for continuous improvement and innovation in integration approaches. Support business development efforts by contributing to solution proposals, proofs of concept (POCs), and client presentations.

Skills and Attributes for Success: Use a solution-driven approach to design and implement compliant integration strategies for clinical data platforms like Medidata. Strong communication, stakeholder engagement, and documentation skills, with experience presenting complex integration concepts clearly. Proven ability to manage system-to-system data flows using APIs or middleware, ensuring alignment with enterprise architecture and regulatory standards.

To qualify for the role, you must have: Experience: Minimum 8 years in data integration or architecture roles, with a strong preference for experience in clinical research or life sciences domains. Education: Must be a graduate, preferably BE/B.Tech/BCA/B.Sc IT. Technical Skills: Hands-on expertise in one or more integration platforms such as Apache Kafka, Informatica, or similar middleware technologies; experience in implementing API-based integrations. Domain Knowledge: In-depth understanding of clinical trial data workflows, integration strategies, and regulatory frameworks including GxP and CSV compliance. Soft Skills: Strong analytical thinking, effective communication, and stakeholder management skills with the ability to collaborate across business and technical teams. Additional Attributes: Ability to work independently in a fast-paced environment, lead integration initiatives, and contribute to solution design and architecture discussions.

Ideally, you’ll also have: Hands-on experience with ETL tools and clinical data pipeline orchestration frameworks. Familiarity with broader clinical R&D platforms such as Oracle Clinical, RAVE, or other EDC systems. Prior experience leading small integration teams and working directly with cross-functional stakeholders in regulated environments.

What We Look For: A team of people with commercial acumen, technical experience and enthusiasm to learn new things in this fast-moving environment. An opportunity to be part of a market-leading, multi-disciplinary team of 1,400+ professionals, in the only integrated global transaction business worldwide. Opportunities to work with EY Consulting practices globally with leading businesses across a range of industries.

What working at EY offers: At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: Support, coaching and feedback from some of the most engaging colleagues around. Opportunities to develop new skills and progress your career. The freedom and flexibility to handle your role in a way that’s right for you.

About EY: As a global leader in assurance, tax, transaction and Consulting services, we’re using the finance products, expertise and systems we’ve developed to build a better working world. That starts with a culture that believes in giving you the training, opportunities and creative freedom to make things better. Whenever you join, however long you stay, the exceptional EY experience lasts a lifetime. And with a commitment to hiring and developing the most passionate people, we’ll make our ambition to be the best employer by 2020 a reality. If you can confidently demonstrate that you meet the criteria above, please contact us as soon as possible. Join us in building a better working world.
Apply now

EY | Building a better working world

EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
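As a hedged illustration of the middleware-based, system-to-system exchange this role describes, here is a minimal Python sketch using the kafka-python client. The broker address, topic name, and record shape are hypothetical placeholders, not Medidata’s actual interfaces.

```python
import json
from kafka import KafkaProducer  # pip install kafka-python

# Hypothetical broker; real clinical deployments use secured, audited clusters.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

# A made-up clinical-trial status event flowing between systems.
event = {"study_id": "STUDY-001", "site_id": "SITE-42", "status": "enrolled"}
producer.send("clinical.study.events", value=event)

# Block until the broker acknowledges the write.
producer.flush()
producer.close()
```

In a GxP setting the same flow would add schema validation, audit logging, and delivery guarantees before any downstream system consumes the event.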

Posted 4 days ago

Apply

0.0 - 1.0 years

1 - 2 Lacs

Cochin

On-site

We are looking for a skilled Junior DevOps Engineer to join our team and help us streamline our development and deployment processes. In this role, you will work closely with software developers, IT operations, and system administrators to build and maintain scalable infrastructure, automate deployment pipelines, and ensure the reliability and efficiency of our systems. You will play a key role in implementing best practices for continuous integration and continuous deployment (CI/CD), monitoring, and cloud services.

Experience: 0-1 years as a DevOps Engineer. Location: Kochi, Infopark Phase II. Immediate joiners preferred.

Key Responsibility Areas: Exposure to foundational version control systems such as Git, SVN (Subversion), and Mercurial. Experience with CI/CD tools such as Jenkins, Travis CI, CircleCI, and GitLab CI/CD. Proficiency in configuration management tools such as Ansible, Puppet, Chef, and SaltStack. Knowledge of containerization platforms such as Docker and container orchestration tools like Kubernetes. Exposure to Infrastructure as Code (IaC) tools like Terraform, AWS CloudFormation, Azure Resource Manager, and Google Cloud Deployment Manager. Experience with monitoring and logging solutions such as Prometheus, Grafana, the ELK Stack (Elasticsearch, Logstash, Kibana), Splunk, and Datadog (a small monitoring sketch follows this posting). Knowledge of collaboration and communication platforms such as Slack and Atlassian Jira.

Qualifications: Bachelor’s degree in Computer Science, Information Technology, or a related field. Proven experience as a DevOps Engineer or in a similar role.

Job Types: Full-time, Permanent. Pay: ₹15,000.00 - ₹20,000.00 per month. Benefits: Health insurance, Provident Fund. Schedule: Day shift, Monday to Friday. Supplemental Pay: Performance bonus, Yearly bonus. Application Question(s): Are you willing to relocate to Kochi? What is your notice period? Work Location: In person
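For the monitoring responsibility above, a minimal sketch of the kind of health-check script a junior DevOps engineer might automate, assuming a hypothetical service endpoint; production setups would rely on Prometheus exporters or Datadog agents rather than ad hoc polling.

```python
import time
import requests  # pip install requests

# Hypothetical endpoint; replace with your service's real health check.
HEALTH_URL = "http://localhost:8080/healthz"

def check_once() -> bool:
    """Return True if the service answers 200 within 2 seconds."""
    try:
        return requests.get(HEALTH_URL, timeout=2).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    failures = 0
    while True:
        if check_once():
            failures = 0
        else:
            failures += 1
            # After 3 consecutive failures, raise an alert (stub: print).
            if failures >= 3:
                print("ALERT: service unhealthy for 3 consecutive checks")
        time.sleep(30)  # poll every 30 seconds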

Posted 4 days ago

Apply

3.0 years

5 - 10 Lacs

Kazhakuttam

On-site

About the Role: You will architect, build and maintain end-to-end data pipelines that ingest 100 GB+ of NGINX/web-server logs from Elasticsearch, transform them into high-quality features, and surface actionable insights and visualisations for security analysts and ML models. Acting as both a Data Engineer and a Behavioural Data Analyst, you will collaborate with security, AI and frontend teams to ensure low-latency data delivery, rich feature sets and compelling dashboards that spot anomalies in real time.

Key Responsibilities:
ETL & Pipeline Engineering: Design and orchestrate scalable batch/near-real-time ETL workflows to extract raw logs from Elasticsearch (a condensed sketch follows this posting). Clean, normalize and partition logs for long-term storage and fast retrieval. Optimize Elasticsearch indices, queries and retention policies for performance and cost.
Feature Engineering & Feature Store: Assist in the development of robust feature-engineering code in Python and/or PySpark. Define schemas and loaders for a feature store (Feast or similar). Manage historical back-fills and real-time feature look-ups, ensuring versioning and reproducibility.
Behaviour & Anomaly Analysis: Perform exploratory data analysis (EDA) to uncover traffic patterns, bursts, outliers and security events across IPs, headers, user agents and geo data. Translate findings into new or refined ML features and anomaly indicators.
Visualisation & Dashboards: Create time-series, geo-distribution and behaviour-pattern visualisations for internal dashboards. Partner with frontend engineers to test UI requirements.
Monitoring & Scaling: Implement health and latency monitoring for pipelines; automate alerts and failure recovery. Scale infrastructure to support rapidly growing log volumes.
Collaboration & Documentation: Work closely with ML, security and product teams to align data strategy with platform goals. Document data lineage, dictionaries, transformation logic and behavioural assumptions.

Minimum Qualifications: Education – Bachelor’s or Master’s in Computer Science, Data Engineering, Analytics, Cybersecurity or a related field. Experience – 3+ years building data pipelines and/or performing data analysis on large log datasets. Core Skills – Python (pandas, NumPy, elasticsearch-py, Matplotlib, Plotly, Seaborn; PySpark desirable); Elasticsearch and ELK stack query optimisation; SQL for ad-hoc analysis; workflow orchestration (Apache Airflow, Prefect or similar); data modelling, versioning and time-series handling; familiarity with visualisation tools (Kibana, Grafana). DevOps – Docker, Git, CI/CD best practices.

Nice-to-Have: Kafka, Fluentd or Logstash experience for high-throughput log streaming. Web-server log expertise (NGINX/Apache, HTTP semantics). Cloud data platform deployment on AWS/GCP/Azure. Hands-on exposure to feature stores (Feast, Tecton) and MLOps. Prior work on anomaly-detection or cybersecurity analytics systems.

Why Join Us? You’ll sit at the nexus of data engineering and behavioural analytics, turning raw traffic logs into the lifeblood of a cutting-edge AI security product. If you thrive on building resilient pipelines and diving into the data to uncover hidden patterns, we’d love to meet you.

Job Type: Full-time. Pay: ₹500,000.00 - ₹1,000,000.00 per year. Benefits: Health insurance, Provident Fund. Schedule: Day shift, Monday to Friday. Work Location: In person
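A condensed, illustrative sketch of the pipeline’s first two steps: pulling recent NGINX logs from Elasticsearch and deriving a simple per-IP anomaly indicator. The index name, field names, and threshold are assumptions for illustration, not the product’s actual schema.

```python
import pandas as pd
from elasticsearch import Elasticsearch  # pip install elasticsearch

# Hypothetical cluster and index; real deployments use secured endpoints.
es = Elasticsearch("http://localhost:9200")

# Fetch a batch of web-server log documents from the last 15 minutes.
resp = es.search(
    index="nginx-logs",
    query={"range": {"@timestamp": {"gte": "now-15m"}}},
    size=10_000,
)
rows = [hit["_source"] for hit in resp["hits"]["hits"]]
df = pd.DataFrame(rows)

# Feature: request count per client IP in the window (field name assumed).
counts = df.groupby("client_ip").size().rename("requests")

# Simple anomaly indicator: z-score of request volume across IPs.
z = (counts - counts.mean()) / counts.std()
suspects = z[z > 3].sort_values(ascending=False)
print(suspects)
```

A production workflow would run this as a scheduled Airflow task, write the derived features to a feature store, and feed the suspect list to dashboards and ML models.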

Posted 4 days ago

Apply

1.0 - 2.0 years

4 - 8 Lacs

Thiruvananthapuram

On-site

What you’ll do: Work with software development engineers to understand the overall technical architecture and how each feature is implemented. Use creative problem-solving skills to assist in technical troubleshooting and analysis for BU-reported issues in JIRA. Monitor and maintain systems/applications, looking for opportunities to optimize and improve them. Establish technical proficiency in design, implementation and unit testing. Respond to assigned tickets/tasks in accordance with SLA guidelines. Handle customer requests, incidents and inbound calls; apply diagnostic utilities and best-practice methodology to aid in troubleshooting. Update technical support documentation when required, and perform post-resolution follow-ups on help requests. Perform hands-on fixes at the application level, including installing and upgrading software, implementing file backups, and configuring systems and applications. Escalate any unresolved issues to Tier 3. Collaborate with internal and external teams and stakeholders to drive progress across multiple action items and initiatives. Work effectively in an Agile environment; provide training or demos on new and existing features.

What experience you need: BS or MS degree in a STEM major or equivalent job experience required. Minimum 1-2 years of software engineering experience. Self-starter who identifies and responds to priority shifts with minimal supervision. Software testing, performance, and quality engineering techniques and strategies. Cloud technology: GCP, AWS, or Azure. Experience in troubleshooting and monitoring infrastructure and application uptime and availability to ensure functional and performance objectives. Demonstrable cross-functional knowledge of systems, storage, networking, security and databases. System administration skills, including automation and orchestration of Linux/Windows using Chef, Puppet, Ansible, SaltStack and/or containers (Docker, Kubernetes, etc.). Cloud certification strongly preferred.

What could set you apart: 1-2 years of support engineering experience. Strong communication skills. Diagnose and resolve technical issues related to software, hardware, or network systems. Analyze logs, error messages, and system data to identify root causes (a small log-triage sketch follows this posting). Provide timely and effective solutions to users or customers. Interact with users or customers through various channels (phone, email, chat, etc.). Guide users through troubleshooting steps and solutions. Escalate complex issues to appropriate teams or specialists. Maintain accurate records of support interactions and solutions. Contribute to knowledge bases and support documentation. Share technical expertise with team members and colleagues.
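To make the log-analysis point above concrete, a minimal stdlib-only Python sketch that tallies error signatures from an application log; the log path and line format are hypothetical assumptions.

```python
import re
from collections import Counter

# Hypothetical log format: "2024-01-01 12:00:00 ERROR ModuleName: message"
ERROR_RE = re.compile(r"\bERROR\s+(\w+):")

def top_error_sources(log_path: str, n: int = 5) -> list[tuple[str, int]]:
    """Count ERROR lines per module to point at likely root causes."""
    counts: Counter[str] = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = ERROR_RE.search(line)
            if match:
                counts[match.group(1)] += 1
    return counts.most_common(n)

if __name__ == "__main__":
    for module, count in top_error_sources("app.log"):
        print(f"{module}: {count} errors")
```

The same triage idea scales up to Splunk or ELK queries; the point is ranking error sources before diving into individual tickets.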

Posted 4 days ago

Apply

4.0 years

10 - 12 Lacs

Hyderābād

On-site

Job Title: Senior Backend Developer

Required Skills: CI/CD pipelines, Kubernetes, Java or Python with Spring Boot or Django/Flask, excellent communication and stakeholder management.

1. 4+ years of software development experience. 2. Strong experience with Kubernetes, Docker, and CI/CD pipelines in cloud-native environments. 3. Hands-on with NATS for event-driven architecture and streaming (see the sketch following this posting). 4. Skilled in microservices, RESTful APIs, and containerized app performance optimization. 5. Strong in problem-solving, team collaboration, clean code practices, and continuous learning. 6. Proficient in Java (Spring Boot) and Python (Flask) for building scalable applications and APIs. 7. Focus: Java, Python, Kubernetes, cloud-native development.

Position Overview: We are seeking a skilled developer to join our engineering team. The ideal candidate will have strong expertise in the Java and Python ecosystems, with hands-on experience in modern web technologies, messaging systems, and cloud-native development using Kubernetes.

Key Responsibilities: Design, develop, and maintain scalable applications using Java and the Spring Boot framework. Build robust web services and APIs using Python and the Flask framework. Implement event-driven architectures using the NATS messaging server. Deploy, manage, and optimize applications in Kubernetes environments. Develop microservices following best practices and design patterns. Collaborate with cross-functional teams to deliver high-quality software solutions. Write clean, maintainable code with comprehensive documentation. Participate in code reviews and contribute to technical architecture decisions. Troubleshoot and optimize application performance in containerized environments. Implement CI/CD pipelines and follow DevOps best practices.

Required Qualifications: Bachelor's degree in Computer Science, Information Technology, or a related field. 4+ years of experience in software development. Strong proficiency in Java with a deep understanding of the web technology stack. Hands-on experience developing applications with the Spring Boot framework. Solid understanding of the Python programming language with practical Flask framework experience. Working knowledge of the NATS server for messaging and streaming data. Experience deploying and managing applications in Kubernetes. Understanding of microservices architecture and RESTful API design. Familiarity with containerization technologies (Docker). Experience with version control systems (Git).

Skills & Competencies: Java (Spring Boot, Spring Cloud, Spring Security). Python (Flask, SQLAlchemy, REST APIs). NATS messaging patterns (pub/sub, request/reply, queue groups). Kubernetes (deployments, services, ingress, ConfigMaps, Secrets). Web technologies (HTTP, REST, WebSocket, gRPC). Container orchestration and management.

Soft Skills: Problem-solving and analytical thinking. Strong communication and collaboration. Self-motivated with the ability to work independently. Attention to detail and code quality. Continuous learning mindset. Team player with mentoring capabilities.

Required Experience: 4 to 7 years. Education and Pass-out Criteria: Bachelor's degree in Computer Science, Information Technology, or a related field. Compensation: 10-12 LPA. Work Timings: 9:30 AM to 7:00 PM. Working Conditions: Work from Office. Number of Openings: 10. Job Type: Full-time. Pay: ₹1,000,000.00 - ₹1,200,000.00 per year. Location Type: In-person. Schedule: Day shift. Work Location: In person
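A hedged sketch of the NATS pub/sub and request/reply patterns the posting names, using the nats-py client in Python; the subject names and server address are placeholders, not this employer’s actual topology.

```python
import asyncio
import nats  # pip install nats-py

async def main():
    # Hypothetical local server; production would use a clustered URL.
    nc = await nats.connect("nats://localhost:4222")

    # Request/reply: a responder service answering on a subject.
    async def responder(msg):
        await msg.respond(b"pong")

    await nc.subscribe("svc.ping", cb=responder)

    # Pub/sub: fire-and-forget event publication.
    await nc.publish("events.user.signup", b'{"user": "alice"}')

    # Request/reply from the caller's side, with a timeout.
    reply = await nc.request("svc.ping", b"ping", timeout=1)
    print(reply.data)  # b"pong"

    await nc.drain()

asyncio.run(main())
```

Queue groups, the third pattern the posting lists, are a subscribe-time option that load-balances one subject across multiple worker instances.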

Posted 4 days ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

We are seeking a highly skilled and experienced Java Developer to join our engineering team. The ideal candidate will be responsible for designing, developing, and maintaining high-performance, scalable, and reliable applications. This role requires a strong understanding of core Java principles, modern frameworks like Spring Boot, microservices architecture, and cloud technologies. You will play a key role in building the next generation of our platform.

Key Responsibilities: Design, develop, and implement robust and scalable backend services using Java, Spring, and Spring Boot. Build and maintain microservices-based applications, ensuring high availability, performance, and fault tolerance. Utilize Object-Oriented Programming (OOP) principles and design patterns to write clean, reusable, and maintainable code. Develop multi-threaded applications to handle concurrent processes and optimize performance. Integrate with messaging systems like Apache Kafka for real-time data processing and asynchronous communication (a consumer-side sketch follows this posting). Work with cloud services, primarily AWS, for deploying and managing applications (e.g., EC2, S3, RDS). Design and interact with relational databases, writing complex SQL queries and optimizing database performance. Collaborate with cross-functional teams, including product managers, designers, and other engineers, to define, design, and ship new features. Write and execute unit, integration, and end-to-end tests to ensure code quality and reliability. Participate in code reviews, providing constructive feedback to peers to maintain high coding standards.

Required Skills and Qualifications: Bachelor’s degree in Computer Science, Engineering, or a related field, or equivalent practical experience. 5-8+ years of experience in software development with a strong focus on Java. Expertise in core Java, including a deep understanding of Object-Oriented Programming (OOP) principles and concurrent programming (multi-threading). Extensive hands-on experience with the Spring and Spring Boot frameworks. Solid experience in designing and developing microservices-based architectures. Proven experience working with messaging systems, particularly Apache Kafka. Hands-on experience with AWS services for building and deploying applications. Proficiency in database technologies, including writing efficient SQL queries. Strong understanding of version control systems (e.g., Git). Experience with build tools like Maven or Gradle. Excellent problem-solving skills and the ability to work independently or as part of a team. Strong communication and collaboration skills.

Preferred Skills (Nice to Have): Experience with containers and orchestration tools like Docker and Kubernetes. Familiarity with CI/CD pipelines (e.g., Jenkins, GitLab CI). Knowledge of other databases (e.g., NoSQL databases like MongoDB). Experience with RESTful API design.
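The role is Java-centric, but as a compact, language-neutral illustration of the Kafka consumption pattern it describes, here is a hedged Python sketch using kafka-python; the topic, consumer group, and broker address are placeholder assumptions, and a Spring Boot service would declare the equivalent with a Spring Kafka listener.

```python
import json
from kafka import KafkaConsumer  # pip install kafka-python

# Hypothetical topic and broker; a Spring Boot service would declare
# the equivalent with @KafkaListener.
consumer = KafkaConsumer(
    "orders.events",
    bootstrap_servers="localhost:9092",
    group_id="order-processor",
    auto_offset_reset="earliest",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Real handlers would be idempotent so redelivered messages stay safe.
    print(f"partition={message.partition} offset={message.offset} event={event}")
```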

Posted 4 days ago

Apply

7.0 years

0 Lacs

Hyderābād

Remote

R020794 | Hyderabad, Telangana, India | Engineering | Regular. Location Details: Pune, Maharashtra.

At GoDaddy the future of work looks different for each team. Some teams work in the office full-time, others have a hybrid arrangement (they work remotely some days and in the office some days), and some work entirely remotely. This is a hybrid position. You’ll divide your time between working remotely from your home and an office, so you should live within commuting distance. Hybrid teams may work in-office as much as a few times a week or as little as once a month or quarter, as decided by leadership. The hiring manager can share more about what hybrid work might look like for this team.

Join our team: The Customer Engagement Data team within the larger Customer and Site organization is a growing platform team dedicated to the data used in our campaign and messaging platforms. This data must be timely, accurate, and as up-to-date as possible, as it drives real-time messaging and notifications, along with outbound communications (email, SMS, etc.) to our customers. This includes messaging and notifications that enhance the customer experience, drive engagement, and support business objectives. This position will have complete ownership of our Customer Engagement Platform data pipeline and its integration with both internal and external systems. Our data platform must have top-notch observability, with end-to-end system monitoring, platform availability, and reporting across the entire pipeline (for example, visibility of data feeds to and from sources and data source-specific SLAs). If you have a solid technical foundation and are passionate about data platform architecture, we’d love to have you join us in our mission to build scalable data, marketing and messaging platforms to meet the needs of our teams who communicate with our ~21 million customers.

What you'll get to do… As a software engineer focused on Marketing and Customer Engagement at GoDaddy, you will have the opportunity to design, build, and maintain a platform that is a keystone to our customer experience, marketing, and business objectives. Everything we do starts with data. Ensure our team continues with a “Shift Left” focus on security, including the design and development of systems that can contain sensitive customer information. You will partner closely and collaborate with other GoDaddy teams of engineers, marketing professionals, QA and operations teams. Leverage industry best practices and methodologies such as Agile, Scrum, testing automation, and continuous integration and deployment.

Your experience should include… 7+ years in software engineering, with 4+ years using AWS. Programming languages: C# and Python, along with SQL and Spark. The engineering position requires a minimum three-hour overlap with team members in the US-Pacific time zone. Strong experience with some (or all) of the following: Lambda and Step Functions, API Gateway, Fargate, ECS, S3, SQS, Kinesis, Firehose, DynamoDB, RDS, Athena, and Glue (a minimal serverless sketch follows this posting). A solid foundation in data structures and algorithms, and in-depth knowledge of and passion for coding standards and proven design patterns; RESTful and GraphQL APIs are examples.

You might also have... DevOps experience is a plus: GitHub, GitHub Actions, Docker. Experience building CI/CD and server/deployment automation solutions, and container orchestration technologies.

We've got your back... 
We offer a range of total rewards that may include paid time off, retirement savings (e.g., 401k, pension schemes), bonus/incentive eligibility, equity grants, participation in our employee stock purchase plan, competitive health benefits, and other family-friendly benefits including parental leave. GoDaddy’s benefits vary based on individual role and location and can be reviewed in more detail during the interview process. We also embrace our diverse culture and offer a range of Employee Resource Groups (Culture). Have a side hustle? No problem. We love entrepreneurs! Most importantly, come as you are and make your own way. About us... GoDaddy is empowering everyday entrepreneurs around the world by providing the help and tools to succeed online, making opportunity more inclusive for all. GoDaddy is the place people come to name their idea, build a professional website, attract customers, sell their products and services, and manage their work. Our mission is to give our customers the tools, insights, and people to transform their ideas and personal initiative into success. To learn more about the company, visit About Us. At GoDaddy, we know diverse teams build better products—period. Our people and culture reflect and celebrate that sense of diversity and inclusion in ideas, experiences and perspectives. But we also know that’s not enough to build true equity and belonging in our communities. That’s why we prioritize integrating diversity, equity, inclusion and belonging principles into the core of how we work every day—focusing not only on our employee experience, but also our customer experience and operations. It’s the best way to serve our mission of empowering entrepreneurs everywhere, and making opportunity more inclusive for all. To read more about these commitments, as well as our representation and pay equity data, check out our Diversity and Pay Parity annual report which can be found on our Diversity Careers page. GoDaddy is proud to be an equal opportunity employer . GoDaddy will consider for employment qualified applicants with criminal histories in a manner consistent with local and federal requirements. Refer to our full EEO policy. Our recruiting team is available to assist you in completing your application. If they could be helpful, please reach out to myrecruiter@godaddy.com. GoDaddy doesn’t accept unsolicited resumes from recruiters or employment agencies.
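As a hedged illustration of the serverless, event-driven pattern the stack above implies, here is a minimal AWS Lambda handler in Python that consumes SQS messages and persists them to DynamoDB via boto3. The table name and key schema are hypothetical, not GoDaddy’s actual pipeline.

```python
import json
import boto3  # available by default in the AWS Lambda Python runtime

# Hypothetical table; created and wired to the function via IaC in practice.
table = boto3.resource("dynamodb").Table("customer-engagement-events")

def handler(event, context):
    """Triggered by SQS: persist each customer-engagement event."""
    for record in event["Records"]:
        body = json.loads(record["body"])
        table.put_item(
            Item={
                "customer_id": body["customer_id"],  # partition key (assumed)
                "event_ts": body["event_ts"],        # sort key (assumed)
                "channel": body.get("channel", "email"),
            }
        )
    return {"processed": len(event["Records"])}
```

Step Functions would typically orchestrate several such functions into a larger messaging workflow, with CloudWatch providing the observability the posting emphasizes.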

Posted 4 days ago

Apply

5.0 years

7 - 9 Lacs

Hyderābād

On-site

Location: Hyderabad, Telangana. Time type: Full time. Job level: Senior Associate. Job type: Regular. Category: Technology Consulting. ID: JR111910

About us: We are the leading provider of professional services to the middle market globally. Our purpose is to instill confidence in a world of change, empowering our clients and people to realize their full potential. Our exceptional people are the key to our unrivaled, inclusive culture and talent experience and our ability to be compelling to our clients. You’ll find an environment that inspires and empowers you to thrive both personally and professionally. There’s no one like you and that’s why there’s nowhere like RSM.

Snowflake Engineer: We are currently seeking an experienced Snowflake Engineer for our Data Analytics team. This role involves designing, building, and maintaining our Snowflake cloud data warehouse. Candidates should have strong Snowflake, SQL, and cloud data solutions experience.

Responsibilities: Design, develop, and maintain efficient and scalable data pipelines in Snowflake, encompassing data ingestion, transformation, and loading (ETL/ELT); a minimal loading sketch follows this posting. Implement and manage Snowflake security, including role-based access control, network policies, and data encryption. Develop and maintain data models optimized for analytical reporting and business intelligence. Collaborate with data analysts, scientists, and stakeholders to understand data requirements and translate them into technical solutions. Monitor and troubleshoot Snowflake performance, identifying and resolving bottlenecks. Automate data engineering processes using scripting languages (e.g., Python, SQL) and orchestration tools (e.g., Airflow, dbt). Design, develop, and deploy APIs within Snowflake using stored procedures and user-defined functions (UDFs). Lead and mentor a team of data engineers and analysts, providing technical guidance, coaching, and professional development opportunities. Stay current with the latest Snowflake features and best practices. Contribute to the development of data engineering standards and best practices. Document data pipelines, data models, and other technical specifications.

Qualifications: Bachelor’s degree or higher in Computer Science, Information Technology, or a related field. A minimum of 5 years of experience in data engineering and management, including over 3 years of working with Snowflake. Strong understanding of data warehousing concepts, including dimensional modeling, star schemas, and snowflake schemas. Proficiency in SQL and experience with data transformation and manipulation. Experience with ETL/ELT tools and processes. Experience with Apache Iceberg. Strong analytical and problem-solving skills. Excellent communication and collaboration skills.

Preferred qualifications: Snowflake certifications (e.g., SnowPro Core Certification). Experience with scripting languages (e.g., Python) and automation tools (e.g., Airflow, dbt). Experience with cloud platforms (e.g., AWS, Azure, GCP). Experience with data visualization tools (e.g., Tableau, Power BI). Experience with Agile development methodologies. Experience with Snowflake Cortex, including Cortex Analyst, Arctic TILT, and Snowflake AI & ML Studio.

At RSM, we offer a competitive benefits and compensation package for all our people. We offer flexibility in your schedule, empowering you to balance life’s demands, while also maintaining your ability to serve clients. Learn more about our total rewards at https://rsmus.com/careers/india.html. 
RSM does not tolerate discrimination and/or harassment based on race; colour; creed; sincerely held religious beliefs, practices or observances; sex (including pregnancy or disabilities related to nursing); gender (including gender identity and/or gender expression); sexual orientation; HIV Status; national origin; ancestry; familial or marital status; age; physical or mental disability; citizenship; political affiliation; medical condition (including family and medical leave); domestic violence victim status; past, current or prospective service in the Indian Armed Forces; Indian Armed Forces Veterans, and Indian Armed Forces Personnel status; pre-disposing genetic characteristics or any other characteristic protected under applicable provincial employment legislation. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process and/or employment/partnership. RSM is committed to providing equal opportunity and reasonable accommodation for people with disabilities. If you require a reasonable accommodation to complete an application, interview, or otherwise participate in the recruiting process, please send us an email at careers@rsmus.com.
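To anchor the ELT responsibilities above, a minimal Python sketch using the Snowflake connector to stage and bulk-load a file; the account, credentials, and object names are placeholders, and production pipelines would typically drive the same SQL from Airflow or dbt with credentials from a secrets manager.

```python
import snowflake.connector  # pip install snowflake-connector-python

# Placeholder credentials; real pipelines never hard-code these.
conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="RAW",
)

try:
    cur = conn.cursor()
    # Stage a local CSV on the table's internal stage, then bulk-load it.
    cur.execute("PUT file:///tmp/orders.csv @%ORDERS")
    cur.execute(
        "COPY INTO ORDERS FROM @%ORDERS "
        "FILE_FORMAT = (TYPE = CSV SKIP_HEADER = 1)"
    )
    # A downstream transform step (e.g., a MERGE into a model) would run here.
finally:
    conn.close()
```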

Posted 4 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies