
14240 Orchestration Jobs - Page 3

JobPe aggregates job listings for easy access; applications are submitted directly on the original job portal.

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you'll do
- Manage system uptime across cloud-native (AWS, GCP) and hybrid architectures.
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLIs, programming with cloud SDKs).
- Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains.
- Build automated tooling to deploy service requests and push changes into production.
- Build comprehensive, detailed runbooks for detecting, remediating, and restoring services.
- Solve problems and triage complex distributed-architecture service maps.
- Be on call for high-severity application incidents and improve runbooks to reduce MTTR.
- Lead blameless availability postmortems and own the call to action to remediate recurrences.

What experience you need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience.
- 5-7 years of experience in software engineering, systems administration, database administration, and networking.
- 2+ years of experience developing and/or administering software in public cloud; cloud certification strongly preferred.
- Proficiency with continuous integration and continuous delivery tooling and practices.
- System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.).
- Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases.
- Experience in languages such as Python, Bash, Java, Go, JavaScript, and/or Node.js.
- Experience monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.

What could set you apart
- Expertise designing, analyzing, and troubleshooting large-scale distributed systems.
- A systems problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Kubernetes (CKA, CKAD) or cloud certifications.
- A passion for automation and a desire to eliminate toil wherever possible.
- Experience building software or maintaining systems in a highly secure, regulated, or compliant industry.
- Experience with, and enthusiasm for, working within a DevOps culture and as part of a team.

Additional requirements
- BS in Computer Science or a related field.
- 2+ years of experience developing and/or administering software in public cloud.
- 5+ years of programming experience (Python, Bash/shell scripting, Java, Go, etc.).
- 3+ years of experience monitoring infrastructure and application performance.
- 5+ years of system administration experience, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.).
- 5+ years of experience with continuous integration and continuous delivery tooling and practices.

Key responsibilities
- Kubernetes: Design, deploy, and manage production-ready Kubernetes clusters.
- Cloud Infrastructure: Build and maintain scalable infrastructure on GCP using tools like Terraform.
- Performance: Identify and resolve performance bottlenecks in applications and infrastructure.
- Observability: Implement monitoring and logging to proactively detect and resolve issues (a minimal uptime-probe sketch follows this listing).
- Incident Response: Participate in on-call rotations, troubleshooting and resolving production incidents.
- Collaboration: Promote reliability best practices and ensure smooth deployments.
- Automation: Build CI/CD pipelines, automated tooling, and runbooks.
- Problem Solving: Triage complex issues, lead blameless postmortems, and drive remediation.
- Mentorship: Guide and mentor other SREs.
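Monitoring and runbook automation of the kind this posting describes often starts as a small script. Below is a minimal sketch of an uptime probe with retries and exponential backoff, using only the Python standard library; the endpoint URL, timeout, and retry count are invented placeholders, not details from the posting.

    #!/usr/bin/env python3
    """Probe a service endpoint and report availability (URL and thresholds are placeholders)."""
    import time
    import urllib.error
    import urllib.request

    SERVICE_URL = "https://example.internal/healthz"  # hypothetical health endpoint
    TIMEOUT_S = 5
    RETRIES = 3

    def probe(url: str) -> bool:
        """Return True if the endpoint answers HTTP 200 within the timeout."""
        for attempt in range(1, RETRIES + 1):
            try:
                with urllib.request.urlopen(url, timeout=TIMEOUT_S) as resp:
                    return resp.status == 200
            except (urllib.error.URLError, TimeoutError):
                time.sleep(2 ** attempt)  # exponential backoff between attempts
        return False

    if __name__ == "__main__":
        start = time.monotonic()
        healthy = probe(SERVICE_URL)
        print(f"healthy={healthy} checked_in={time.monotonic() - start:.1f}s")

In a real runbook this would feed an alerting channel or ticketing system rather than stdout.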

Posted 19 hours ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

What you’ll do
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network or service operations and quality.
- Research, create, and develop software applications to extend and improve on Equifax Solutions.
- Manage sole project priorities, deadlines, and deliverables.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in sprint planning, sprint retrospectives, and other team activities.

What experience you need
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in Java & SQL
- 2+ years of experience with cloud technology: GCP, AWS, or Azure
- 2+ years of experience designing and developing cloud-native solutions
- 2+ years of experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes
- 3+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with understanding of infrastructure-as-code concepts, Helm Charts, and Terraform constructs

What could set you apart
- Knowledge or experience with Apache Beam for stream and batch data processing.
- Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to data visualization tools or platforms.

Posted 19 hours ago

Apply

0 years

0 Lacs

Saket, Delhi, India

On-site

Roles and Responsibilities:
● Design, develop, and maintain critical software in a fast-paced, quality-conscious environment
● Quickly understand complex systems/code and own key pieces of the system, including the delivered quality
● Diagnose and troubleshoot complex problems in a distributed computing environment
● Work alongside other engineers and cross-functional teams to diagnose/troubleshoot any production performance-related issues
● Work in Python and Shell, and build systems on Docker
● Define and set development, test, release, update, and support processes for DevOps operation
● Identify and deploy cybersecurity measures by continuously performing vulnerability assessment and risk management
● Strive for continuous improvement and build continuous integration, continuous delivery, and continuous deployment (CI/CD) pipelines
● Manage periodic reporting on progress to management and the customer

Skills:
● Familiarity with scripting languages: Python, shell scripting
● Proper understanding of networking and security protocols (HTTPS, SSL, certificates); see the certificate-check sketch after this listing
● Experience in building containers and container orchestration applications (K8s/ECS/Docker)
● Experience working on Linux-based infrastructure, Git, CI/CD tools, Jenkins, Terraform
● Configuring and managing databases such as MySQL, PostgreSQL, Mongo
● Working knowledge of various tools, open-source technologies, and cloud services (AWS preferably)
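As a small illustration of the networking/SSL skills in the list above, here is a minimal sketch that reports how many days remain on a host's TLS certificate, using only the Python standard library; the host list is a placeholder.

    #!/usr/bin/env python3
    """Report days remaining on a host's TLS certificate (hosts are placeholders)."""
    import socket
    import ssl
    import time

    def cert_days_remaining(host: str, port: int = 443) -> float:
        """Connect, read the peer certificate, and compute days until expiry."""
        ctx = ssl.create_default_context()
        with socket.create_connection((host, port), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=host) as tls:
                not_after = tls.getpeercert()["notAfter"]
        return (ssl.cert_time_to_seconds(not_after) - time.time()) / 86400

    if __name__ == "__main__":
        for host in ["example.com"]:  # replace with the hosts you operate
            print(f"{host}: certificate expires in {cert_days_remaining(host):.0f} days")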

Posted 19 hours ago

Apply

2.0 years

0 Lacs

Bhilai, Chhattisgarh, India

On-site

Job Summary: We are seeking an experienced DevOps Engineer with a strong background in deploying and managing AI applications on Azure. The ideal candidate has experience deploying AI systems, understands AI agentic architectures, and can optimize and manage LLM-based applications in production environments.

Key Responsibilities:
- Deploy, scale, and monitor AI applications on Microsoft Azure (AKS, Azure Functions, App Services, etc.).
- Build and optimize AI agentic systems for robust and efficient performance (a minimal resilience sketch follows this listing).
- Implement CI/CD pipelines for seamless updates and deployments.
- Manage containerized services using Docker/Kubernetes.
- Monitor infrastructure cost, performance, and uptime.
- Collaborate with AI engineers to understand application requirements and support smooth deployment.
- Ensure compliance with data security and privacy standards.

Requirements:
- 2+ years of experience in deploying and managing AI/ML applications.
- Proficiency in Azure cloud services and DevOps practices.
- Familiarity with LLM-based systems, LangChain, vector DBs, and Python.
- Experience with containerization tools (Docker) and orchestration (Kubernetes).
- Understanding of AI system architecture, including agentic workflows.
- Strong problem-solving and optimization skills.

Preferred Qualifications:
- Experience with Gemini, OpenAI, Anthropic, or Hugging Face APIs.
- Familiarity with LangChain, LlamaIndex, or ChromaDB.
- Prior experience managing high-availability, secure, and cost-optimized AI deployments.
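Managing LLM-based applications in production usually means wrapping provider calls in timeouts and retries. A minimal sketch of that pattern follows; call_llm is a hypothetical placeholder for a real client call (Azure OpenAI, Gemini, etc.), not an API named in this posting.

    """Jittered-retry wrapper around an LLM call; call_llm is a hypothetical stub."""
    import random
    import time

    class LLMUnavailable(Exception):
        """Raised when the provider is temporarily unreachable or throttled."""

    def call_llm(prompt: str) -> str:
        raise LLMUnavailable("stub: replace with a real provider client call")

    def call_with_retries(prompt: str, attempts: int = 4) -> str:
        for attempt in range(1, attempts + 1):
            try:
                return call_llm(prompt)
            except LLMUnavailable:
                if attempt == attempts:
                    raise
                # jittered exponential backoff avoids synchronized retry storms
                time.sleep((2 ** attempt) + random.uniform(0, 0.5))
        raise RuntimeError("unreachable")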

Posted 19 hours ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Senior Engineer - Cloud Operations (Platform Support)

As a Cloud Operations Engineer in our Cloud Operations Center, you will be a key player in ensuring the 24x7x365 smooth operation of Saviynt's Enterprise Identity Cloud. This role focuses on maintaining the stability, performance, and reliability of our platform, with a strong emphasis on application-layer support and operational ownership. You will work closely with other operations team members, development, and engineering to resolve issues, implement improvements, and provide exceptional support. This is an opportunity for someone who enjoys operational challenges and problem-solving in a dynamic cloud environment and wants to see their work through to completion.

WHAT YOU WILL BE DOING
· Apply strong pod-level troubleshooting skills in AKS/EKS (not just restarting pods); a minimal pod-diagnostics sketch follows this listing.
· Deeply investigate and analyze application (Java, Grails, Hibernate) and database (RDS, MySQL) performance issues, identifying root causes and implementing solutions.
· Oversee the monitoring of our SaaS applications and underlying infrastructure (Kubernetes on AWS and Azure, VPN connections, customer applications, Elasticsearch, MySQL) for alerts and performance issues.
· Apply a strong understanding of basic computing concepts such as DNS, IP addressing, networking, and LDAP.
· Participate and contribute effectively in on-call escalations with a strong operational mindset, and provide technical guidance during critical incidents.
· Proactively communicate with customers on technical issues when required.
· Guide junior engineers technically when needed.
· Manage the full lifecycle of alerts, incidents, and service requests reported through FreshService, ensuring timely and accurate logging, prioritization, resolution, and escalation.
· Develop, implement, and maintain operational procedures, runbooks, and knowledge base articles to standardize incident resolution and service request fulfillment.
· Drive continuous improvement initiatives to optimize operational efficiency, reduce incident rates, and improve service request turnaround times.
· Collaborate with backend engineering and development teams to troubleshoot complex issues, identify root causes, and implement preventative measures.
· Ensure adherence to defined SLAs (Service Level Agreements) and KPIs (Key Performance Indicators) for operational performance.
· Maintain operational documentation, including system diagrams, contact lists, and escalation paths.
· Ensure compliance with relevant security and compliance policies.
· Plan and coordinate scheduled maintenance activities with minimal impact to service availability.

WHAT YOU BRING
· Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
· Minimum of 3+ years of experience in IT/cloud operations and application support (specifically Java apps), with knowledge of cloud infrastructure (AWS and Azure).
· Strong experience with application support (Java, Grails, Hibernate) and performance analysis in a production environment, able to pinpoint performance degradation through analysis.
· Strong understanding of cloud computing concepts, architectures, and services on both AWS and Azure platforms.
· Working knowledge of containerization and orchestration technologies, specifically Kubernetes.
· End-to-end technical accountability and operational ownership.
· Willingness to work in a 24/7 operating model.
· Experience managing and troubleshooting network connectivity, including VPNs and connections to external networks.
· Familiarity with monitoring tools and practices, with experience in setting up and responding to alerts.
· Hands-on experience with log management and analysis tools, preferably Elasticsearch.
· Working knowledge of database systems, preferably MySQL, including L2 troubleshooting and performance monitoring.
· Experience with ITSM (IT Service Management) systems, preferably FreshService, including incident, problem, and service request management processes.
· Excellent problem-solving, analytical, and troubleshooting skills with a data-driven approach.
· Experience with Grafana systems and dashboards is a plus.
· Strong communication (written and verbal), interpersonal, and presentation skills.
· Ability to work effectively under pressure and manage multiple priorities in a fast-paced environment.
· Experience in developing and documenting operational procedures and runbooks.
· Experience with automation tools and scripting languages (e.g., Python, Bash) is a plus.
· Experience working in a SaaS environment is highly desirable.

We offer you a competitive total rewards package, learning, and tremendous opportunities to grow and advance in your career. At Saviynt, it is not typical for an individual to be hired at or near the top of the range for their role, and final compensation decisions depend on many factors including, but not limited to: location; skill sets; experience and training; licensure and certifications; and other relevant business and organizational needs. A reasonable estimate of the current range is $Min,000 - $Max,000 annually. You may also be eligible to participate in a Saviynt discretionary bonus plan, subject to the rules governing the program, whereby an award, if any, depends on various factors including, without limitation, individual and organizational performance.

If required for this role, you will:
- Complete security & privacy literacy and awareness training during onboarding and annually thereafter
- Review (initially and annually thereafter), understand, and adhere to Information Security/Privacy Policies and Procedures such as (but not limited to):
> Data Classification, Retention & Handling Policy
> Incident Response Policy/Procedures
> Business Continuity/Disaster Recovery Policy/Procedures
> Mobile Device Policy
> Account Management Policy
> Access Control Policy
> Personnel Security Policy
> Privacy Policy

Saviynt is an amazing place to work. We are a high-growth, Platform as a Service company focused on Identity Authority to power and protect the world at work.
You will experience tremendous growth and learning opportunities through challenging yet rewarding work that directly impacts our customers, all within a welcoming and positive work environment. If you're resilient and enjoy working in a dynamic environment, you belong with us! Saviynt is an equal opportunity employer and we welcome everyone to our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
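A first pass at the pod-level troubleshooting described above is often scripted. Here is a minimal sketch using the official Kubernetes Python client to surface non-running or restarting pods; it assumes the kubernetes package and a reachable kubeconfig, and the namespace is an example.

    """List non-running pods and restart counts in a namespace."""
    from kubernetes import client, config

    def unhealthy_pods(namespace: str = "default"):
        config.load_kube_config()  # use config.load_incluster_config() inside a pod
        v1 = client.CoreV1Api()
        for pod in v1.list_namespaced_pod(namespace).items:
            restarts = sum(cs.restart_count for cs in (pod.status.container_statuses or []))
            if pod.status.phase != "Running" or restarts > 0:
                yield pod.metadata.name, pod.status.phase, restarts

    if __name__ == "__main__":
        for name, phase, restarts in unhealthy_pods():
            print(f"{name}: phase={phase} restarts={restarts}")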

Posted 19 hours ago

Apply

0 years

0 Lacs

India

Remote

Role: Oracle Order Management Cloud
Location: Remote, India
Working hours: 2.00 PM to 11.00 PM IST

Job Description:
1) Custom node addition and custom DOO (Distributed Order Orchestration) configurations with various pause/release criteria and compensation rules
2) Configuration of Oracle Extensions
3) Various pricing algorithms fitting business needs, using pricing attributes and various fields on the order
4) Configuration of the PO mapper so that SO fields can be carried onto the PO (example: SO EFF taken as PO price)
5) Configuration of the OM-to-AR mapper
6) Configuration of various external connectors and routing rules
7) Adding custom line statuses based on custom DOO nodes

Posted 19 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: AI/ML Agent Developer
Location: All EXL Locations
Department: Artificial Intelligence & Data Science
Reports To: Director of AI Engineering / Head of Intelligent Automation

Position Summary: We are seeking an experienced and innovative AI/ML Agent Developer to design, develop, and deploy intelligent agents within a multi-agent orchestration framework. This role involves building autonomous agents that leverage LLMs, reinforcement learning, prompt engineering, and decision-making strategies to perform complex data and workflow tasks. You'll work closely with cross-functional teams to operationalize AI across diverse use cases such as annotation, data quality, knowledge graph construction, and enterprise automation.

Key Responsibilities:
- Design and implement modular, reusable AI agents capable of autonomous decision-making using LLMs, APIs, and tools like LangChain, AutoGen, or Semantic Kernel (a toy agent-loop sketch follows this listing).
- Engineer prompt strategies for task-specific agent workflows (e.g., document classification, summarization, labeling, sentiment detection).
- Integrate ML models (NLP, CV, RL) into agent behavior pipelines to support inference, learning, and feedback loops.
- Contribute to multi-agent orchestration logic, including task delegation, tool selection, message passing, and memory/state management.
- Collaborate with MLOps, data engineering, and product teams to deploy agents at scale in production environments.
- Develop and maintain agent evaluations, unit tests, and automated quality checks for reliability and interpretability.
- Monitor and refine agent performance using logging, observability tools, and feedback signals.

Required Qualifications:
- Bachelor's or Master's in Computer Science, AI/ML, Data Science, or a related field.
- 3+ years of experience developing AI/ML systems; 1+ year in agent-based architectures or LLM-enabled automation.
- Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn).
- Experience with LLM frameworks (LangChain, AutoGen, OpenAI, Anthropic, Hugging Face Transformers).
- Strong grasp of NLP, prompt engineering, reinforcement learning, and decision systems.
- Knowledge of cloud environments (AWS, Azure, GCP) and CI/CD for AI systems.

Preferred Skills:
- Familiarity with multi-agent frameworks and agent orchestration design patterns.
- Experience building autonomous AI applications for data governance, annotation, or knowledge extraction.
- Background in human-in-the-loop systems, active learning, or interactive AI workflows.
- Understanding of vector databases (e.g., FAISS, Pinecone) and semantic search.

Why Join Us:
- Work at the forefront of AI orchestration and intelligent agents.
- Collaborate with a high-performing team driving innovation in enterprise AI platforms.
- Opportunity to shape the future of AI-based automation in real-world domains like healthcare, finance, and unstructured data.
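Frameworks like LangChain and AutoGen ultimately wrap a plan-then-dispatch loop. Below is a toy, framework-free sketch of that pattern; every name in it (the tools, the stubbed planner) is invented for illustration, with plan() standing in for an LLM call.

    """Toy agent loop: plan a task into tool calls, then dispatch them."""

    TOOLS = {
        "word_count": lambda text: str(len(text.split())),
        "shout": lambda text: text.upper(),
    }

    def plan(task: str) -> list[tuple[str, str]]:
        """Stand-in for an LLM that decomposes a task into (tool, argument) steps."""
        return [("word_count", task), ("shout", task)]

    def run_agent(task: str) -> list[str]:
        results = []
        for tool_name, arg in plan(task):
            tool = TOOLS.get(tool_name)
            if tool is None:
                results.append(f"unknown tool: {tool_name}")  # guard against bad plans
                continue
            results.append(f"{tool_name} -> {tool(arg)}")
        return results

    if __name__ == "__main__":
        print("\n".join(run_agent("classify this support ticket")))

Real agents replace plan() with a model call, add memory/state, and validate tool arguments before dispatch.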

Posted 20 hours ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Why this role matters: NRev is in true zero-to-one territory: we're building an AI-powered Revenue Orchestration platform that lets GTM teams spin up custom agents to automate, enrich, and accelerate every step of the enterprise sales cycle. We have early revenue, rabidly enthusiastic design partners, and an awesome product.

About the role: We are looking for a GTM Engineer who deeply understands LLMs, agentic workflows, and modern marketing systems. You'll be responsible for building systems that power our outbound GTM motions, from intelligent lead scoring and enrichment to AI-powered messaging, campaigns, and automation pipelines. You'll work at the intersection of engineering, marketing, and AI, building scalable and personalized systems that drive revenue with intelligent automation rather than headcount.

What you will do:
- Build AI agents that automate prospecting, enrichment, scoring, and outbound across channels (email, LinkedIn, ads); a toy lead-scoring sketch follows this listing.
- Design workflows to execute GTM campaigns autonomously.
- Develop and maintain agents for tasks like hyper-personalized outreach, campaign planning, market analysis, and intent analysis from search, web, and CRM data.
- Optimize token use, context windows, retrieval pipelines (RAG), and prompt engineering.

Marketing Infrastructure & Experimentation:
- Set up tracking, attribution, segmentation, and cohort reporting across channels.
- Run growth experiments using AI agents (e.g., personalized outbound campaigns, landing page testing).
- Automate repetitive marketing ops like lead routing, qualification, and CRM hygiene.

Collaboration:
- Work closely with marketing, sales, product, and founders to identify GTM bottlenecks and build systems to fix them.
- Document workflows, train teammates, and improve tooling over time.

You'll thrive if you have:
- Strong LLM know-how: you've built with open-source models and know how to optimize agents, prompts, and workflows.
- Agentic design experience: you understand how to architect agentic workflows on your own.
- Marketing understanding: you can speak the language of MQLs, attribution, ICPs, TAM/SAM/SOM, CAC/LTV, and understand what drives B2B GTM.
- A builder's mindset: you're not just integrating tools; you're creating new systems and ideas to help us scale 10x.
- Optional but nice: Python, JavaScript, TypeScript, SQL.

Why Join Us:
- Join a fast-moving team building AI-powered GTM systems from the ground up.
- Work closely with founders, sales, and growth: real impact, real ownership.
- Build automations that replace headcount and unlock new growth levers.
- Flexible hours, async culture, and a strong bias for action.
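As a toy illustration of the rule-based end of lead scoring (a model or LLM would normally supply richer signals), here is a minimal sketch; the fields, weights, and threshold are all invented.

    """Toy rule-based lead scorer; signals, weights, and threshold are invented."""

    WEIGHTS = {
        "title_match": 30,           # prospect title fits the ICP
        "company_size_fit": 25,      # company size within the target band
        "recent_intent_signal": 35,  # e.g., pricing-page visit in the last 7 days
        "email_verified": 10,
    }

    def score_lead(lead: dict) -> int:
        return sum(pts for signal, pts in WEIGHTS.items() if lead.get(signal))

    def qualified(lead: dict, threshold: int = 60) -> bool:
        return score_lead(lead) >= threshold

    if __name__ == "__main__":
        lead = {"title_match": True, "recent_intent_signal": True, "email_verified": True}
        print(score_lead(lead), qualified(lead))  # 75 True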

Posted 20 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Gen AI / Python Developer (Contract)
Work Location: Pune (Hybrid)
Contract Tenure: 12 Months

About KX: KX software powers the time-aware, data-driven decisions that enable fast-moving companies to outpace competitors, realizing the full potential of their AI investments. The KX platform delivers transformational value by addressing data challenges related to completeness, timeliness, and efficiency, ensuring companies understand change over time and can achieve faster, more accurate insights at any scale, cost-effectively. KX is essential to the operations of the world's top investment banks, aerospace and defence, high-tech manufacturing, healthcare and life sciences, automotive, and fleet telematics organizations. The company has established offices and a robust customer base across North America, Europe, and Asia Pacific.

Overview of the Role: KX is hiring a Gen AI / Python Developer to support our Generative AI and cloud-native application initiatives. This is a contract role where you'll contribute to building AI/ML-powered pipelines and infrastructure that drive real-time data intelligence. You'll work closely with global R&D and engineering teams, leveraging Python, containerized microservices, and GenAI frameworks to accelerate innovation.

Key Responsibilities:
- Build and support Python-based applications powering AI/ML and real-time data systems (a small data-processing sketch follows this listing)
- Develop and optimize cloud-native solutions for high-performance data workloads
- Automate deployments using Docker, CI/CD, and GitOps practices
- Contribute to scalable architectures and assist in LLM or GenAI framework integration

Skills:
- Python Development: strong coding skills with libraries for data processing and automation
- Cloud Engineering: experience deploying in AWS/GCP/Azure environments
- DevOps & Containers: proficient with Docker, CI/CD tools, and Git workflows
- Data & API Integration: knowledge of analytics pipelines, REST APIs, and microservices
- GenAI & LLM Exposure: familiarity with LangChain, Hugging Face, or similar frameworks
- Communication: strong problem-solving and cross-functional collaboration skills

Essential Experience:
- 3+ years of Python development experience in cloud environments
- Strong knowledge of Python libraries, data processing, and automation scripting
- Experience with Docker, CI/CD tools, and version control (Git)
- Exposure to data analytics, container orchestration, and API integrations
- Good communication and problem-solving skills

Preferred Qualifications:
- Familiarity with LLMs, NLP pipelines, or frameworks like LangChain and Hugging Face
- Experience with cloud platforms (AWS/GCP/Azure)
- Understanding of microservices and DevOps principles

Why Choose KX:
- Data Driven: We lead with instinct and follow fact.
- Naturally Curious: We lean in, listen, and learn fast.
- All In: We take ownership, take on challenges, and give it our all.
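As a flavor of the Python data-processing work described above, a tiny pandas rollup over in-memory records; the schema (symbol/price/qty) is invented for illustration and loosely echoes the market-data domain KX serves.

    """Aggregate traded notional per symbol with pandas (schema is invented)."""
    import pandas as pd

    records = [
        {"symbol": "AAA", "price": 10.0, "qty": 100},
        {"symbol": "AAA", "price": 10.5, "qty": 50},
        {"symbol": "BBB", "price": 7.2, "qty": 200},
    ]

    df = pd.DataFrame(records)
    df["notional"] = df["price"] * df["qty"]
    # total traded notional per symbol: the kind of rollup a pipeline might emit
    summary = df.groupby("symbol", as_index=False)["notional"].sum()
    print(summary)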

Posted 20 hours ago

Apply

8.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote

Greetings from Tata Consultancy Services!!!

Job Title: Network Pre-Sales Solution Architect
Experience Required: 8-12 years
Location: PAN INDIA

- Must have worked in network operations and deployment, such as datacenter builds, network migrations, etc.
- Hands-on experience with network devices such as routers, switches, wireless and network authentication, remote-access VPN, firewalls, IPS/IDS, load balancers, and network management tools
- Experience designing network solutions for new datacenter builds and new office site builds
- Experience in network solutions (presales), having worked on RFP/RFI and proactive engagements
- Understands different network vendor products, with the ability to choose the right match for customer requirements based on technology and cost-impact analysis
- Understands the high-level technical differences between OEM vendors, such as SD-WAN across Viptela, Silver Peak, and Fortinet
- Preferably experienced in working with multiple OEM vendors on creating designs, BoMs, and cost estimations
- Good experience writing technical solution documents for customer submission
- Good experience creating PPTs for customer solution defense
- Capable of presenting the technical solution to customers, with fluent communication and presentation skills
- Able to create pre-sales solution responses in documents and PPTs, and clearly explain to customers the reasons for the proposed solution
- Analytical ability to understand the customer's pulse on requirements, objectives, and expectations, and to build pre-sales solutions with a proper business case, justification, and winning approach
- Experience preferable on load balancers, firewalls, NMS & OEM-native tools, DDI, network automation & orchestration, IPS, IDS, application delivery controllers, WAN sizing, SDN, SD-WAN, SD-LAN, cloud networking, network SaaS solutions, etc.
- Candidates who worked on RFX deals for Fortune 500 global customers and converted those opportunities will be given preference
- Strategic decision-making skills
- Basic knowledge of cloud networking, working with different internal teams such as Compute, Workplace, Public Cloud, Private Cloud, Transition, and Security to meet the solution RFX requirements
- Works closely with enterprise solution architects and the sales customer-focus team to understand their objectives and win deals
- Certifications from leading networking vendors such as CCNP, Aruba, Juniper, CCIE preferable

Posted 20 hours ago

Apply

8.0 - 12.0 years

0 Lacs

Delhi, India

Remote

Greetings from Tata Consultancy Services!!!

Job Title: Network Pre-Sales Solution Architect
Experience Required: 8-12 years
Location: PAN INDIA

- Must have worked in network operations and deployment, such as datacenter builds, network migrations, etc.
- Hands-on experience with network devices such as routers, switches, wireless and network authentication, remote-access VPN, firewalls, IPS/IDS, load balancers, and network management tools
- Experience designing network solutions for new datacenter builds and new office site builds
- Experience in network solutions (presales), having worked on RFP/RFI and proactive engagements
- Understands different network vendor products, with the ability to choose the right match for customer requirements based on technology and cost-impact analysis
- Understands the high-level technical differences between OEM vendors, such as SD-WAN across Viptela, Silver Peak, and Fortinet
- Preferably experienced in working with multiple OEM vendors on creating designs, BoMs, and cost estimations
- Good experience writing technical solution documents for customer submission
- Good experience creating PPTs for customer solution defense
- Capable of presenting the technical solution to customers, with fluent communication and presentation skills
- Able to create pre-sales solution responses in documents and PPTs, and clearly explain to customers the reasons for the proposed solution
- Analytical ability to understand the customer's pulse on requirements, objectives, and expectations, and to build pre-sales solutions with a proper business case, justification, and winning approach
- Experience preferable on load balancers, firewalls, NMS & OEM-native tools, DDI, network automation & orchestration, IPS, IDS, application delivery controllers, WAN sizing, SDN, SD-WAN, SD-LAN, cloud networking, network SaaS solutions, etc.
- Candidates who worked on RFX deals for Fortune 500 global customers and converted those opportunities will be given preference
- Strategic decision-making skills
- Basic knowledge of cloud networking, working with different internal teams such as Compute, Workplace, Public Cloud, Private Cloud, Transition, and Security to meet the solution RFX requirements
- Works closely with enterprise solution architects and the sales customer-focus team to understand their objectives and win deals
- Certifications from leading networking vendors such as CCNP, Aruba, Juniper, CCIE preferable

Posted 20 hours ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the role: We’re looking for a Senior Engineering Manager to lead our Data / AI Platform and MLOps teams at slice. In this role, you'll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by various functions like legal, CX, and product in a secure manner. This is a hands-on leadership role, perfect for someone who enjoys solving deep technical problems while growing people and teams.

What You Will Do
- Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, and data products enabling product experience through data)
- Maintain hands-on technical leadership: lead by example through code reviews, architecture decisions, and direct technical contribution
- Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions
- Own the technical roadmap for our data platform, including infra modernization, performance, scalability, and cost efficiency
- Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores
- Build and scale ML infrastructure with MLOps best practices, including automated pipelines, model monitoring, and real-time inference systems
- Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization
- Implement enterprise AI governance, including model security, access controls, and compliance frameworks for internal AI applications
- Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture
- Establish and enforce best practices around data governance, access controls, and data quality
- Ensure regulatory compliance with GDPR, PCI-DSS, and SOX through automated compliance monitoring and secure data pipelines
- Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails
- Coach engineers and team leads through regular 1:1s, feedback, and performance conversations

What You Will Need
- 10+ years of engineering experience, including 2+ years managing data or infra teams, with proven hands-on technical leadership
- Strong stakeholder management skills, with experience translating business requirements into data solutions and identifying product enhancement opportunities
- Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems
- Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform (a minimal Airflow DAG sketch follows this listing)
- Hands-on experience building AI/ML platforms, including MLOps tools, plus experience with LLM hosting, model serving, and secure AI application development
- Proven experience improving performance, cost, and observability in large-scale data systems
- Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and infrastructure-as-code
- Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis)
- Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting platforms, and secure AI application development patterns
- Comfort working in fast-paced, product-led environments, with the ability to balance innovation and regulatory constraints
- Bonus: experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations

Life at slice
Life so good, you'd think we're kidding:
- Competitive salaries. Period.
- Extensive medical insurance that looks out for our employees and their dependents. We'll love you and take care of you, our promise.
- Flexible working hours. Just don't call us at 3AM, we like our sleep schedule.
- Tailored vacation and leave policies so that you enjoy every important moment in your life.
- A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here.
- Learning and upskilling opportunities. Seriously, not kidding.
- Good food, games, and a cool office to make you feel like home.
- An environment so good, you'll forget the term "colleagues can't be your friends".
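For the orchestration tooling named in the requirements, the basic unit of work is a DAG. Here is a minimal Airflow 2.x sketch with placeholder task bodies; the dag_id, schedule, and tasks are invented for illustration.

    """Minimal Airflow DAG: three placeholder tasks chained in order."""
    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    def extract(): print("pull from source")
    def transform(): print("clean and join")
    def load(): print("write to warehouse")

    with DAG(
        dag_id="example_batch_pipeline",
        start_date=datetime(2024, 1, 1),
        schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
        catchup=False,
    ) as dag:
        t_extract = PythonOperator(task_id="extract", python_callable=extract)
        t_transform = PythonOperator(task_id="transform", python_callable=transform)
        t_load = PythonOperator(task_id="load", python_callable=load)
        t_extract >> t_transform >> t_load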

Posted 20 hours ago

Apply

0 years

0 Lacs

Kochi, Kerala, India

On-site

We are seeking a highly skilled and experienced Team Lead - PHP Laravel Developer to oversee a team of developers, ensure high-quality project delivery, and contribute to building scalable web applications. The ideal candidate should have an in-depth understanding of PHP, Laravel, and modern development practices, with proven leadership experience.

Key Skills and Qualifications:

Technical Skills:
● PHP Laravel: Expertise in building scalable applications and designing RESTful APIs.
● MySQL: Skilled in database design, optimization, and writing complex SQL queries.
● Microservices Architecture: Hands-on experience with microservices, containerization tools like Docker, and orchestration tools like Kubernetes.
● Git: Proficient in version control, including branching, merging, and repository management.

Leadership & Methodologies:
● Proven experience leading and mentoring development teams.
● Strong expertise in Agile development methodologies such as Scrum and Kanban.
● Ability to plan and manage sprints effectively.

Soft Skills:
● Strong problem-solving skills with the ability to think critically and creatively.
● Excellent communication and collaboration abilities.
● Strong interpersonal and team management skills.

Experience Required: 8-10 years

Posted 20 hours ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

DevOps Engineer

Talent Worx is looking for a dynamic and skilled DevOps Engineer to join our progressive team. In this role, you will integrate development and operations to optimize and streamline our software delivery processes. Your expertise in automation, system management, and cloud services will contribute significantly to our commitment to delivering high-quality software solutions efficiently and effectively.

Key Responsibilities:
- Implement and manage CI/CD pipelines to ensure smooth deployment and integration of software updates (a minimal build-and-push sketch follows this listing)
- Automate infrastructure provisioning and configuration management using Infrastructure as Code (IaC) tools
- Work closely with development teams to refine and optimize build processes, deployment strategies, and collaboration practices
- Monitor system performance, ensuring high availability and reliability of applications and services
- Manage system security, including setup, deployment, and maintenance of firewall and access control systems
- Continuously improve system performance through proactive monitoring and maintenance
- Assist in troubleshooting issues across the development and production environments
- Document processes, best practices, and configurations for reproducibility and efficiency

Required Skills and Qualifications:
- 5+ years of experience as a DevOps Engineer or in a similar role
- Proficiency in scripting languages such as Python, Shell, or Bash for automation tasks
- Hands-on experience with CI/CD tools (e.g., Jenkins, GitHub Actions) and version control systems (e.g., Git)
- In-depth knowledge of cloud platforms such as AWS, Azure, or Google Cloud
- Familiarity with container technologies like Docker and orchestration tools like Kubernetes
- Understanding of databases, both SQL and NoSQL, including implementation and management
- Strong analytical and troubleshooting skills, with a keen focus on operational excellence
- Exceptional communication skills and the ability to work collaboratively in a team environment

Preferred Skills:
- Knowledge of microservices architecture and serverless architectures
- Experience implementing automated security practices in CI/CD pipelines
- Familiarity with Agile methodologies and frameworks

Education: Bachelor's degree in Computer Science, Engineering, or a related field

About the employer: Talent Worx is an emerging recruitment consulting and services firm. We are hiring for our product-based healthcare client, a leading precision medicine company focused on guarding wellness and giving every person more time free from cancer. Founded in 2012, the company is transforming patient care by providing critical insights into what drives disease through its advanced blood and tissue tests, real-world data, and AI analytics.
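CI/CD automation at this level is often disciplined scripting. Here is a minimal sketch that shells out to the Docker CLI to build and push an image; the registry path is a placeholder, and a real pipeline would add login, tagging conventions, and error reporting.

    """Build and push a container image via the Docker CLI (registry is a placeholder)."""
    import subprocess
    import sys

    IMAGE = "registry.example.com/myapp"  # hypothetical registry/repository

    def run(cmd: list[str]) -> None:
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)  # raise on failure so the pipeline step fails

    def build_and_push(tag: str) -> None:
        ref = f"{IMAGE}:{tag}"
        run(["docker", "build", "-t", ref, "."])
        run(["docker", "push", ref])

    if __name__ == "__main__":
        build_and_push(sys.argv[1] if len(sys.argv) > 1 else "dev")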

Posted 20 hours ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Title: Senior Google Cloud Platform (GCP) Data Engineer
Location: Hybrid (Bengaluru, India)
Job Type: Full-Time
Experience Required: Minimum 6 Years
Joining: Immediate or within 1 week

About the Company: Tech T7 Innovations is a global IT solutions provider known for delivering cutting-edge technology services to enterprises across various domains. With a team of seasoned professionals, we specialize in software development, cloud computing, data engineering, machine learning, and cybersecurity. Our focus is on leveraging the latest technologies and best practices to create scalable, reliable, and secure solutions for our clients.

Job Summary: We are seeking a highly skilled Senior GCP Data Engineer with over 6 years of experience in data engineering and extensive hands-on expertise in Google Cloud Platform (GCP). The ideal candidate must have a strong foundation in GCS, BigQuery, Apache Airflow/Composer, and Python, with a demonstrated ability to design and implement robust, scalable data pipelines in a cloud environment.

Roles and Responsibilities:
- Design, develop, and deploy scalable and secure data pipelines using Google Cloud Platform components, including GCS, BigQuery, and Airflow.
- Develop and manage robust ETL/ELT workflows using Python, integrated with orchestration tools such as Apache Airflow or Cloud Composer.
- Collaborate with data scientists, analysts, and business stakeholders to gather requirements and deliver reliable and efficient data solutions.
- Optimize BigQuery performance using best practices such as partitioning, clustering, schema design, and query tuning (a minimal partitioned-table sketch follows this listing).
- Manage, monitor, and maintain data lake and data warehouse environments with high availability and integrity.
- Automate pipeline monitoring, error handling, and alerting mechanisms to ensure seamless and reliable data delivery.
- Contribute to architecture decisions involving data modeling, data flow, and integration strategies in a cloud-native environment.
- Ensure compliance with data governance, privacy, and security policies as per enterprise and regulatory standards.
- Mentor junior engineers and drive best practices in cloud engineering and data operations.

Mandatory Skills:
- Google Cloud Platform (GCP): in-depth hands-on experience with GCS, BigQuery, IAM, and Cloud Functions.
- BigQuery (BQ): expertise in large-scale analytics, schema optimization, and data modeling.
- Google Cloud Storage (GCS): strong understanding of data lifecycle management, access controls, and best practices.
- Apache Airflow / Cloud Composer: proficiency in writing and managing complex DAGs for data orchestration.
- Python Programming: advanced skills in automation, API integration, and data processing using libraries like Pandas, PySpark, etc.

Preferred Qualifications:
- Experience with CI/CD pipelines for data infrastructure and workflows.
- Exposure to other GCP services like Dataflow, Pub/Sub, and Cloud Functions.
- Familiarity with Infrastructure as Code (IaC) tools such as Terraform.
- Strong communication and analytical skills for problem-solving and stakeholder engagement.
- GCP certifications (e.g., Professional Data Engineer) will be a significant advantage.
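The partitioning and clustering practices listed above reduce both scan cost and latency. A minimal sketch that creates a date-partitioned, clustered table via BigQuery DDL; it assumes the google-cloud-bigquery package and ambient credentials, and the dataset, table, and columns are examples.

    """Create a date-partitioned, clustered BigQuery table (names are examples)."""
    from google.cloud import bigquery

    client = bigquery.Client()  # project and credentials come from the environment

    ddl = """
    CREATE TABLE IF NOT EXISTS analytics.events (
      event_ts   TIMESTAMP,
      user_id    STRING,
      event_name STRING
    )
    PARTITION BY DATE(event_ts)      -- prune scans to the days queried
    CLUSTER BY user_id, event_name   -- co-locate rows for common filters
    """

    client.query(ddl).result()  # .result() blocks until the job completes
    print("table ready")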

Posted 21 hours ago

Apply

8.0 - 12.0 years

0 Lacs

Mumbai Metropolitan Region

Remote

Greetings from Tata Consultancy Services!!!

Job Title: Network Pre-Sales Solution Architect
Experience Required: 8-12 years
Location: PAN INDIA

- Must have worked in network operations and deployment, such as datacenter builds, network migrations, etc.
- Hands-on experience with network devices such as routers, switches, wireless and network authentication, remote-access VPN, firewalls, IPS/IDS, load balancers, and network management tools
- Experience designing network solutions for new datacenter builds and new office site builds
- Experience in network solutions (presales), having worked on RFP/RFI and proactive engagements
- Understands different network vendor products, with the ability to choose the right match for customer requirements based on technology and cost-impact analysis
- Understands the high-level technical differences between OEM vendors, such as SD-WAN across Viptela, Silver Peak, and Fortinet
- Preferably experienced in working with multiple OEM vendors on creating designs, BoMs, and cost estimations
- Good experience writing technical solution documents for customer submission
- Good experience creating PPTs for customer solution defense
- Capable of presenting the technical solution to customers, with fluent communication and presentation skills
- Able to create pre-sales solution responses in documents and PPTs, and clearly explain to customers the reasons for the proposed solution
- Analytical ability to understand the customer's pulse on requirements, objectives, and expectations, and to build pre-sales solutions with a proper business case, justification, and winning approach
- Experience preferable on load balancers, firewalls, NMS & OEM-native tools, DDI, network automation & orchestration, IPS, IDS, application delivery controllers, WAN sizing, SDN, SD-WAN, SD-LAN, cloud networking, network SaaS solutions, etc.
- Candidates who worked on RFX deals for Fortune 500 global customers and converted those opportunities will be given preference
- Strategic decision-making skills
- Basic knowledge of cloud networking, working with different internal teams such as Compute, Workplace, Public Cloud, Private Cloud, Transition, and Security to meet the solution RFX requirements
- Works closely with enterprise solution architects and the sales customer-focus team to understand their objectives and win deals
- Certifications from leading networking vendors such as CCNP, Aruba, Juniper, CCIE preferable

Posted 21 hours ago

Apply

5.0 years

18 - 25 Lacs

Hyderabad, Telangana, India

On-site

Role: Senior .NET Engineer
Experience: 5-12 Years
Location: Hyderabad
This is a WFO (Work from Office) role.

Mandatory Skills: .NET Core, C#, Kafka, CI/CD pipelines, observability tools, orchestration tools, cloud microservices

Interview Process:
- First round: online test
- Second round: virtual technical discussion
- Manager/HR round: virtual discussion

Company Overview: A globally recognized leader in the fintech industry, delivering cutting-edge trading solutions for professional traders worldwide. With over 15 years of excellence, a robust international presence, and a team of 300+ skilled professionals, we continually push the boundaries of technology to remain at the forefront of financial innovation. Committed to fostering a collaborative and dynamic environment, we prioritize technical excellence, innovation, and continuous growth for our team. Join our agile-based team to contribute to the development of advanced trading platforms in a rapidly evolving industry.

Position Overview: We are seeking a highly skilled Senior .NET Engineer to play a pivotal role in the design, development, and optimization of highly scalable and performant domain-driven microservices for our real-time trading applications. This role demands advanced expertise in multi-threaded environments, asynchronous programming, and modern software design patterns such as Clean Architecture and Vertical Slice Architecture. As part of an agile squad, you will collaborate with cross-functional teams to deliver robust, secure, and efficient systems, adhering to the highest standards of quality, performance, and reliability. This position is ideal for engineers who excel in building low-latency, high-concurrency systems and have a passion for advancing fintech solutions.

Key Responsibilities

System Design and Development
- Architect and develop real-time, domain-driven microservices using .NET Core to ensure scalability, modularity, and performance.
- Leverage multi-threaded programming techniques and asynchronous programming paradigms to build systems optimized for high-concurrency workloads.
- Implement event-driven architectures to enable seamless communication between distributed services, leveraging tools such as Kafka or AWS SQS (a minimal consumer sketch follows this listing).

System Performance and Optimization
- Optimize applications for low latency and high throughput in trading environments, addressing challenges related to thread safety, resource contention, and parallelism.
- Design fault-tolerant systems capable of handling large-scale data streams and real-time events.
- Proactively monitor and resolve performance bottlenecks using advanced observability tools and techniques.

Architectural Contributions
- Contribute to the design and implementation of scalable, maintainable architectures, including Clean Architecture, Vertical Slice Architecture, and CQRS.
- Collaborate with architects and stakeholders to align technical solutions with business requirements, particularly for trading and financial systems.
- Employ advanced design patterns to ensure robustness, fault isolation, and adaptability.

Agile Collaboration
- Participate actively in Agile practices, including Scrum ceremonies such as sprint planning, daily stand-ups, and retrospectives.
- Collaborate with Product Owners and Scrum Masters to refine technical requirements and deliver high-quality, production-ready software.

Code Quality and Testing
- Write maintainable, testable, and efficient code adhering to test-driven development (TDD) methodologies.
- Conduct detailed code reviews, ensuring adherence to best practices in software engineering, coding standards, and system architecture.
- Develop and maintain robust unit, integration, and performance tests to uphold system reliability and resilience.

Monitoring and Observability
- Integrate OpenTelemetry to enhance system observability, enabling distributed tracing, metrics collection, and log aggregation.
- Collaborate with DevOps teams to implement real-time monitoring dashboards using tools such as Prometheus, Grafana, and Elastic (Kibana).
- Ensure systems are fully observable, with actionable insights into performance and reliability metrics.

Required Expertise: Technical Skills
- 5+ years of experience in software development, with a strong focus on .NET Core and C#.
- Deep expertise in multi-threaded programming, asynchronous programming, and handling concurrency in distributed systems.
- Extensive experience designing and implementing domain-driven microservices with advanced architectural patterns like Clean Architecture or Vertical Slice Architecture.
- Strong understanding of event-driven systems, with knowledge of messaging frameworks such as Kafka, AWS SQS, or RabbitMQ.
- Proficiency in observability tools, including OpenTelemetry, Prometheus, Grafana, and Elastic (Kibana).
- Hands-on experience with CI/CD pipelines, containerization using Docker, and orchestration tools like Kubernetes.
- Expertise in Agile methodologies under Scrum practices.
- Solid knowledge of Git and version control best practices.

Beneficial Skills
- Familiarity with Saga patterns for managing distributed transactions.
- Experience in trading or financial systems, particularly low-latency, high-concurrency environments.
- Advanced database optimization skills for relational databases such as SQL Server.

Certifications and Education
- Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field.
- Relevant certifications in software development, system architecture, or AWS technologies are advantageous.

Why Join?
- Exceptional team building and corporate celebrations
- Be part of a high-growth, fast-paced fintech environment
- Flexible working arrangements and supportive culture
- Opportunities to lead innovation in the online trading space

Skills: observability tools, Docker, Git, Grafana, .NET Core, Agile methodologies, CQRS, asynchronous programming, event-driven systems and architectures, CI/CD pipelines, Kubernetes, test-driven development (TDD), OpenTelemetry, Vertical Slice Architecture, Elastic (Kibana), cloud microservices, orchestration tools, containerization, C#, AWS SQS, Clean Architecture, SQL Server, Kafka, Prometheus, Scrum, multi-threaded programming, .NET
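The event-driven consumption at the heart of this role has the same shape in any language. Here is a minimal Python sketch with the confluent-kafka client, offered purely as a cross-language illustration (the role itself is .NET); the broker address, group id, and topic are placeholders.

    """Minimal Kafka consumer loop (broker, group, and topic are placeholders)."""
    from confluent_kafka import Consumer

    conf = {
        "bootstrap.servers": "localhost:9092",
        "group.id": "orders-workers",
        "auto.offset.reset": "earliest",
    }

    consumer = Consumer(conf)
    consumer.subscribe(["orders"])
    try:
        while True:
            msg = consumer.poll(1.0)  # wait up to 1s for the next record
            if msg is None:
                continue
            if msg.error():
                print("consumer error:", msg.error())
                continue
            print(f"key={msg.key()} value={msg.value()!r}")
    finally:
        consumer.close()  # commits final offsets and leaves the group cleanly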

Posted 21 hours ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Senior DotNet Developer
Experience: 8+ Years
Location: Remote
Contract Duration: Long Term
Work Time: 12.00 PM to 9.00 PM IST

Job Purpose: Looking for candidates with 8+ years of experience in the IT industry, possessing strong skills in .NET/.NET Core, Azure Cloud Services, and Azure DevOps. This role involves direct client interaction, so strong communication skills are essential. The candidate must be hands-on in coding and Azure Cloud technologies. Work hours are 8 hours daily, with a mandatory 4-hour overlap with the EST time zone (12 PM - 9 PM IST) to accommodate meetings.

Responsibilities:
- Design, develop, enhance, document, and maintain applications using .NET Core 6/8+, C#, REST APIs, T-SQL, and modern JavaScript/jQuery
- Integrate and support third-party APIs and external services
- Collaborate with cross-functional teams to deliver scalable solutions across the full technology stack
- Identify, prioritize, and execute tasks throughout the Software Development Life Cycle (SDLC)
- Participate in Agile/Scrum ceremonies and manage tasks using Jira
- Understand technical priorities, architectural dependencies, risks, and implementation challenges
- Troubleshoot, debug, and optimize existing solutions with a focus on performance and reliability

Primary Skills:
- 8+ years of hands-on development experience with C#, .NET Core 6/8+, and Entity Framework / EF Core
- Experience in JavaScript, jQuery, and REST APIs
- Expertise in MS SQL Server, including complex SQL queries, stored procedures, views, functions, packages, cursors, tables, and object types
- Unit testing experience using XUnit and MSTest
- Strong knowledge of software design patterns, system architecture, and scalable solution design
- Ability to lead and mentor teams through effective communication and technical ownership
- Strong problem-solving and debugging skills
- Ability to write reusable, testable, and efficient code
- Experience in developing and maintaining frameworks and shared libraries
- Strong technical documentation and leadership skills
- Experience with microservices and Service-Oriented Architecture (SOA)
- Hands-on experience in API integrations
- Minimum 2 years of experience with Azure Cloud Services, including: Azure Functions, Azure Durable Functions, Azure Service Bus, Event Grid, Storage Queues, Blob Storage, Azure Key Vault, SQL Azure, Application Insights, and Azure Monitoring (a minimal Azure Functions sketch follows this listing)

Secondary Skills (Good to Have):
- Familiarity with AngularJS, ReactJS, and other front-end frameworks
- Experience with Azure API Management (APIM)
- Knowledge of Azure containerization and orchestration (AKS/Kubernetes)
- Experience with Azure Data Factory (ADF) and Logic Apps
- Exposure to application support and operational monitoring
- Experience with Azure DevOps and CI/CD pipelines (Classic / YAML)

Certifications Required (If Any):
- Microsoft Certified: Azure Fundamentals
- Microsoft Certified: Azure Developer Associate
- Other relevant certifications in Azure, .NET, or Cloud technologies
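Azure Functions recur throughout this stack. Although the role is .NET, the programming model is quickest to show compactly in Python (v2 model); here is a minimal HTTP-triggered sketch, assuming the azure-functions package and a Functions host, with the route and names invented.

    # function_app.py: minimal HTTP-triggered Azure Function (Python v2 model).
    # Run locally with Azure Functions Core Tools ('func start').
    import azure.functions as func

    app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

    @app.route(route="ping")  # served at /api/ping
    def ping(req: func.HttpRequest) -> func.HttpResponse:
        name = req.params.get("name", "world")  # optional ?name= query parameter
        return func.HttpResponse(f"pong, {name}", status_code=200)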

Posted 21 hours ago

Apply

6.0 years

0 Lacs

Kochi, Kerala, India

Remote

Job Title: Senior Java Developer – Spring Boot
Experience Required: 4–6 Years
Location: Kochi Onsite
Salary Range: Up to ₹15 LPA (commensurate with experience)
Employment Type: Full-Time

About the Role:
We are seeking a Senior Java Developer with strong expertise in Spring Boot and backend systems to join our growing engineering team. The ideal candidate will have 4–6 years of solid development experience, be well-versed in building scalable applications, and possess a proactive, solution-driven mindset.

Key Responsibilities:
Design and develop scalable backend services using Java and Spring Boot.
Own the development lifecycle: design, development, testing, deployment, and maintenance.
Collaborate with architects, product managers, and cross-functional teams to deliver high-quality solutions.
Write clean, reusable, and maintainable code following best practices and coding standards.
Optimize application performance, reliability, and scalability.
Conduct code reviews and mentor junior developers when needed.
Integrate backend systems with databases, third-party APIs, and frontend applications.

Required Skills & Qualifications:
4–6 years of hands-on experience in Java development.
Strong expertise in Spring Boot, Spring MVC, Spring Data JPA, and RESTful API development.
Solid understanding of object-oriented programming, design principles, and architectural patterns.
Proficiency with relational databases like MySQL, PostgreSQL, or Oracle.
Good experience with build tools and CI/CD pipelines (Maven/Gradle, Jenkins/GitLab CI).
Familiarity with version control systems like Git.
Strong analytical and problem-solving skills.
Ability to work independently and lead technical discussions.
Good communication and collaboration abilities.

Nice to Have (Preferred but not mandatory):
Experience with Microservices architecture and Cloud platforms (AWS/GCP/Azure).
Exposure to Docker, Kubernetes, or other containerization/orchestration tools.
Experience with unit testing frameworks (JUnit, Mockito).
Knowledge of security best practices for backend development.
Basic frontend exposure (Angular/React/Thymeleaf) is a plus.

What We Offer:
Salary up to ₹15 LPA, based on experience and fit.
Dynamic and growth-oriented team environment.
Flexible work model (remote/hybrid as applicable).
Medical insurance and standard employee benefits.
Skill enhancement and training support.
Fast-track career growth opportunities.

Application Process:
To apply, send your updated CV to fazil.kt@dynamedhealth.com with the subject line: "Application – Senior Java Developer (Spring Boot)".

Posted 21 hours ago

Apply

5.0 - 7.0 years

25 - 28 Lacs

Pune, Maharashtra, India

On-site

Job Description
We are looking for a Big Data Engineer who will build and manage Big Data pipelines that process the huge structured data sets we use as input to accurately generate analytics at scale for our valued customers. The primary focus will be on choosing optimal solutions for these purposes, then implementing, maintaining, and monitoring them. You will also be responsible for integrating them with the architecture used across the company.

Core Responsibilities
Design, build, and maintain robust data pipelines (batch or streaming) that process and transform data from diverse sources (see the sketch below).
Ensure data quality, reliability, and availability across the pipeline lifecycle.
Collaborate with product managers, architects, and engineering leads to define technical strategy.
Participate in code reviews, testing, and deployment processes to maintain high standards.
Own smaller components of the data platform or pipelines and take end-to-end responsibility.
Continuously identify and resolve performance bottlenecks in data pipelines.
Take initiative, proactively pick up new technologies, and work as a senior individual contributor across our multiple products and features.

Required Qualifications
5 to 7 years of experience in Big Data or data engineering roles.
JVM-based languages like Java or Scala are preferred; for candidates with solid Big Data experience, Python is also acceptable.
Proven, demonstrated experience with distributed Big Data tools and processing frameworks: Apache Spark or equivalent (processing), Kafka or Flink (streaming), and Airflow or equivalent (orchestration).
Familiarity with cloud platforms (e.g., AWS, GCP, or Azure), including services like S3, Glue, BigQuery, or EMR.
Ability to write clean, efficient, and maintainable code.
Good understanding of data structures, algorithms, and object-oriented programming.

Tooling & Ecosystem
Use of version control (e.g., Git) and CI/CD tools.
Experience with data orchestration tools (Airflow, Dagster, etc.).
Understanding of file formats like Parquet, Avro, ORC, and JSON.
Basic exposure to containerization (Docker) or infrastructure-as-code (Terraform is a plus).

Skills: airflow,pipelines,data engineering,scala,python,flink,aws,data orchestration,java,kafka,gcp,parquet,orc,azure,dagster,ci/cd,git,avro,terraform,json,docker,apache spark,big data
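
As a flavor of the Spark-based batch processing this role centers on, here is a minimal PySpark sketch that aggregates raw events into daily counts; the S3 paths, column names, and schema are illustrative assumptions:

    from pyspark.sql import SparkSession, functions as F

    spark = SparkSession.builder.appName("daily-events-batch").getOrCreate()

    # Read raw events; the bucket layout is a placeholder.
    events = spark.read.parquet("s3://example-bucket/raw/events/")

    # Aggregate into per-day, per-type counts.
    daily_counts = (
        events
        .filter(F.col("event_type").isNotNull())
        .groupBy(F.to_date("event_ts").alias("event_date"), "event_type")
        .agg(F.count("*").alias("event_count"))
    )

    # Write back partitioned by date for downstream analytics.
    (daily_counts.write
        .mode("overwrite")
        .partitionBy("event_date")
        .parquet("s3://example-bucket/curated/daily_event_counts/"))

In practice a job like this would be one task in an orchestrated DAG (Airflow or Dagster), with data-quality checks gating the write.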

Posted 22 hours ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

About Us:
Paytm is India's leading mobile payments and financial services distribution company. Pioneer of the mobile QR payments revolution in India, Paytm builds technologies that help small businesses with payments and commerce. Paytm’s mission is to serve half a billion Indians and bring them to the mainstream economy with the help of technology.

Job Summary:
Build systems for collection and transformation of complex data sets for use in production systems
Collaborate with engineers on building and maintaining back-end services
Implement data schema and data management improvements for scale and performance
Provide insights into key performance indicators for the product and customer usage
Serve as the team's authority on data infrastructure, privacy controls, and data security
Collaborate with appropriate stakeholders to understand user requirements
Support efforts for continuous improvement, metrics, and test automation
Maintain operations of the live service as issues arise, on a rotational on-call basis
Verify that the data architecture meets security and compliance requirements and expectations
Learn fast and adapt quickly at a rapid pace

Minimum Qualifications:
Bachelor's degree in computer science, computer engineering, or a related field, or equivalent experience
3+ years of progressive experience demonstrating strong architecture, programming, and engineering skills
Firm grasp of data structures and algorithms, with fluency in programming languages like Java, Python, and Scala
Strong SQL skills, including the ability to write complex queries
Strong experience with orchestration tools such as Airflow (see the DAG sketch below)
Demonstrated ability to lead, partner, and collaborate cross-functionally across many engineering organizations
Experience with streaming technologies such as Apache Spark, Kafka, and Flink
Backend experience including Apache Cassandra, MongoDB, and relational databases such as Oracle and PostgreSQL
Solid hands-on AWS/GCP experience (4+ years)
Strong communication and soft skills
Knowledge of and/or experience with containerized environments (Kubernetes, Docker)
Experience implementing and maintaining highly scalable microservices using REST, Spring Boot, and gRPC
Appetite for trying new things and building rapid POCs

Key Responsibilities:
Design, develop, and maintain scalable data pipelines to support data ingestion, processing, and storage
Implement data integration solutions to consolidate data from multiple sources into a centralized data warehouse or data lake
Collaborate with data scientists and analysts to understand data requirements and translate them into technical specifications
Ensure data quality and integrity by implementing robust data validation and cleansing processes
Optimize data pipelines for performance, scalability, and reliability
Develop and maintain ETL (Extract, Transform, Load) processes using tools such as Apache Spark, Apache NiFi, or similar technologies
Monitor and troubleshoot data pipeline issues, ensuring timely resolution and minimal downtime
Implement best practices for data management, security, and compliance
Document data engineering processes, workflows, and technical specifications
Stay up-to-date with industry trends and emerging technologies in data engineering and big data

Compensation: If you are the right fit, we believe in creating wealth for you. With an enviable 500 mn+ registered users, 25 mn+ merchants, and depth of data in our ecosystem, we are in a unique position to democratize credit for deserving consumers and merchants, and we are committed to it. India's largest digital lending story is brewing here. It's your opportunity to be a part of the story!
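
As referenced in the qualifications above, here is a minimal sketch of an Airflow daily ETL DAG; the DAG id, schedule, and task bodies are illustrative placeholders:

    from datetime import datetime

    from airflow import DAG
    from airflow.operators.python import PythonOperator

    # Placeholder task bodies; a real pipeline would pull from Kafka/S3,
    # transform with Spark, and load into the warehouse.
    def extract(): print("extracting raw orders")
    def transform(): print("transforming to curated schema")
    def load(): print("loading into the warehouse")

    with DAG(
        dag_id="orders_etl",            # hypothetical pipeline name
        start_date=datetime(2024, 1, 1),
        schedule_interval="@daily",
        catchup=False,
    ) as dag:
        extract_t = PythonOperator(task_id="extract", python_callable=extract)
        transform_t = PythonOperator(task_id="transform", python_callable=transform)
        load_t = PythonOperator(task_id="load", python_callable=load)

        extract_t >> transform_t >> load_t  # linear dependency chain

The >> operator declares task ordering; Airflow then handles scheduling, retries, and backfills around that dependency graph.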

Posted 22 hours ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team of solution architects, data scientists, engineers, product managers, and designers collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business.

Job Overview:
We are seeking a results-driven Delivery Manager with a strong technical background to lead enterprise-scale low-code product implementations. The ideal candidate will have hands-on experience with modern tech stacks and cloud platforms, and a deep understanding of delivering scalable, extensible, and integrated enterprise solutions. You will own the end-to-end delivery lifecycle, from scope definition to go-live, ensuring seamless execution across all implementation components, including solution setup, business logic enablement, system integration, data migration, and reporting. Your leadership will be critical in driving successful client onboarding and long-term platform adoption.

Required Skills & Experience:
Strong technical background with hands-on experience in Angular, Node.js, and PostgreSQL.
Experience with cloud platforms: AWS and Azure.
Familiarity with integration/orchestration tools such as n8n, WSO2, or similar.
Proven track record in low-code platform implementations across multiple clients or domains.
Solid understanding of Agile methodologies, especially Scrum, with expertise in using JIRA for delivery tracking.
Strong communication and stakeholder management skills, including working directly with clients and internal teams.
Experience managing delivery across functional areas such as integration, data migration, and reporting.

Preferred Qualifications:
Bachelor's or Master's in Computer Science, Engineering, or a related field.
Certified Scrum Master or equivalent Agile certifications are a plus.
Prior experience in GovTech, Health & Human Services, or similar domains is desirable.

Posted 22 hours ago

Apply

6.0 years

0 Lacs

India

On-site

We are seeking a skilled and proactive Platform Lead with strong Snowflake expertise and AWS cloud exposure to lead the implementation and operational excellence of a scalable, multitenant modern data platform for a leading US-based marketing agency serving nonprofit clients. This role requires hands-on experience in managing Snowflake environments, supporting data pipeline orchestration, enforcing platform-level standards, and ensuring observability, performance, and security across environments. You will collaborate with architects, engineers, and DevOps teams to operationalize the platform's design and drive its long-term stability and scalability in a cloud-native ecosystem.

Job Specific Duties & Responsibilities:
Lead the technical implementation and stability of the multitenant Snowflake data platform across dev, QA, and prod environments
Design and manage schema isolation, role-based access control (RBAC), masking policies, and cost-optimized Snowflake architecture for multiple nonprofit tenants (see the sketch below)
Implement and maintain CI/CD pipelines for dbt, Snowflake objects, and metadata-driven ingestion processes using GitHub Actions or similar tools
Develop and maintain automation accelerators for data ingestion, schema validation, error handling, and onboarding new clients at scale
Collaborate with architects and data engineers to ensure seamless integration with source CRMs, ByteSpree connectors, and downstream BI/reporting layers
Monitor and optimize performance of Snowflake workloads (e.g., query tuning, warehouse sizing, caching strategy) to ensure reliability and scalability
Establish and maintain observability and monitoring practices across data pipelines, ingestion jobs, and platform components (e.g., error tracking, data freshness, job status dashboards)
Manage infrastructure-as-code (IaC), configuration templates, and version control practices across the data stack
Ensure robust data validation, quality checks, and observability mechanisms are in place across all platform services
Support incident response, pipeline failures, and technical escalations in production, coordinating across engineering and client teams
Contribute to data governance compliance by implementing platform-level policies for PII, lineage tracking, and tenant-specific metadata tagging

Required Skills, Experience & Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, or a related technical field
6+ years of experience in data engineering or platform delivery, including 3+ years of hands-on Snowflake experience in production environments
Proven expertise in building and managing multitenant data platforms, including schema isolation, RBAC, and masking policies
Solid knowledge of CI/CD practices for data projects, with experience guiding pipeline implementations using tools like GitHub Actions
Hands-on experience with dbt, SQL, and metadata-driven pipeline design for large-scale ingestion and transformation workloads
Strong understanding of AWS cloud services relevant to data platforms (e.g., S3, IAM, Lambda, CloudWatch, Secrets Manager)
Experience optimizing Snowflake performance, including warehouse sizing, caching, and cost control strategies
Familiarity with setting up observability frameworks, monitoring tools, and data quality checks across complex pipeline ecosystems
Proficient in infrastructure-as-code (IaC) concepts and managing configuration/versioning across environments
Awareness of data governance principles, including lineage, PII handling, and tenant-specific metadata tagging
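
For concreteness, here is a minimal sketch, via the Snowflake Python connector, of the kind of tenant-scoped masking policy the role describes; the account, roles, schema, and table names are all hypothetical placeholders:

    import snowflake.connector

    # Connection parameters are placeholders; real values would come from
    # Secrets Manager rather than being hard-coded.
    conn = snowflake.connector.connect(
        account="xy12345",
        user="platform_admin",
        password="<from-secrets-manager>",
        role="SECURITYADMIN",
        warehouse="ADMIN_WH",
    )
    cur = conn.cursor()

    # Mask email PII for every role except the tenant's own analyst role.
    cur.execute("""
        CREATE MASKING POLICY IF NOT EXISTS tenant_a.policies.email_mask
        AS (val STRING) RETURNS STRING ->
        CASE WHEN CURRENT_ROLE() = 'TENANT_A_ANALYST' THEN val
             ELSE '***MASKED***' END
    """)

    # Attach the policy to the tenant's contact table.
    cur.execute("""
        ALTER TABLE tenant_a.crm.contacts
        MODIFY COLUMN email SET MASKING POLICY tenant_a.policies.email_mask
    """)

In a real platform, statements like these would be templated per tenant and applied through the CI/CD pipeline rather than run ad hoc.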

Posted 22 hours ago

Apply

1.0 - 3.0 years

0 Lacs

India

On-site

We’re looking for a hands-on, product-minded full-stack developer with a strong interest in AI and automation. This role is ideal for someone who loves to build, experiment, and bring ideas to life fast. You'll work closely with the founding team to prototype AI-powered tools and products from scratch. This is a highly AI-focused role where you will build tools powered by LLMs, workflow automation, and real-time data intelligence: not just web apps, but AI-first products.

Location: Kochi, Bangalore | Years of experience: 1-3 Years

Hire22.ai connects top talent with executive roles anonymously and confidentially, transforming hiring through an AI-first, instant CoNCT model. Companies get interview-ready candidates in just 22 hours. No telecalling, no spam, no manual filtering.

Responsibilities
Build and experiment with AI-first features powered by LLMs, embeddings, vector databases, and prompt-based workflows
Fine-tune or adapt AI/ML models for specific use cases such as job matching, summarization, scoring, and classification
Integrate and orchestrate AI capabilities using tools like Vertex AI, LangChain, Cursor, n8n, Flowise, etc.
Work with vector databases and implement retrieval-augmented generation (RAG) patterns to build intelligent, context-aware AI applications (see the sketch below)
Design, build, and maintain full-stack web applications using Next.js and Python as supporting layers around core AI functionality
Rapidly prototype ideas, test hypotheses, and iterate fast based on feedback
Collaborate with product, design, and founders to transform internal ideas into deployable, AI-powered tools
Build internal AI agents, assistants, or copilots
Build tools for automated decision-making, resume/job matching, or workflow automation

Skills
Full-Stack Proficiency: Strong command of JavaScript/TypeScript with experience in modern frameworks like React or Next.js. Back-end experience with Python (FastAPI) or Go.
Database Fluent: Comfortable working with both SQL (MySQL) and NoSQL databases (MongoDB, Redis), with good data modeling instincts.
AI/ML First Mindset: Hands-on with integrating and optimizing AI models using frameworks like OpenAI, Hugging Face, LangChain, or TensorFlow. You understand LLM architecture, prompt engineering, embeddings, and AI orchestration tools. You've ideally built or experimented with AI-driven applications beyond just using APIs.
Builder Mentality: Passionate about product thinking and going from zero to one. You take ownership, work independently, and execute quickly without waiting for perfect clarity.
Problem Solver: You break down complex problems, learn fast, and deliver clean, efficient solutions. You value both speed and quality.
Communicator & Collaborator: You express your ideas clearly, ask good questions, and keep teams in sync by sharing progress and blockers openly.
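
A minimal sketch of the RAG pattern referenced above, using the OpenAI Python client; the toy documents, model names, and matching use case are illustrative assumptions:

    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    # Toy corpus; a real system would pull these from a vector database.
    docs = [
        "Candidate A: 5 years Python, built ML pipelines and LLM tooling.",
        "Candidate B: frontend specialist, React and design systems.",
    ]

    def embed(texts):
        resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
        return np.array([d.embedding for d in resp.data])

    doc_vecs = embed(docs)
    query = "Which candidate fits a backend AI role?"
    q_vec = embed([query])[0]

    # Retrieval step: cosine similarity picks the closest document as context.
    scores = doc_vecs @ q_vec / (np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(q_vec))
    context = docs[int(np.argmax(scores))]

    # Generation step: answer grounded in the retrieved context.
    answer = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"Context: {context}\n\nQuestion: {query}"}],
    )
    print(answer.choices[0].message.content)

The same two-step shape (embed-and-retrieve, then generate with context) scales up by swapping the in-memory similarity search for a vector database.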

Posted 22 hours ago

Apply

5.0 years

0 Lacs

Kochi, Kerala, India

Remote

ClockHash Technologies is looking for an experienced Senior Backend Developer with strong expertise in Python. You will be part of a dedicated R&D team from France, working on cutting-edge solutions that drive innovation in network management systems. Our dynamic team thrives on collaboration, autonomy, and continuous growth.

Education: Bachelor’s degree in a relevant field (ICT, Computer Engineering, Computer Science, or Information Systems preferred).
Experience: Minimum 5+ years working with modern WebGUI technologies based on Python.
Work Location: Bangalore
Work mode: Hybrid, 2 days per week in the office

Preferred Skills:
Primary Technologies: Strong expertise in Python with a deep understanding of backend development, API design, and system architecture.
Microservices & Cloud: Hands-on experience with microservices architecture, container-based deployments, and RESTful APIs.
Deployment & Orchestration: Proficiency in using Helm for Kubernetes deployments.
Operating Systems: Strong knowledge of Linux concepts.
Database: Experience with MySQL databases.
Soft Skills: Autonomous, proactive, and curious personality. Strong communication and collaboration skills.
Language: Fluency in English, both oral and written.

Key Responsibilities
Design, develop, and maintain web applications for large-scale data handling.
Ensure application performance, security, and reliability.
Develop and deploy microservices-based applications using containerization technologies.
Ensure proper deployment of container-based applications with Docker Swarm or Kubernetes, providing necessary artifacts and documentation, and manage Kubernetes deployments using Helm.
Work with RESTful APIs for seamless system integrations (see the FastAPI sketch below).
Maintain and optimize MySQL database solutions.
Participate in Agile processes, including sprint planning, code reviews, and daily stand-ups.
Troubleshoot, debug, and enhance existing applications.

Nice to Have
Experience with modern Web UI frameworks and libraries.
Familiarity with Laravel and other MVC frameworks for structured web development.
Exposure to CI/CD pipelines and DevOps practices.
Experience with cloud platforms like AWS, GCP, or Azure.
Knowledge of message queue systems like RabbitMQ or Kafka.
Knowledge of front-end technologies such as React or Vue.js.
Networking: Familiarity with networking technologies is appreciated.

What We Offer
Friendly environment with good work-life balance.
Opportunity to grow and visibility for your work.
Health Insurance.
Work from Home support (covering Internet Bill, Gym, or Recreational activities costs).
Educational allowances (Certification fees reimbursement).
Rich engagement culture with regular team events.

ClockHash Technologies is an equal-opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, pregnancy, age, marital status, disability, or status as a protected veteran.

Please note: The initial screening call will be conducted by our AI assistant.
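
A minimal FastAPI sketch of the kind of containerized Python REST service this role describes; the service and endpoint names are illustrative, with /healthz as a typical Kubernetes probe target:

    from fastapi import FastAPI

    app = FastAPI(title="inventory-service")  # hypothetical service name

    @app.get("/healthz")
    def health() -> dict:
        # Dependency-free liveness endpoint for a Kubernetes probe.
        return {"status": "ok"}

    @app.get("/items/{item_id}")
    def read_item(item_id: int) -> dict:
        # Placeholder handler; a real service would query MySQL here.
        return {"item_id": item_id}

    # Run locally with: uvicorn main:app --reload

Packaged in a container image and deployed via a Helm chart, the /healthz route would back the chart's liveness and readiness probe configuration.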

Posted 22 hours ago

Apply