0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Key Responsibilities
- Design and develop scalable ETL pipelines using Cloud Functions, Cloud Dataproc (Spark), and BigQuery as the central data warehouse for large-scale batch and transformation workloads.
- Implement efficient data modeling techniques in BigQuery (including star/snowflake schemas, partitioning, and clustering) to support high-performance analytics and reduce query costs.
- Build end-to-end ingestion frameworks leveraging Cloud Pub/Sub and Cloud Functions for real-time and event-driven data capture.
- Use Apache Airflow (Cloud Composer) to orchestrate complex data workflows and manage dependencies.
- Apply Cloud Data Fusion and Datastream selectively to integrate specific sources (e.g., databases and legacy systems) into the pipeline.
- Develop strong backtracking and troubleshooting workflows to quickly identify data issues, job failures, and pipeline bottlenecks, ensuring consistent data delivery and SLA compliance.
- Integrate robust monitoring, alerting, and logging to ensure data quality, integrity, and observability.
Tech stack
- GCP: BigQuery, Cloud Functions, Cloud Dataproc (Spark), Pub/Sub, Data Fusion, Datastream
- Orchestration: Apache Airflow (Cloud Composer)
- Languages: Python, SQL, PySpark
- Concepts: Data Modeling, ETL/ELT, Streaming & Batch Processing, Schema Management, Monitoring & Logging
Some of the most important data sources (candidates need to know the ingestion technique for each):
- CRM systems (cloud-based and internal)
- Salesforce
- Teradata
- MySQL
- APIs
- Other third-party and internal operational systems
Skills: etl/elt, cloud data fusion, schema management, sql, pyspark, cloud dataproc (spark), monitoring & logging, data modeling, bigquery, etl, cloud pub/sub, python, gcp, streaming & batch processing, datastream, cloud functions, spark, apache airflow (cloud composer)
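For readers who want a concrete picture of the orchestration and data-modeling work described above, here is a minimal, illustrative sketch of a Cloud Composer (Airflow 2.x) DAG that rebuilds a partitioned, clustered BigQuery table. The dataset, table, and column names are hypothetical and are not taken from this posting.

```python
from datetime import datetime

from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Hypothetical dataset/table/column names, for illustration only.
BUILD_FACT_ORDERS_SQL = """
CREATE OR REPLACE TABLE analytics.fact_orders
PARTITION BY DATE(order_ts)
CLUSTER BY customer_id AS
SELECT order_id, customer_id, order_ts, amount
FROM staging.orders_raw
"""

with DAG(
    dag_id="daily_orders_to_bigquery",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    build_fact_orders = BigQueryInsertJobOperator(
        task_id="build_fact_orders",
        # Run the DDL/DML as a standard (non-legacy) SQL job in BigQuery.
        configuration={"query": {"query": BUILD_FACT_ORDERS_SQL, "useLegacySql": False}},
    )
```

Partitioning on the event timestamp and clustering on a frequently filtered key is the kind of BigQuery modeling the posting refers to when it mentions reducing query costs.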
Posted 1 day ago
2.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Summary
The embedded software quality test engineer is part of a research and development team responsible for designing and testing software for industrial control applications, primarily for the electrical transmission and distribution industry. Product testing includes a variety of automated, manual, and simulation procedures designed to validate the quality and performance of the products in line with design and industry requirements.
Job Description
Essential Responsibilities
- Be part of an agile development team that develops embedded software applications.
- Familiarize yourself with GE controllers and develop a good understanding of their functionality.
- Collaborate with development and system teams to test containerized microservices (Docker, Kubernetes) in complex simulation environments.
- Own and execute test cases for each requirement as part of an agile iteration schedule.
- Identify and ensure requirements traceability to test cases.
- Identify and report defects detected during testing.
- Assist in prioritization of reported defects and work with software developers to facilitate timely closure.
- Verify resolution of resolved defects.
- Record and report test results in an effective manner.
- Design functional verification test plans to validate performance, boundary, and negative testing.
Qualifications / Requirements
- Bachelor's degree in STEM.
- Minimum 2 years of experience in software development and test, SCADA communications, or system integration for control systems.
- Knowledge of basic electronic engineering fundamentals, electrical protection, substation automation, and SCADA.
- Ability to learn and apply test tools such as protocol analyzers, software simulation applications, and device configuration tools.
- Able to work both as part of a team and independently, utilizing agile execution tools.
- Familiarity with substation automation and SCADA applications and protocols.
- Understanding of utility/SCADA communication protocol concepts, networking, and the interaction between Intelligent Electronic Devices.
- Hands-on experience with systems designed around industrial communication protocols, technologies, and standards such as DNP3, Modbus, IEC 60870, IEC 61850, IEEE 1588, Ethernet communications, and cyber security.
- Hands-on experience with container technologies (e.g., Docker) and orchestration tools (e.g., Kubernetes).
Desired Characteristics
- Capacity to listen, understand, and synthesize end-user requirements in a multi-cultural environment.
- Organized; able to multi-task and stay organized.
- High energy, self-starter, with a proven track record of delivering results; establishes a sense of urgency to complete tasks in an efficient, timely, and effective manner.
- Strong team player, able to foster good working relationships with other functional areas.
- Familiar with fundamental program tools and processes.
- Strong problem-solving skills.
- Ability to work independently.
- Strong oral and written communication skills.
- Familiarity with substation automation and SCADA applications and protocols will be an asset.
- Understanding of utility/SCADA communication protocol concepts, networking, and the interaction between Intelligent Electronic Devices will be an asset.
- Experience with industrial applications will be an asset.
- Experience in validating and troubleshooting software within containerized or virtualized environments will be an asset.
Additional Information
Relocation Assistance Provided: Yes
Posted 1 day ago
1.0 - 5.0 years
0 Lacs
India
On-site
We are hiring on behalf of a leading Indian unicorn looking for a talented DevOps Engineer with 1-5 years of experience. In this role, you will be the backbone of the engineering team, responsible for building and maintaining a scalable, reliable, and secure cloud infrastructure. If you are passionate about automation, infrastructure as code, and building robust CI/CD pipelines, this is your opportunity to make a massive impact.
What You'll Do (Your Responsibilities):
- Cloud Infrastructure: Design, build, and manage scalable and secure cloud infrastructure on AWS, GCP, or Azure.
- Automation: Implement and maintain robust CI/CD pipelines using tools like Jenkins, GitLab CI, or CircleCI to automate testing and deployment processes.
- Containerization: Utilize containerization and orchestration tools (Docker, Kubernetes) to manage microservices architecture effectively.
- Infrastructure as Code (IaC): Champion and implement IaC practices using tools like Terraform or CloudFormation to ensure infrastructure is versioned and reproducible.
- Monitoring & Reliability: Monitor application performance and infrastructure health, ensuring high availability, reliability, and rapid incident response.
What We're Looking For (Your Qualifications):
- Experience: 1-5 years of hands-on experience in a DevOps, SRE, or Cloud Engineering role.
- Core Skills: Strong experience with at least one major cloud provider (AWS preferred), Docker, Kubernetes, and CI/CD tools.
- IaC: Proficiency with Infrastructure as Code tools like Terraform or CloudFormation.
- Scripting: Strong scripting skills in languages like Bash, Python, or Go.
- Systems Knowledge: Solid understanding of Linux/Unix administration, networking concepts, and security best practices.
- Proactive Mindset: A proactive approach to identifying and resolving potential issues before they impact production.
Our Unique Application Process:
To fast-track your application directly to hiring managers, we use a two-step process:
1. Submit Your Resume: on the portal.
2. AI-Powered Interview: You will be invited to a short, recorded video interview. This is your chance to showcase your skills and personality beyond your resume.
Posted 1 day ago
4.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary Position Summary Job title: Azure Cloud Security Engineer (Senior Consultant) About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk Management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and RPA to solve Deloitte’s clients ‘most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions. The Team Cyber & Strategic Risk We help organizations create a cyber-minded culture, reimagine risk to uncover strategic opportunities, and become faster, more innovative, and more resilient in the face of ever-changing threats. We provide intelligence and acuity that dynamically reframes risk, transcending a manual, reactive paradigm. The cyber risk services—Identity & access management (IAM) practice helps organizations in designing, developing, and implementing industry-leading IAM solutions to protect their information and confidential data, as well as help them build their businesses and supporting technologies to be more secure, vigilant, and resilient. The IAM team delivers service to clients through following key areas: User provisioning Access certification Access management and federation Entitlements management Work you’ll do As a Cloud Security Engineer, you will be at the front lines with our clients supporting them with their Cloud Cyber Risk needs: Executing on cloud security engagements across the lifecycle – assessment, strategy, design, implementation, and operations. Performing technical health checks for cloud platforms/environments prior to broader deployments. Assisting in the selection and tailoring of approaches, methods and tools to support cloud adoption, including for migration of existing workloads to a cloud vendor. Designing and developing cloud-specific security policies, standards and procedures. e.g., user account management (SSO, SAML), password/key management, tenant management, firewall management, virtual network access controls, VPN/SSL/IPSec, security incident and event management (SIEM), data protection (DLP, encryption). 
Documenting all technical issues, analysis, client communication, and resolution. Supporting proof of concept and production deployments of cloud technologies. Assisting clients with transitions to cloud via tenant setup, log processing setup, policy configuration, agent deployment, and reporting. Operating across both technical and management leadership capacities. Providing internal technical training to Advisory personnel as needed. Performing cloud orchestration and automation (Continuous Integration and Continuous Delivery (CI/CD)) in single and multi-tenant environments using tools like Terraform, Ansible, Puppet, Chef, Salt etc. Experience with multiple security technologies like CSPM, CWPP, WAF, CASB, IAM, SIEM, etc. Required Skills 4+ years of information technology and/or information security operations experience. Ideally 2+ years of working with different Cloud platforms (SaaS, PaaS, and IaaS) and environments (Public, Private, Hybrid). Familiarity with the following will be considered a plus: Solid understanding of enterprise-level directory and system configuration services (Active Directory, SCCM, LDAP, Exchange, SharePoint, M365) and how these integrate with cloud platforms Solid understanding of cloud security industry standards such as Cloud Security Alliance (CSA), ISO/IEC 27017 and NIST CSF and how they help in compliance for cloud providers and cloud customers Hands-on technical experience implementing security solutions for Microsoft Azure Knowledge of cloud orchestration and automation (Continuous Integration and Continuous Delivery (CI/CD)) in single and multi-tenant environments using tools like Terraform, Ansible, Puppet, Chef, Salt etc. Knowledge of cloud access security broker (CASB) and cloud workload protection platform (CWPP) technologies Solid understanding of OSI Model and TCP/IP protocol suite and network segmentation principles and how these can be applied on cloud platforms Preferred: Previous Consulting or Big 4 experience. Hands-on experience with Azure, plus any CASB or CWPP product or service. Understanding of Infrastructure-as-Code, and ability to create scripts using Terraform, ARM, Ansible etc. Knowledge of scripting languages (PowerShell, JSON, .NET, Python, Javascript etc.) Qualification Bachelor’s Degree required.Ideally in Computer Science, Cyber Security, Information Security, Engineering, Information Technology. How You’ll Grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India . Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. 
We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. Deloitte is committed to achieving diversity within its workforce, and encourages all qualified applicants to apply, irrespective of gender, age, sexual orientation, disability, culture, religious and ethnic background. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with Deloitte’s clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiting tips Finding the right job and preparing for the recruitment process can be tricky. Check out tips from our Deloitte recruiting professionals to set yourself up for success. Check out recruiting tips from Deloitte recruiters . Benefits We believe that to be an undisputed leader in professional services, we should equip you with the resources that can make a positive impact on your well-being journey. Our vision is to create a leadership culture focused on the development and well-being of our people. Here are some of our benefits and programs to support you and your family’s well-being needs. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you . Our people and culture Our people and our culture make Deloitte a place where leaders thrive. Get an inside look at the rich diversity of background, education, and experiences of our people. What impact will you make? Check out our professionals’ career journeys and be inspired by their stories. Professional development You want to make an impact. And we want you to make it. We can help you do that by providing you the culture, training, resources, and opportunities to help you grow and succeed as a professional. Learn more about our commitment to developing our people . © 2023. See Terms of Use for more information. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India. Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that empowers our professionals to thrive mentally, physically, and financially, and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 306468
Posted 1 day ago
7.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The Collaboration Solution Engineer works in close coordination with the global engineering team and is responsible for managing and maintaining the O365 infrastructure and associated operational services. The position requires a candidate who works on Collaboration-domain projects as an L3 engineer and is part of the "Scripting Factory" team, integrating new and ongoing scripting/automation initiatives. An ideal candidate will ensure that the designed solution and its evolutions meet functional and non-functional requirements such as availability, performance, security, and maintainability. You will work in collaboration with the Solution Engineering team to ensure the quality of the developments and that the script delivery/release pipeline is as efficient as possible. You will also participate in the technological development of the platform by integrating the latest innovations into the automations that make it up. Additional responsibilities include conducting studies of system usage, making recommendations for improvements to the usability of automated tools, and identifying opportunities for increased adoption of orchestration technologies.
Skills
7+ years of design and project implementation experience in Office 365 using enterprise systems management tools. The candidate needs to have strong knowledge across the following skills:
Mandatory Skills:
- Microsoft PowerShell, Power BI, Power Automate
- Microsoft O365 Tenant Administration Portal
- Exchange Online
- SharePoint
- OneDrive for Business
- Teams
- Yammer
Preferred Skills: Mimecast, AvePoint
Shifts / Work timings: European shift (Summer: 11:30am to 8:30pm; Winter: 12:30pm to 9:30pm)
Additional Technical Skills
Familiarity or experience with workplace technologies:
- Messaging technologies: Microsoft Exchange, Office, Intune, Azure Active Directory
- O365 collaboration and end-user productivity: Microsoft Office, SharePoint, Yammer, Delve, OneDrive
Special Skills / Certifications / Requirements (if any)
- Possession of, or working towards, Microsoft Office 365 certifications
- ITIL V3/V4 Foundation
Expectations
- 3+ years of advanced experience with Microsoft PowerShell
- Experience in the areas of solution engineering and design, asset management, change management, capacity planning, monitoring and alerting, system security, system upgrades, patch management, and enterprise backup and recovery
- Well versed in the best ways to technically manage and maintain O365 environments
- Experience deploying Office 365 on Windows 10 and above across an enterprise
- Design and deployment of Microsoft cloud technologies, i.e. Azure AD, EMS, RMS, and OMS
- Proven track record of automating, deploying, and maintaining enterprise-level solution environments on secure O365 platforms
- Good development experience in the MS Power suite (PowerShell, PowerApps, Power Automate, Power BI), GitLab, Jenkins, and application platform deployment automation
- Assist with Scrum team estimation of stories and sizing of effort, including representation of the test automation and engagement with Continuous Integration (CI) required
- Establish, grow, and drive strategic relationships with internal stakeholders
- Map business scenarios to technology solutions and manage technical deployment challenges
- Enable technical features that drive consumption of the deployed service
Duties and Responsibilities
- Develop, document, and enforce the standards, security procedures, and controls for access to ensure the integrity of the Office 365/Exchange-related systems.
- Maintain uptime through proactive management and monitoring to ensure environment health and minimize disruptions to mail flow.
- Ensure awareness of, and support adoption of, the M365 roadmap, new tools and capabilities, and their potential use within Michelin.
- Translate technical issues into understandable business language for end users.
- Lead initiatives for researching and deploying new applications.
- Deploy advanced Microsoft 365 services, including Enterprise Mobility.
- Provide guidance and leadership as a senior member of the team.
- Assist in managing support services and related deliverables.
- Excellent decision-making and critical-thinking skills.
- Excellent organizational and communication skills are required.
- Meet deadlines on projects and assignments.
- Learn and support new technologies and train others.
- Be the code expert and technical reference of the team and propose new script solutions to meet business needs.
- Identify opportunities to innovate, extend, and enhance engineering activities wherever possible.
- Maintain the scripts and the automation platform in working order, or quickly restore them to working order in the event of a failure.
- Work closely with partners and internal teams to ensure that the platform meets security, SLA, and performance requirements.
- Write, update, and use documentation.
- Debug complex problems and create solid solutions.
- Sponsor good software development practices, including adherence to Michelin's chosen software development methodology (Agile) and standard setting.
- Continuously test the resiliency of scripts and infrastructure under various error conditions.
- Work in a team environment with a can-do attitude, capable of overcoming difficult challenges.
- Troubleshoot and analyze technical problems to prevent future occurrences.
Key Expected Achievements
- The roadmap of the expertise domain is created and communicated to stakeholders.
- The standards and framework are built, deployed, and supported.
- Checking actions and capitalization of good practices are realized.
- Build and monitor the obsolescence treatment plan of the expertise domain.
- Provide necessary assistance to project or support teams.
Soft Skills
- Strong team player
- Ability to work in an Agile framework
- An excellent reputation for support to end users and leading teams
- Energetic, highly motivated self-starter with a positive attitude
- Detail oriented; able to clearly communicate ideas and work as part of a team
- Good written and verbal communication skills to coordinate tasks with other teams
- Ability to multi-task and handle multiple priorities
- Strong interpersonal skills
- Strong attention to detail
- Ability to quickly adapt to changes
- Enthusiastic, cooperative, and positive behavior
- Creative, thinking outside of the box, eager to learn, and truly committed to the success of the company
Posted 1 day ago
5.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Senior AI/ML Engineer Company Overview At Jacav, we help businesses streamline and manage their DevOps and Microservices operations with ease. Our product, Cloud Dongle, simplifies deployment, monitoring, and management across cloud environments. Designed to support modern software development teams, it automates workflows, improves visibility, and enables faster, more reliable releases. Jacav offers tools for CI/CD, infrastructure-as-code, and real-time monitoring—all in one place. Join the growing number of teams using Jacav to take control of their DevOps and Microservices lifecycle. Position Overview We are seeking a Senior AI/ML Engineer to join our innovative team and help drive the next generation of intelligent DevOps and Microservices solutions. This role offers an exciting opportunity to work at the intersection of artificial intelligence and cloud infrastructure, developing cutting-edge AI capabilities that enhance our Cloud Dongle platform. Experience Required 5-10 years of professional experience in AI/ML engineering and software development Key Responsibilities • AI/ML Model Development : Design, develop, and deploy machine learning models to optimize DevOps workflows and enhance microservices management • Generative AI Implementation : Build and integrate generative AI solutions to automate complex operational tasks and improve user experiences • Agentic AI Systems : Develop autonomous AI agents that can make intelligent decisions within DevOps pipelines and infrastructure management • Cloud Infrastructure : Architect and implement scalable AI/ML solutions on AWS cloud platforms • API Development : Create robust, high-performance APIs using FastAPI to serve AI models and integrate with existing systems • Model Training & Optimization : Utilize TensorFlow and other frameworks to train, fine-tune, and optimize machine learning models • LLM Integration : Work with leading language models including OpenAI and Claude to enhance platform capabilities • Cross-functional Collaboration : Partner with DevOps, backend, and product teams to integrate AI solutions seamlessly into the Cloud Dongle ecosystem Required Technical Skills • Programming Languages : Advanced proficiency in Python • Machine Learning Frameworks : Strong experience with TensorFlow and modern ML libraries • Generative AI : Hands-on experience with generative AI technologies and applications • Cloud Platforms : Extensive experience with AWS services and cloud-native AI/ML solutions • Agentic AI : Experience developing autonomous AI systems and intelligent agents • LLM Integration : Practical experience with OpenAI, Claude, and other large language models • API Development : Proficiency in FastAPI and RESTful API design • DevOps Integration : Understanding of CI/CD pipelines, containerization, and infrastructure automation Preferred Qualifications • Experience with MLOps practices and model deployment at scale • Knowledge of microservices architecture and distributed systems • Familiarity with Kubernetes and container orchestration • Experience with real-time data processing and streaming • Background in infrastructure monitoring and observability • Understanding of cloud security best practices for AI/ML workloads What We Offer • Opportunity to work on cutting-edge AI/ML technologies in the DevOps space • Collaborative environment with cross-functional teams • Chance to impact the future of cloud infrastructure management • Competitive compensation and benefits package • Professional development and learning 
opportunities
Join Jacav and help us revolutionize how teams manage their DevOps and Microservices operations through the power of artificial intelligence!
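As a rough illustration of the "create robust, high-performance APIs using FastAPI to serve AI models" responsibility above, the sketch below wraps a TensorFlow model behind a small prediction endpoint. The model path and request schema are placeholder assumptions, not details from this posting.

```python
import numpy as np
import tensorflow as tf
from fastapi import FastAPI
from pydantic import BaseModel

MODEL_PATH = "models/demo_classifier"  # hypothetical path to a saved Keras model
model = tf.keras.models.load_model(MODEL_PATH)

app = FastAPI()

class PredictRequest(BaseModel):
    features: list[float]

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # Wrap the single example in a batch dimension before calling the model.
    batch = np.array([req.features], dtype=np.float32)
    scores = model.predict(batch)
    return {"scores": scores[0].tolist()}
```

Assuming the model exists at that path and the file is saved as main.py, it could be served locally with `uvicorn main:app`.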
Posted 1 day ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Role: Azure Data Engineer
Location: Chennai, Gurugram (onsite 3 days a week)
Shift timing: 2pm to 11pm
Experience: 3+ years
Notice period: Immediate or 15 days (please do not apply if your notice period is more than 30 days)
Required Skills and Qualifications:
Educational Background: Bachelor's or Master's degree in Computer Science, Information Technology, Data Science, or a related field. Certifications in Databricks, Azure, or related technologies are a plus.
Technical Skills:
- Proficiency in SQL for complex queries, database design, and optimization.
- Strong experience with PySpark for data transformation and processing.
- Hands-on experience with Databricks for building and managing big data solutions.
- Familiarity with cloud platforms like Azure.
- Knowledge of data warehousing concepts and tools (e.g., Snowflake, Redshift).
- Experience with data versioning and orchestration tools like Git, Airflow, or Dagster.
- Solid understanding of Big Data ecosystems (Hadoop, Hive, etc.).
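To make the PySpark-on-Databricks expectation concrete, here is a small, hypothetical aggregation job of the kind such a role typically involves. The Delta paths and column names are invented for illustration and are not part of this job description.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_agg").getOrCreate()

# Hypothetical Delta locations; on Databricks these would usually be managed tables.
orders = spark.read.format("delta").load("/mnt/raw/orders")

daily = (
    orders
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("order_date", "region")
    .agg(
        F.sum("amount").alias("total_amount"),
        F.countDistinct("order_id").alias("order_count"),
    )
)

# Overwrite the curated table, partitioned by date for cheaper downstream queries.
daily.write.format("delta").mode("overwrite").partitionBy("order_date").save("/mnt/curated/orders_daily")
```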
Posted 1 day ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
1. Programming Languages & Frameworks (Java 18, Spring Boot) 2. API Development (RESTful APIs, GraphQL, OpenAPI/Swagger) 3. Databases & ORM (PostgreSQL, MySQL, MongoDB, Hibernate, JPA) 4. CI/CD Pipelines (Jenkins, GitLab CI/CD, GitHub Actions) 5. Containerization & Orchestration (Docker, Kubernetes) 6. Cloud Platforms (Azure) 7. Monitoring & Logging (Prometheus, Grafana, ELK Stack, Splunk) 8. Testing Frameworks (JUnit, TestNG, Mockito, WireMock) 9. Messaging & Integration (Kafka, REST, SOAP) 10. Security & Authentication (OAuth2, JWT, Spring Security)
Posted 1 day ago
8.0 years
0 Lacs
India
Remote
Compensation: Up to $300,000 USD annually + Equity Type: Full-time | Remote A cutting-edge AI company is seeking a Principal Python Backend Engineer to lead backend architecture for distributed AI systems. You’ll design scalable infrastructure supporting real-time inference, model management, and AI application workflows. Responsibilities: Architect and scale distributed backend systems for AI platforms Build performant APIs and orchestration layers powering ML pipelines Lead and mentor a team of backend engineers Collaborate closely with ML, data, and product teams Requirements: 8+ years of backend experience in Python (FastAPI, Django, Flask) Strong expertise in distributed systems, async processing, and system performance Deep familiarity with PostgreSQL, Redis, Celery, and cloud-native architecture Bonus: Background in AI infra, LLM ops, or vector DBs
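As an illustrative sketch of the "distributed systems, async processing" requirement, the snippet below uses asyncio and httpx to fan a request out to several hypothetical model-inference endpoints concurrently; the URLs are assumptions, not part of the role description.

```python
import asyncio

import httpx

# Hypothetical internal inference endpoints; a real system would use service discovery.
ENDPOINTS = [
    "http://inference-a.internal/predict",
    "http://inference-b.internal/predict",
]

async def call_endpoint(client: httpx.AsyncClient, url: str, payload: dict) -> dict:
    response = await client.post(url, json=payload, timeout=10.0)
    response.raise_for_status()
    return response.json()

async def fan_out(payload: dict) -> list:
    async with httpx.AsyncClient() as client:
        tasks = [call_endpoint(client, url, payload) for url in ENDPOINTS]
        # return_exceptions=True keeps one failing backend from sinking the whole request.
        return await asyncio.gather(*tasks, return_exceptions=True)

if __name__ == "__main__":
    print(asyncio.run(fan_out({"text": "hello"})))
```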
Posted 1 day ago
5.0 years
0 Lacs
India
Remote
Compensation: Up to $300,000 USD annually + Equity Type: Full-time | Remote We're looking for a Senior Python Backend Engineer to help build infrastructure powering real-time AI workflows and ML pipelines. You’ll work on high-availability systems and collaborate with a global team of engineers and researchers. Responsibilities: Build and optimize backend APIs and services for AI products Integrate with task queues, model inference APIs, and vector search systems Ensure system reliability and scalability across deployments Requirements: 5+ years of Python backend development Experience with FastAPI or Django, PostgreSQL, Celery, Redis Familiarity with LLM infrastructure, data orchestration, or AI tools is a big plus Strong problem-solving, documentation, and testing practices
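Because the requirements call out FastAPI, Celery, and Redis together, here is a minimal, hypothetical sketch of the pattern those tools usually imply: an API endpoint that enqueues inference work on a Redis-backed Celery queue and a second endpoint for polling the result. The broker URLs and the task body are placeholders.

```python
from celery import Celery
from fastapi import FastAPI
from pydantic import BaseModel

# Hypothetical Redis broker/backend URLs.
celery_app = Celery(
    "tasks",
    broker="redis://localhost:6379/0",
    backend="redis://localhost:6379/1",
)

@celery_app.task
def run_inference(prompt: str) -> str:
    # Placeholder: a real worker would call the model-serving layer here.
    return f"echo: {prompt}"

app = FastAPI()

class InferenceRequest(BaseModel):
    prompt: str

@app.post("/infer")
def submit(req: InferenceRequest) -> dict:
    task = run_inference.delay(req.prompt)  # hand the work to a Celery worker
    return {"task_id": task.id}

@app.get("/result/{task_id}")
def result(task_id: str) -> dict:
    res = celery_app.AsyncResult(task_id)
    return {"status": res.status, "result": res.result if res.ready() else None}
```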
Posted 1 day ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
What you'll do
- Manage system uptime across cloud-native (AWS, GCP) and hybrid architectures.
- Build infrastructure-as-code (IaC) patterns that meet security and engineering standards using one or more technologies (Terraform, scripting with cloud CLIs, and programming with cloud SDKs).
- Build CI/CD pipelines for build, test, and deployment of application and cloud architecture patterns, using platform (Jenkins) and cloud-native toolchains.
- Build automated tooling to deploy service requests and push changes into production.
- Build runbooks that are comprehensive and detailed enough to detect, remediate, and restore services.
- Solve problems and triage complex distributed-architecture service maps.
- Be on call for high-severity application incidents and improve runbooks to reduce MTTR.
- Lead blameless availability postmortems and own the call to action to remediate recurrences.
What experience you need
- BS degree in Computer Science or a related technical field involving coding (e.g., physics or mathematics), or equivalent job experience.
- 5-7 years of experience in software engineering, systems administration, database administration, and networking.
- 2+ years of experience developing and/or administering software in a public cloud.
- Cloud certification strongly preferred.
- Proficiency with continuous integration and continuous delivery tooling and practices.
- System administration skills, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.).
- Demonstrable cross-functional knowledge of systems, storage, networking, security, and databases.
- Experience in languages such as Python, Bash, Java, Go, JavaScript, and/or Node.js.
- Experience monitoring infrastructure and application uptime and availability to ensure functional and performance objectives.
What could set you apart
- Expertise designing, analyzing, and troubleshooting large-scale distributed systems.
- A systems problem-solving approach, coupled with strong communication skills and a sense of ownership and drive.
- Kubernetes (CKA, CKAD) or cloud certifications.
- A passion for automation with a desire to eliminate toil whenever possible.
- You have built software or maintained systems in a highly secure, regulated, or compliant industry.
- You thrive in, and have experience and passion for, working within a DevOps culture and as part of a team.
- BS in Computer Science or a related field.
- 2+ years of experience developing and/or administering software in public cloud.
- 5+ years of programming experience (Python, Bash/shell script, Java, Go, etc.).
- 3+ years of experience monitoring infrastructure and application performance.
- 5+ years of system administration experience, including automation and orchestration of Linux/Windows using Terraform, Chef, Ansible, and/or containers (Docker, Kubernetes, etc.).
- 5+ years of experience working with continuous integration and continuous delivery tooling and practices.
Role focus areas
- Kubernetes: Design, deploy, and manage production-ready Kubernetes clusters.
- Cloud Infrastructure: Build and maintain scalable infrastructure on GCP using tools like Terraform.
- Performance: Identify and resolve performance bottlenecks in applications and infrastructure.
- Observability: Implement monitoring and logging to proactively detect and resolve issues.
- Incident Response: Participate in on-call rotations, troubleshooting and resolving production incidents.
- Collaboration: Promote reliability best practices and ensure smooth deployments.
- Automation: Build CI/CD pipelines, automated tooling, and runbooks (see the sketch below).
- Problem Solving: Triage complex issues, lead blameless postmortems, and drive remediation.
- Mentorship: Guide and mentor other SREs.
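As one hedged example of the "automated tooling and runbooks" theme above, this sketch uses the official Kubernetes Python client to flag pods whose containers have restarted more than a threshold number of times, the kind of check a triage runbook might automate. The namespace and threshold are assumptions for illustration.

```python
from kubernetes import client, config

RESTART_THRESHOLD = 5  # hypothetical alerting threshold

def find_crashlooping_pods(namespace: str = "default") -> list:
    # Use config.load_incluster_config() instead when running inside the cluster.
    config.load_kube_config()
    v1 = client.CoreV1Api()
    flagged = []
    for pod in v1.list_namespaced_pod(namespace).items:
        for status in pod.status.container_statuses or []:
            if status.restart_count >= RESTART_THRESHOLD:
                flagged.append((pod.metadata.name, status.name, status.restart_count))
    return flagged

if __name__ == "__main__":
    for pod_name, container, restarts in find_crashlooping_pods():
        print(f"{pod_name}/{container}: {restarts} restarts")
```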
Posted 1 day ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
What you’ll do? Design, develop, and operate high scale applications across the full engineering stack. Design, develop, test, deploy, maintain, and improve software. Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.) Work across teams to integrate our systems with existing internal systems, Data Fabric, CSA Toolset. Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality. Participate in a tight-knit, globally distributed engineering team. Triage product or system issues and debug/track/resolve by analyzing the sources of issues and the impact on network, or service operations and quality. Research, create, and develop software applications to extend and improve on Equifax Solutions. Manage sole project priorities, deadlines, and deliverables. Collaborate on scalability issues involving access to data and information. Actively participate in Sprint planning, Sprint Retrospectives, and other team activity What experience you need? Bachelor's degree or equivalent experience 5+ years of software engineering experience 5+ years experience writing, debugging, and troubleshooting code in Java & SQL 2+ years experience with Cloud technology: GCP, AWS, or Azure 2+ years experience designing and developing cloud-native solutions 2+ years experience designing and developing microservices using Java, SpringBoot, GCP SDKs, GKE/Kubernetes 3+ years experience deploying and releasing software using Jenkins CI/CD pipelines, understand infrastructure-as-code concepts, Helm Charts, and Terraform constructs What could set you apart? Knowledge or experience with Apache Beam for stream and batch data processing. Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark. Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Exposure to data visualization tools or platforms.
Posted 1 day ago
0 years
0 Lacs
Saket, Delhi, India
On-site
Roles and Responsibilities: ● Design, develop, and maintain critical software in a fast-paced quality-conscious environment ● Quickly understand complex systems/code and own key pieces of the system, including the delivered quality ● Diagnose and troubleshoot complex problems in a distributed computing environment ● Work alongside other Engineers and cross functional teams to diagnose/troubleshoot any production performance related issues ● Work in Python, Shell and built systems on Docker ● Defining and setting development, test, release, update, and support processes for DevOps operation ● Identifying and deploying cybersecurity measures by continuously performing vulnerability assessment and risk management ● Strive for continuous improvement and build continuous integration, continuous development, and constant deployment pipeline (CI/CD Pipeline) ● Managing periodic reporting on the progress to the management and the customer Skills: ● Familiarity with scripting languages: python, shell scripting ● Proper understanding of networking and security protocols (HTTPS, SSL, Certs) ● Experience in building containers and container orchestration applications (K8S/ECS/ Docker) ● Experience working on Linux based infrastructure, GIT, CI/CD Tools, Jenkins, Terraform ● Configuration and managing databases such as MySQL, PostgreSQL, Mongo ● Working knowledge of various tools, open-source technologies, and cloud services (AWS preferably)
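To illustrate the Python-plus-Docker flavour of the role, here is a small, assumed health-check script built on the Docker SDK for Python; it reports containers that are not running or whose HEALTHCHECK marks them unhealthy. It is a generic sketch, not part of the employer's tooling.

```python
import docker

def report_unhealthy_containers() -> None:
    client = docker.from_env()
    for container in client.containers.list(all=True):
        state = container.attrs.get("State", {})
        # "Health" is only present when the image defines a HEALTHCHECK.
        health = state.get("Health", {}).get("Status")
        if container.status != "running" or health == "unhealthy":
            print(f"{container.name}: status={container.status}, health={health}")

if __name__ == "__main__":
    report_unhealthy_containers()
```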
Posted 1 day ago
2.0 years
0 Lacs
Bhilai, Chhattisgarh, India
On-site
Job Summary: We are seeking an experienced DevOps Engineer with a strong background in deploying and managing AI applications on Azure. The ideal candidate should have experience in deploying AI systems, understands AI Agentic architectures, and can optimize and manage LLM-based applications in production environments. Key Responsibilities: Deploy, scale, and monitor AI applications on Microsoft Azure (AKS, Azure Functions, App Services, etc.). Build and optimize AI Agentic systems for robust and efficient performance. Implement CI/CD pipelines for seamless updates and deployments. Manage containerized services using Docker/Kubernetes. Monitor infrastructure cost, performance, and uptime. Collaborate with AI engineers to understand application requirements and support smooth deployment. Ensure compliance with data security and privacy standards. Requirements: 2+ years of experience in deploying and managing AI/ML applications. Proficiency in Azure cloud services and DevOps practices. Familiarity with LLM-based systems, LangChain, Vector DBs, and Python. Experience with containerization tools (Docker) and orchestration (Kubernetes). Understanding of AI system architecture, including Agentic workflows. Strong problem-solving and optimization skills. Preferred Qualifications: Experience with Gemini, OpenAI, Anthropic, or Hugging Face APIs. Familiarity with LangChain, LlamaIndex, or ChromaDB. Prior experience in managing high-availability, secure, and cost-optimized AI deployments.
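For the "optimize and manage LLM-based applications in production" requirement, a common building block is a retried, time-limited model call. The sketch below shows one hypothetical way to do this with the OpenAI Python client and tenacity; the model name, prompt, and retry policy are illustrative assumptions rather than details from the posting.

```python
from openai import OpenAI
from tenacity import retry, stop_after_attempt, wait_exponential

client = OpenAI()  # reads OPENAI_API_KEY from the environment

@retry(stop=stop_after_attempt(3), wait=wait_exponential(min=1, max=10))
def summarize(text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # hypothetical model choice
        messages=[
            {"role": "system", "content": "Summarize the user's text in two sentences."},
            {"role": "user", "content": text},
        ],
        timeout=30,  # fail fast so the retry policy can take over
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("Pods were restarting due to memory pressure on the AKS node pool ..."))
```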
Posted 1 day ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Senior Engineer - Cloud Operations (Platform Support)
As a Cloud Operations Engineer in our Cloud Operations Center, you will be a key player in ensuring the 24x7x365 smooth operation of Saviynt's Enterprise Identity Cloud. This role focuses on maintaining the stability, performance, and reliability of our platform with a strong emphasis on application layer support and operational ownership. You will be working closely with other operations team members, development, and engineering to resolve issues, implement improvements, and provide exceptional support. This is an opportunity for someone who enjoys operational challenges and problem-solving in a dynamic cloud environment and wants to see their work through to completion.
WHAT YOU WILL BE DOING
· Strong pod-level troubleshooting in AKS/EKS (not just restarting pods).
· Analyze application and database (RDS, MySQL) performance issues; deeply investigate application performance issues (Java, Grails, Hibernate), identifying root causes and implementing solutions.
· Oversee the monitoring of our SaaS applications and underlying infrastructure (Kubernetes on AWS and Azure, VPN connections, customer applications, Elasticsearch, MySQL) for alerts and performance issues.
· Strong understanding of basic computing concepts such as DNS, IP addressing, networking, and LDAP.
· Effectively participate and contribute in on-call escalations with a strong operational mindset, and provide technical guidance during critical incidents.
· Proactively communicate with customers on technical issues when required.
· Guide junior engineers technically when needed.
· Manage the full lifecycle of alerts, incidents, and service requests reported through FreshService, ensuring timely and accurate logging, prioritization, resolution, and escalation.
· Develop, implement, and maintain operational procedures, runbooks, and knowledge base articles to standardize incident resolution and service request fulfillment.
· Drive continuous improvement initiatives to optimize operational efficiency, reduce incident rates, and improve service request turnaround times.
· Collaborate with backend engineering and development teams to troubleshoot complex issues, identify root causes, and implement preventative measures.
· Ensure adherence to defined SLAs (Service Level Agreements) and KPIs (Key Performance Indicators) for operational performance.
· Maintain operational documentation, including system diagrams, contact lists, and escalation paths.
· Ensure compliance with relevant security and compliance policies.
· Plan and coordinate scheduled maintenance activities with minimal impact to service availability.
WHAT YOU BRING
· Bachelor's degree in Computer Science, Information Technology, Engineering, or a related field.
· Minimum of 3+ years of experience in IT/cloud operations and application support (specifically Java applications), with knowledge of cloud infrastructure (AWS and Azure).
· Strong experience with application support (Java, Grails, Hibernate) and performance analysis in a production environment; able to pinpoint a performance degradation through analysis.
· Strong understanding of cloud computing concepts, architectures, and services on both AWS and Azure platforms.
· Working knowledge of containerization and orchestration technologies, specifically Kubernetes.
· End-to-end technical accountability and operational ownership.
· Willingness to work in a 24/7 operating model.
· Experience managing and troubleshooting network connectivity, including VPNs and connections to external networks.
· Familiarity with monitoring tools and practices, with experience in setting up and responding to alerts.
· Hands-on experience with log management and analysis tools, preferably Elasticsearch.
· Working knowledge of database systems, preferably MySQL, including L2 troubleshooting and performance monitoring.
· Experience with ITSM (IT Service Management) systems, preferably FreshService, including incident, problem, and service request management processes.
· Excellent problem-solving, analytical, and troubleshooting skills with a data-driven approach. Experience with Grafana systems and dashboards is a plus.
· Strong communication (written and verbal), interpersonal, and presentation skills.
· Ability to work effectively under pressure and manage multiple priorities in a fast-paced environment.
· Experience in developing and documenting operational procedures and runbooks.
· Experience with automation tools and scripting languages (e.g., Python, Bash) is a plus.
· Experience working in a SaaS environment is highly desirable.
We offer you a competitive total rewards package, learning, and tremendous opportunities to grow and advance in your career. At Saviynt, it is not typical for an individual to be hired at or near the top of the range for their role, and final compensation decisions are dependent on many factors including, but not limited to: location; skill sets; experience and training; licensure and certifications; and other relevant business and organizational needs. A reasonable estimate of the current range is $Min,000 - $Max,000 annually. You may also be eligible to participate in a Saviynt discretionary bonus plan, subject to the rules governing the program, whereby an award, if any, depends on various factors, including, without limitation, individual and organizational performance.
If required for this role, you will:
- Complete security & privacy literacy and awareness training during onboarding and annually thereafter
- Review (initially and annually thereafter), understand, and adhere to Information Security/Privacy Policies and Procedures such as (but not limited to):
  > Data Classification, Retention & Handling Policy
  > Incident Response Policy/Procedures
  > Business Continuity/Disaster Recovery Policy/Procedures
  > Mobile Device Policy
  > Account Management Policy
  > Access Control Policy
  > Personnel Security Policy
  > Privacy Policy
Saviynt is an amazing place to work. We are a high-growth Platform as a Service company focused on Identity Authority to power and protect the world at work.
You will experience tremendous growth and learning opportunities through challenging yet rewarding work that directly impacts our customers, all within a welcoming and positive work environment. If you're resilient and enjoy working in a dynamic environment you belong with us! Saviynt is an equal opportunity employer and we welcome everyone to our team. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or veteran status.
Posted 1 day ago
0 years
0 Lacs
India
Remote
Role: Oracle Order Management Cloud
Location: Remote, India
Working hours: 2.00 PM to 11.00 PM IST
Job Description:
1) Custom node addition and custom DOO (Distributed Order Orchestration) configurations with various pause/release criteria and compensation rules
2) Configuration of Oracle Extensions
3) Pricing algorithms fitting business needs, using pricing attributes and various fields on the order
4) Configuration of the PO mapper, wherein SO fields can be carried onto the PO (example: SO EFF taken as PO price)
5) Configuration of the OM-to-AR mapper
6) Configuration of various external connectors and routing rules
7) Adding custom line statuses based on custom DOO nodes
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: AI/ML Agent Developer Location: All EXL Locations Department: Artificial Intelligence & Data Science Reports To: Director of AI Engineering / Head of Intelligent Automation Position Summary: We are seeking an experienced and innovative AI/ML Agent Developer to design, develop, and deploy intelligent agents within a multi-agent orchestration framework. This role involves building autonomous agents that leverage LLMs, reinforcement learning, prompt engineering, and decision-making strategies to perform complex data and workflow tasks. You’ll work closely with cross-functional teams to operationalize AI across diverse use cases such as annotation, data quality, knowledge graph construction, and enterprise automation. Key Responsibilities: Design and implement modular, reusable AI agents capable of autonomous decision-making using LLMs, APIs, and tools like LangChain, AutoGen, or Semantic Kernel. Engineer prompt strategies for task-specific agent workflows (e.g., document classification, summarization, labeling, sentiment detection). Integrate ML models (NLP, CV, RL) into agent behavior pipelines to support inference, learning, and feedback loops. Contribute to multi-agent orchestration logic including task delegation, tool selection, message passing, and memory/state management. Collaborate with MLOps, data engineering, and product teams to deploy agents at scale in production environments. Develop and maintain agent evaluations, unit tests, and automated quality checks for reliability and interpretability. Monitor and refine agent performance using logging, observability tools, and feedback signals. Required Qualifications: Bachelor’s or Master’s in Computer Science, AI/ML, Data Science, or related field. 3+ years of experience in developing AI/ML systems; 1+ year in agent-based architectures or LLM-enabled automation. Proficiency in Python and ML libraries (PyTorch, TensorFlow, scikit-learn). Experience with LLM frameworks (LangChain, AutoGen, OpenAI, Anthropic, Hugging Face Transformers). Strong grasp of NLP, prompt engineering, reinforcement learning, and decision systems. Knowledge of cloud environments (AWS, Azure, GCP) and CI/CD for AI systems. Preferred Skills: Familiarity with multi-agent frameworks and agent orchestration design patterns. Experience in building autonomous AI applications for data governance, annotation, or knowledge extraction. Background in human-in-the-loop systems, active learning, or interactive AI workflows. Understanding of vector databases (e.g., FAISS, Pinecone) and semantic search. Why Join Us: Work at the forefront of AI orchestration and intelligent agents. Collaborate with a high-performing team driving innovation in enterprise AI platforms. Opportunity to shape the future of AI-based automation in real-world domains like healthcare, finance, and unstructured data.
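To give a feel for the agentic workflows this role describes (task delegation, tool selection, and feedback loops), here is a minimal, framework-agnostic Python sketch of an agent step that picks a tool for a task and runs it. All tool names and the keyword-based selection "policy" are invented for illustration; a real implementation would typically delegate tool selection to an LLM via LangChain, AutoGen, or a similar framework.

```python
# Minimal, framework-agnostic sketch of a tool-selecting agent step.
# Tool names and the keyword-based policy are hypothetical examples.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Tool:
    name: str
    description: str
    run: Callable[[str], str]

def summarize(text: str) -> str:
    # Placeholder: a real agent would call an LLM or summarization model here.
    return text[:100] + "..." if len(text) > 100 else text

def classify(text: str) -> str:
    # Placeholder: a real agent would call a trained classifier here.
    return "invoice" if "amount due" in text.lower() else "other"

TOOLS: Dict[str, Tool] = {
    "summarize": Tool("summarize", "Shorten a document", summarize),
    "classify": Tool("classify", "Label a document", classify),
}

def agent_step(task: str, payload: str) -> str:
    """Pick a tool for the task, run it, and keep a simple audit trail."""
    tool_name = "classify" if "label" in task.lower() else "summarize"
    tool = TOOLS[tool_name]
    result = tool.run(payload)
    print(f"[agent] task={task!r} tool={tool.name} result={result!r}")
    return result

if __name__ == "__main__":
    agent_step("label this document", "Invoice #42, amount due: $1,200")
    agent_step("summarize this document", "Quarterly report " * 20)
```

Multi-agent orchestration extends this same loop with message passing, shared memory/state, and delegation between agents.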
Posted 1 day ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Why this role matters
NRev is in true zero-to-one territory: we're building an AI-powered Revenue Orchestration platform that lets GTM teams spin up custom agents to automate, enrich, and accelerate every step of the enterprise sales cycle. We have early revenue, rabidly enthusiastic design partners, and an awesome product.
About the role:
We are looking for a GTM Engineer who deeply understands LLMs, agentic workflows, and modern marketing systems. You'll be responsible for building systems that power our outbound GTM motions, from intelligent lead scoring and enrichment to messaging, campaigns, and automation pipelines. You will be working at the intersection of engineering, marketing, and AI, building scalable and personalized systems that drive revenue — not with headcount, but with intelligent automation.
What you will do:
Build AI agents that automate prospecting, enrichment, scoring, and outbound across channels (email, LinkedIn, ads).
Design workflows to execute GTM campaigns autonomously.
Develop and maintain agents for tasks like hyper-personalized outreach, campaign planning, market analysis, and intent analysis from search, web, and CRM data.
Optimize token use, context windows, retrieval pipelines (RAG), and prompt engineering.
Marketing Infrastructure & Experimentation
Set up tracking, attribution, segmentation, and cohort reporting across channels.
Run growth experiments using AI agents (e.g., personalized outbound campaigns, landing page testing, etc.).
Automate repetitive marketing ops like lead routing, qualification, and CRM hygiene.
Collaboration
Work closely with marketing, sales, product, and founders to identify GTM bottlenecks and build systems to fix them.
Document workflows, train teammates, and improve tooling over time.
You'll thrive if you have
Strong LLM know-how: You've built with open-source models and know how to optimize agents, prompts, and workflows.
Agentic design experience: You understand how to architect agentic workflows on your own.
Marketing understanding: You can speak the language of MQLs, attribution, ICPs, TAM/SAM/SOM, CAC/LTV, and understand what drives B2B GTM.
Builder's mindset: You're not just integrating tools; you're creating new systems and ideas to help us scale 10x.
Optional but nice: Python, JavaScript, TypeScript, SQL.
Why Join Us
Join a fast-moving team building AI-powered GTM systems from the ground up.
Work closely with founders, sales, and growth — real impact, real ownership.
Build automations that replace headcount and unlock new growth levers.
Flexible hours, async culture, and a strong bias for action.
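As a rough illustration of the "intelligent lead scoring and enrichment" work described above, the following Python sketch scores and routes leads with a simple weighted rule set. The fields, weights, and qualification threshold are invented for this example and are not NRev's actual model; in practice such rules would be tuned or replaced by an ML model or LLM-driven enrichment.

```python
# Illustrative lead-scoring sketch; field names, weights, and the
# qualification threshold are invented for this example.
from dataclasses import dataclass

@dataclass
class Lead:
    company_size: int           # employees
    title: str                  # prospect's job title
    visited_pricing_page: bool  # simple intent signal

def score_lead(lead: Lead) -> int:
    score = 0
    if lead.company_size >= 200:
        score += 40
    elif lead.company_size >= 50:
        score += 20
    if any(k in lead.title.lower() for k in ("vp", "head", "director")):
        score += 30
    if lead.visited_pricing_page:
        score += 30
    return score

def route(lead: Lead) -> str:
    """Send qualified leads to outbound; nurture the rest."""
    return "outbound" if score_lead(lead) >= 60 else "nurture"

if __name__ == "__main__":
    print(route(Lead(300, "VP of Sales", True)))   # outbound
    print(route(Lead(15, "Analyst", False)))       # nurture
```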
Posted 1 day ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Gen AI / Python Developer (Contract) Work Location: Pune (Hybrid) Contract Tenure: 12 Months About KX KX software powers the time-aware data-driven decisions that enable fast-moving companies to outpace competitors, realizing the full potential of their AI investments. The KX platform delivers transformational value by addressing data challenges related to completeness, timeliness and efficiency, ensuring companies understand change over time and can achieve faster, more accurate insights at any scale, cost-effectively. KX is essential to the operations of the world's top investment banks, aerospace and defence, high-tech manufacturing, healthcare and life sciences, automotive and fleet telematics organizations. The company has established offices and a robust customer base across North America, Europe, and Asia Pacific. Overview Of The Role KX is hiring a Gen AI / Python Developer to support our Generative AI and cloud-native application initiatives. This is a contract role where you'll contribute to building AI/ML-powered pipelines and infrastructure that drive real-time data intelligence. You'll work closely with global R&D and engineering teams, leveraging Python, containerized microservices, and GenAI frameworks to accelerate innovation. Key Responsibilities Build and support Python-based applications powering AI/ML and real-time data systems Develop and optimize cloud-native solutions for high-performance data workloads Automate deployments using Docker, CI/CD and GitOps practices Contribute to scalable architectures and assist in LLM or GenAI framework integration Skills Python Development: Strong coding skills with libraries for data processing and automation Cloud Engineering: Experience deploying in AWS/GCP/Azure environments DevOps & Containers: Proficient with Docker, CI/CD tools, and Git workflows Data & API Integration: Knowledge of analytics pipelines, REST APIs, and microservices GenAI & LLM Exposure: Familiarity with LangChain, Hugging Face, or similar frameworks Communication: Strong problem-solving and cross-functional collaboration skills Essential Experience 3+ years of Python development experience in cloud environments Strong knowledge of Python libraries, data processing and automation scripting Experience with Docker, CI/CD tools and version control (Git) Exposure to data analytics, container orchestration and API integrations Good communication and problem-solving skills Preferred Qualifications Familiarity with LLMs, NLP pipelines or frameworks like LangChain, Hugging Face Experience with cloud platforms (AWS/GCP/Azure) Understanding of microservices and DevOps principles Why Choose KX Data Driven: We lead with instinct and follow fact. Naturally Curious: We lean in, listen and learn fast. All In: We take ownership, take on challenges and give it our all.
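Since this role centres on Python-based, containerized microservices exposing REST APIs, here is a small, generic FastAPI sketch of the kind of service such pipelines might expose. The endpoint names and logic are illustrative only and unrelated to KX's products; it assumes the `fastapi` and `uvicorn` packages are installed.

```python
# Minimal containerizable microservice sketch (illustrative only).
# Requires: pip install fastapi uvicorn
from typing import List

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="example-data-service")

class Record(BaseModel):
    values: List[float]

@app.get("/health")
def health() -> dict:
    # Typically used by container orchestrators (e.g., Kubernetes probes).
    return {"status": "ok"}

@app.post("/aggregate")
def aggregate(record: Record) -> dict:
    # Stand-in for a real data-processing step.
    return {"count": len(record.values), "sum": sum(record.values)}

if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
```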
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Chennai, Tamil Nadu, India
Remote
Greetings from Tata Consultancy Services!!!
Job Title: Network Pre-Sales Solution Architect
Experience Required: 8-12 years
Location: PAN India
Must have worked in Network operations and deployment, such as Datacenter builds, Network migrations, etc.
Hands-on experience with Network devices such as Routers, Switches, Wireless and Network Authentication, Remote Access VPN, Firewalls, IPS/IDS, Load Balancers, and Network Management tools
Experience in designing Network solutions for new Datacenter builds and new office site builds
Experience in Network solutions (Presales) and in RFP/RFI and proactive engagements
Understanding of different Network vendor products and the ability to choose the right match for customer requirements based on technology and cost impact analysis
Understanding of the high-level technical differences between OEM vendors, such as SD-WAN from Viptela, Silver Peak, and Fortinet
Preferable: experience working with multiple OEM vendors on creating the design, BoM, and cost estimations
Good experience in writing technical solution documents for customer submission
Good experience in creating PPTs for the customer solution defense
Capable of presenting the technical solution to the customer, with fluent communication and presentation skills
Able to create the Pre-Sales solution response as a document and PPT, and explain clearly to the customer the reasons for the proposed solution
Analytical ability to understand the customer's pulse on requirements, objectives, and expectations, and to build the Pre-Sales solution with a proper business case, justification, and a winning approach
Experience preferable with Load Balancers, Firewalls, NMS & OEM Native Tools, DDI, Network Automation & Orchestration, Firewall, IPS, IDS, Application Delivery Controller, WAN sizing, SDN, SD-WAN, SD-LAN, Cloud Networking, Network SaaS solutions, etc.
Candidates who have worked on RFX deals for Fortune 500 global customers and converted those opportunities into wins will be given preference
Strategic decision-making skills
Basic knowledge of Cloud Networking, and the ability to work with different internal teams such as Compute, Workplace, Public Cloud, Private Cloud, Transition, and Security to meet the solution RFX requirements
Work closely with the Enterprise Solution Architect and the Sales customer-focus team to understand their objectives and win deals
Certifications from leading Networking vendors such as CCNP, Aruba, Juniper, or CCIE preferable
Posted 1 day ago
8.0 - 12.0 years
0 Lacs
Delhi, India
Remote
Greetings from Tata Consultancy Services!!!
Job Title: Network Pre-Sales Solution Architect
Experience Required: 8-12 years
Location: PAN India
Must have worked in Network operations and deployment, such as Datacenter builds, Network migrations, etc.
Hands-on experience with Network devices such as Routers, Switches, Wireless and Network Authentication, Remote Access VPN, Firewalls, IPS/IDS, Load Balancers, and Network Management tools
Experience in designing Network solutions for new Datacenter builds and new office site builds
Experience in Network solutions (Presales) and in RFP/RFI and proactive engagements
Understanding of different Network vendor products and the ability to choose the right match for customer requirements based on technology and cost impact analysis
Understanding of the high-level technical differences between OEM vendors, such as SD-WAN from Viptela, Silver Peak, and Fortinet
Preferable: experience working with multiple OEM vendors on creating the design, BoM, and cost estimations
Good experience in writing technical solution documents for customer submission
Good experience in creating PPTs for the customer solution defense
Capable of presenting the technical solution to the customer, with fluent communication and presentation skills
Able to create the Pre-Sales solution response as a document and PPT, and explain clearly to the customer the reasons for the proposed solution
Analytical ability to understand the customer's pulse on requirements, objectives, and expectations, and to build the Pre-Sales solution with a proper business case, justification, and a winning approach
Experience preferable with Load Balancers, Firewalls, NMS & OEM Native Tools, DDI, Network Automation & Orchestration, Firewall, IPS, IDS, Application Delivery Controller, WAN sizing, SDN, SD-WAN, SD-LAN, Cloud Networking, Network SaaS solutions, etc.
Candidates who have worked on RFX deals for Fortune 500 global customers and converted those opportunities into wins will be given preference
Strategic decision-making skills
Basic knowledge of Cloud Networking, and the ability to work with different internal teams such as Compute, Workplace, Public Cloud, Private Cloud, Transition, and Security to meet the solution RFX requirements
Work closely with the Enterprise Solution Architect and the Sales customer-focus team to understand their objectives and win deals
Certifications from leading Networking vendors such as CCNP, Aruba, Juniper, or CCIE preferable
Posted 1 day ago
10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role We’re looking for a Senior Engineering Manager to lead our Data / AI Platform and MLOps teams at slice. In this role, you’ll be responsible for building and scaling a high-performing team that powers data infrastructure, real-time streaming, ML enablement, and data accessibility across the company. You'll partner closely with ML, product, platform, and analytics stakeholders to build robust systems that deliver high-quality, reliable data at scale. You will drive AI initiatives to centrally build an AI platform and apps that can be leveraged by various functions like Legal, CX, and Product in a secure manner. This is a hands-on leadership role perfect for someone who enjoys solving deep technical problems while growing people and teams. What You Will Do Lead and grow the data platform pod focused on all aspects of data (batch + real-time processing, ML platform, AI tooling, business reporting, data products – enabling product experience through data) Maintain hands-on technical leadership - lead by example through code reviews, architecture decisions, and direct technical contribution Partner closely with product and business stakeholders to identify data-driven opportunities and translate business requirements into scalable data solutions Own the technical roadmap for our data platform including infra modernization, performance, scalability, and cost efficiency Drive the development of internal data products like self-serve data access, centralized query layers, and feature stores Build and scale ML infrastructure with MLOps best practices including automated pipelines, model monitoring, and real-time inference systems Lead AI platform development for hosting LLMs, building secure AI applications, and enabling self-service AI capabilities across the organization Implement enterprise AI governance including model security, access controls, and compliance frameworks for internal AI applications Collaborate with engineering leaders across backend, ML, and security to align on long-term data architecture Establish and enforce best practices around data governance, access controls, and data quality Ensure regulatory compliance with GDPR, PCI-DSS, SOX through automated compliance monitoring and secure data pipelines Implement real-time data processing for fraud detection and risk management with end-to-end encryption and audit trails Coach engineers and team leads through regular 1:1s, feedback, and performance conversations What You Will Need 10+ years of engineering experience, including 2+ years managing data or infra teams with proven hands-on technical leadership Strong stakeholder management skills with experience translating business requirements into data solutions and identifying product enhancement opportunities Strong technical background in data platforms, cloud infrastructure (preferably AWS), and distributed systems Experience with tools like Apache Spark, Flink, EMR, Airflow, Trino/Presto, Kafka, and Kubeflow/Ray, plus the modern stack: dbt, Databricks, Snowflake, Terraform Hands-on experience building AI/ML platforms including MLOps tools, and experience with LLM hosting, model serving, and secure AI application development Proven experience improving performance, cost, and observability in large-scale data systems Expert-level cloud platform knowledge with container orchestration (Kubernetes, Docker) and Infrastructure-as-Code Experience with real-time streaming architectures (Kafka, Redpanda, Kinesis) Understanding of AI/ML frameworks (TensorFlow, PyTorch), LLM hosting 
platforms, and secure AI application development patterns Comfort working in fast-paced, product-led environments with ability to balance innovation and regulatory constraints Bonus: Experience with data security and compliance (PII/PCI handling), LLM infrastructure, and fintech regulations Life at slice Life so good, you’d think we’re kidding: Competitive salaries. Period. An extensive medical insurance that looks out for our employees & their dependents. We’ll love you and take care of you, our promise. Flexible working hours. Just don’t call us at 3AM, we like our sleep schedule. Tailored vacation & leave policies so that you enjoy every important moment in your life. A reward system that celebrates hard work and milestones throughout the year. Expect a gift coming your way anytime you kill it here. Learning and upskilling opportunities. Seriously, not kidding. Good food, games, and a cool office to make you feel like home. An environment so good, you’ll forget the term “colleagues can’t be your friends”.
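The slice role above leans heavily on batch and streaming orchestration with tools like Airflow. As a generic illustration (not slice's actual pipelines), here is a minimal Airflow DAG sketch of a daily extract-transform-load chain. Task names and the schedule are invented, the task logic is stubbed out, and it assumes a standard Apache Airflow 2.4+ installation (where the `schedule` argument replaces `schedule_interval`).

```python
# Minimal Airflow 2.4+ DAG sketch (illustrative; task logic is stubbed out).
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**_):
    print("pull raw events from the source system")

def transform(**_):
    print("clean and aggregate the raw events")

def load(**_):
    print("write curated tables to the warehouse")

with DAG(
    dag_id="example_daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3
```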
Posted 1 day ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
We are seeking a highly skilled and experienced Team Lead - PHP Laravel Developer to oversee a team of developers, ensure high-quality project delivery, and contribute to building scalable web applications. The ideal candidate should have an in-depth understanding of PHP, Laravel, and modern development practices, with proven leadership experience. Key Skills and Qualifications: Technical Skills: ● PHP Laravel: Expertise in building scalable applications, designing RESTful APIs. ● MySQL: Skilled in database design, optimization, and writing complex SQL queries. ● Microservices Architecture: Hands-on experience with microservices, containerization tools like Docker, and orchestration tools like Kubernetes. ● Git: Proficient in version control, including branching, merging, and repository management. Leadership & Methodologies: ● Proven experience leading and mentoring development teams. ● Strong expertise in Agile development methodologies such as Scrum and Kanban. ● Ability to plan and manage sprints effectively. Soft Skills: ● Strong problem-solving skills with the ability to think critically and creatively. ● Excellent communication and collaboration abilities. ● Strong interpersonal and team management skills. Experience Required: 8-10 years
Posted 1 day ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
DevOps Engineer Talent Worx is looking for a dynamic and skilled DevOps Engineer to join our progressive team. In this role, you will integrate development and operations to optimize and streamline our software delivery processes. Your expertise in automation, system management, and cloud services will contribute significantly to our commitment to delivering high-quality software solutions efficiently and effectively. Requirements Key Responsibilities: Implement and manage CI/CD pipelines to ensure smooth deployment and integration of software updates Automate infrastructure provisioning and configuration management using Infrastructure as Code (IaC) tools Work closely with development teams to refine and optimize build processes, deployment strategies, and collaboration practices Monitor the system's performance, ensuring high availability and reliability of applications and services Manage system security, including setup, deployment, and maintenance of firewall and access control systems Continuously improve system performance through proactive monitoring and maintenance Assist in troubleshooting issues across the development and production environments Document processes, best practices, and configurations for reproducibility and efficiency Required Skills and Qualifications: 5+ years of experience as a DevOps Engineer or in a similar role Proficiency in scripting languages such as Python, Shell, or Bash for automation tasks Hands-on experience with CI/CD tools (e.g., Jenkins, GitHub Actions) and version control systems (e.g., Git) In-depth knowledge of cloud platforms such as AWS, Azure, or Google Cloud Familiarity with container technologies like Docker and orchestration tools like Kubernetes Understanding of databases, both SQL and NoSQL, including implementation and management Strong analytical and troubleshooting skills, with a keen focus on operational excellence Exceptional communication skills and the ability to work collaboratively in a team environment Preferred Skills: Knowledge of microservices architecture and serverless architectures Experience in implementing automated security practices in CI/CD pipelines Familiarity with Agile methodologies and frameworks Education: Bachelor's degree in Computer Science, Engineering, or a related field Benefits Talworx is an emerging recruitment consulting and services firm; we are hiring for our product-based healthcare client, a leading precision medicine company focused on guarding wellness and giving every person more time free from cancer. Founded in 2012, the company is transforming patient care by providing critical insights into what drives disease through its advanced blood and tissue tests, real-world data, and AI analytics.
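The responsibilities above centre on monitoring, high availability, and automation scripting in Python or Bash. The short Python sketch below polls a service health endpoint and raises an alert on failure; the URL and the alerting hook are placeholders rather than any real Talent Worx system, and it assumes the `requests` package is installed.

```python
# Illustrative health-check automation; the URL and alert hook are placeholders.
# Requires: pip install requests
import sys

import requests

HEALTH_URL = "https://example.internal/service/health"  # placeholder endpoint

def check_health(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint responds with HTTP 200 within the timeout."""
    try:
        resp = requests.get(url, timeout=timeout)
        return resp.status_code == 200
    except requests.RequestException as exc:
        print(f"health check failed: {exc}", file=sys.stderr)
        return False

def alert(message: str) -> None:
    # Placeholder: a real setup would page on-call or post to a chat webhook.
    print(f"ALERT: {message}", file=sys.stderr)

if __name__ == "__main__":
    if not check_health(HEALTH_URL):
        alert(f"{HEALTH_URL} is unhealthy")
        sys.exit(1)
    print("service healthy")
```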
Posted 1 day ago