8.0 years
4 - 8 Lacs
Hyderābād
On-site
Position Title: Lead Product Security Engineer
Reports To: Principal Security Architect

As our Lead Product Security Engineer, you'll own threat modeling, secure-by-design guidance, and hands-on engineering for an industry-leading SaaS platform that powers automotive retail for millions of users. You'll work autonomously, partner closely with our Application Security (AppSec) scanning team, and influence product teams across the company, from design through incident response.

Working hours: Late-shift schedule with roughly 4 hours of daily overlap with US Mountain Time (e.g., 1 p.m. - 10 p.m. IST). Some flexibility is expected; we value outcomes over clock-watching.

Key Responsibilities:
1. Leadership & Strategy:
- Champion security culture and coach teams on secure product design
- Lead the development and implementation of CDK's product security strategy
- Design and implement technology and processes supporting CDK's product security strategy
- Partner effectively across security, technology, and business teams
- Provide technical security leadership to product teams
- Develop effective product security metrics and use them to drive improvements
2. Product Security Standards:
- Guide the development and continuous improvement of product security standards and guidelines in alignment with risk and compliance requirements
- Drive accurate measurement and reporting of CDK's compliance with product security standards
- Drive adoption of product security standards across product, technology, and infrastructure teams
3. Product Security Architecture and Engineering:
- Lead and evolve product threat-modeling practices (STRIDE, PASTA, attack trees, etc.)
- Guide development of secure product architecture practices across technology teams
- Develop repeatable engineering and automation patterns to enable "secure by default" design
- Solve challenging product and application security problems
4. Security Operations:
- Work with the CDK Security Operations team to identify and enable detection for advanced application security problems
- Drive good development practices in orchestration and automation of macro response workflows
- Be a force multiplier in rare product security incident scenarios
5. Data-Driven Security:
- Help wrangle and correlate security data from multiple tools; prototype metrics, dashboards, or ML models that reveal real risk trends (see the sketch at the end of this listing)
- Advise on data quality, cleansing, and correlation strategies

Required Qualifications:
Education: Bachelor's degree in Computer Science or Information Security, or equivalent experience
Experience:
- 8+ years overall in software/security engineering, including 5+ years focused on product or application security in complex SaaS or e-commerce environments
- Demonstrated ownership of threat modeling for modern cloud architectures (microservices, serverless, containers)
- Proven ability to drive security architecture and standards autonomously
- Hands-on experience with at least one major public cloud and IaC (Terraform, CloudFormation, ARM, etc.)
- Excellent written and verbal communication skills; able to translate deep technical issues into business-focused recommendations

Nice-to-have:
- Prior work with data-privacy or data-protection regulations (GDPR, CCPA, DPDP India, etc.)
- Data science/analytics chops: experience cleaning, correlating, or modeling large security datasets
- Strong software-engineering background, especially in Python (automation, data pipelines, small tools)
- Familiarity with secure SDLC and AppSec scanning pipelines (SAST, DAST, SCA, container security)
- Experience mentoring or leading distributed teams

Why join us?
- Impact at scale: Your work secures a platform that processes billions of dollars in automotive transactions yearly.
- Autonomy & ownership: We hire experts and trust them to deliver.
- Global collaboration: Work with top engineers across India and North America, shaping security practices company-wide.
- Growth: Influence adjacent initiatives in data security, metrics, and architecture alongside our Principal Security Architect.

At CDK, we believe inclusion and diversity are essential in inspiring meaningful connections to our people, customers and communities. We are open, curious and encourage different views, so that everyone can be their best selves and make an impact. CDK is an Equal Opportunity Employer committed to creating an inclusive workforce where everyone is valued. Qualified applicants will receive consideration for employment without regard to race, color, creed, ancestry, national origin, gender, sexual orientation, gender identity, gender expression, marital status, religion, age, disability (including pregnancy), results of genetic testing, service in the military, veteran status or any other category protected by law. Applicants for employment in the US must be authorized to work in the US. CDK may offer employer visa sponsorship to applicants.
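The "Data-Driven Security" duties above are the kind of work a short script can illustrate. A minimal sketch using pandas, with hypothetical tool exports and column names (not CDK's actual data model):

```python
# Illustrative only: correlate findings from two hypothetical scanner exports
# and deduplicate by (CVE, service) so metrics count unique risks, not raw alerts.
import pandas as pd

sast = pd.DataFrame([
    {"tool": "sast", "cve": "CVE-2024-0001", "service": "checkout", "severity": "high"},
    {"tool": "sast", "cve": "CVE-2024-0002", "service": "search", "severity": "medium"},
])
sca = pd.DataFrame([
    {"tool": "sca", "cve": "CVE-2024-0001", "service": "checkout", "severity": "high"},
    {"tool": "sca", "cve": "CVE-2024-0003", "service": "search", "severity": "low"},
])

findings = pd.concat([sast, sca], ignore_index=True)
# One row per (cve, service): a single real risk reported by several tools.
unique_risks = findings.drop_duplicates(subset=["cve", "service"])
# A simple trend metric: unique high-severity risks per service.
print(unique_risks[unique_risks["severity"] == "high"].groupby("service").size())
```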
Posted 4 days ago
0 years
0 Lacs
India
On-site
Goodyear. More Driven.

Duties and Responsibilities:
- Develop and support data-driven applications with integrated front-end and back-end services.
- Create responsive web and mobile interfaces using Python, JavaScript, HTML5/CSS3, and modern frameworks.
- Build efficient back-end services, APIs, and server-side components using Python and related technologies.
- Design and maintain SQL/NoSQL databases for secure, high-performance data management.
- Implement CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes) for deployment.
- Collaborate with cross-functional teams, data scientists, and subject matter experts to align solutions with business goals.
- Learn tire industry processes and apply them to technical development.

Requirements:
- Significant experience in front-end development using modern frameworks and languages such as React, Angular, JavaScript/TypeScript, HTML5, and CSS3
- Significant experience in server-side development using Python (a minimal service sketch follows at the end of this listing)
- Strong understanding of RESTful API design, microservices architecture, and service-oriented design
- Understanding of modern cloud platforms such as AWS, Azure, or GCP, particularly as they relate to front-end deployment and performance
- Experience visualizing data sourced from relational or NoSQL databases (e.g., PostgreSQL, MongoDB, DynamoDB)
- Ability to translate feature requirements and technical design into a working implementation
- Good teamwork skills: able to work in a team environment and deliver results on time
- Strong communication skills: capable of conveying information concisely to diverse audiences
- Application of software design skills and methodologies (algorithms, design patterns, performance optimization, responsive design and testing) and modern DevOps practices (GitHub Actions, CI/CD)

Goodyear is an Equal Employment Opportunity and Affirmative Action Employer. All qualified applicants will receive consideration for employment without regard to that individual's race, color, religion or creed, national origin or ancestry, sex (including pregnancy), sexual orientation, gender identity, age, physical or mental disability, ethnicity, citizenship, or any other characteristic protected by law. Goodyear is one of the world's largest tire companies. It employs about 68,000 people and manufactures its products in 53 facilities in 20 countries around the world. Its two Innovation Centers in Akron, Ohio and Colmar-Berg, Luxembourg strive to develop state-of-the-art products and services that set the technology and performance standard for the industry. For more information about Goodyear and its products, go to www.goodyear.com/corporate
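To give a feel for the Python server-side work this posting asks for, here is a minimal FastAPI sketch; the route and model names are hypothetical, not Goodyear's:

```python
# Minimal REST API sketch with FastAPI; run with: uvicorn app:app --reload
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Tire(BaseModel):
    sku: str
    tread_depth_mm: float

# In-memory stand-in for the SQL/NoSQL store the posting mentions.
_db: dict[str, Tire] = {}

@app.post("/tires")
def create_tire(tire: Tire) -> Tire:
    _db[tire.sku] = tire
    return tire

@app.get("/tires/{sku}")
def read_tire(sku: str) -> Tire:
    if sku not in _db:
        raise HTTPException(status_code=404, detail="unknown SKU")
    return _db[sku]
```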
Posted 4 days ago
14.0 years
25 - 30 Lacs
India
On-site
Required Skills & Qualifications

Technical Expertise:
- ServiceNow Security: Deep understanding of SecOps, GRC, RBAC, ACLs, and platform security best practices.
- Cybersecurity & Compliance: Strong knowledge of security frameworks (NIST, ISO 27001, CIS), regulatory compliance, and risk management.
- Integration & Development: Experience with REST APIs, JavaScript, OAuth, and secure integration practices (see the Table API sketch at the end of this listing).
- Cloud Security: Understanding of SaaS security, encryption methods, and cloud-based security models.
- 14-18 years of IT security experience, with 14+ years in ServiceNow security architecture, administration, or operations.
- Hands-on experience in security automation, incident response, and risk management using ServiceNow.
- Prior experience working with cybersecurity, risk management, and IT governance teams.

Key Responsibilities

Security Strategy & Compliance
- Define and enforce compliance with security policies, standards, and best practices for the ServiceNow platform, in alignment with ServiceNow's recommended platform security shared-responsibility model.
- Ensure the ServiceNow platform is compliant with internal and external infosec requirements and industry best practices.
- Establish governance frameworks for secure development, data protection, and risk mitigation.

Access Control, Authentication, and Authorization
- Design and manage role-based access control (RBAC), ACLs, and authentication mechanisms in ServiceNow.
- Own Single Sign-On (SSO), Multi-Factor Authentication (MFA), and enterprise IAM solutions based on infosec standards.
- Regularly review access controls and entitlements based on job function, refining them using the principle of least privilege.

Security Operations & Incident Management
- Oversee the implementation and optimization of ServiceNow Security Operations (SecOps), including:
  - Security Incident Response (SIR): streamline incident detection, triage, and resolution.
  - Vulnerability Response (VR): automate vulnerability identification and remediation workflows.
  - Threat Intelligence: integrate threat feeds and security insights for proactive defense.
- Coordinate with cybersecurity teams to detect, investigate, and respond to threats affecting ServiceNow.

Data Privacy, Security & Encryption
- Define ServiceNow data classification, data retention, and data discovery strategy in alignment with Ameriprise data management policies and standards.
- Implement a data encryption strategy covering data at rest, data in transit, and encryption key management.
- Determine the collection, storage, usage, sharing, archiving, and destruction policy for data processed in ServiceNow instances.
- Monitor access patterns and system activity to identify potential security threats.

Secure Integrations & Automation
- Design and enforce secure API management for integrations between ServiceNow and third-party security tools (e.g., Active Directory, CyberArk, Aveksa, Azure AD, RIM, IAM).
- Leverage IntegrationHub, Automation Engine, and Orchestration to streamline security workflows.
- Ensure secure data exchange and prevent unauthorized access to ServiceNow instances.

Risk & Compliance Management
- Deploy and manage ServiceNow Governance, Risk, and Compliance (GRC) solutions to assess security risks.
- Participate in regular security audits, risk assessments, and penetration tests on the ServiceNow platform.
- Define and implement security controls to mitigate risks and enhance compliance.

Job Type: Full-time
Pay: ₹2,500,000.00 - ₹3,000,000.00 per year
Application Question(s): Please briefly describe your expertise in ServiceNow SecOps. Do you have at least 12+ years of expertise in ServiceNow?
Work Location: In person
Expected Start Date: 04/08/2025
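The REST/OAuth integration experience this posting asks for typically centers on ServiceNow's Table API. A hedged sketch in Python: the instance URL and token are placeholders, and the OAuth flow and table name should be verified against your own instance:

```python
# Query ServiceNow's Table API with an OAuth bearer token (token acquisition omitted).
import requests

INSTANCE = "https://example.service-now.com"  # placeholder instance
TOKEN = "REDACTED"  # obtain via your OAuth 2.0 client-credentials flow

resp = requests.get(
    f"{INSTANCE}/api/now/table/sn_si_incident",  # Security Incident Response table
    headers={"Authorization": f"Bearer {TOKEN}", "Accept": "application/json"},
    params={"sysparm_query": "active=true^priority=1", "sysparm_limit": 10},
    timeout=30,
)
resp.raise_for_status()
for rec in resp.json()["result"]:
    print(rec["number"], rec.get("short_description"))
```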
Posted 4 days ago
7.0 years
0 Lacs
Hyderābād
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency.

Responsibilities
- Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform
- Build frameworks to support deployment, configuration, and management across diverse cloud environments
- Develop and manage service catalog components, ensuring integration with platforms like Backstage
- Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines
- Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases
- Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages
- Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance
- Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows (a framework-free retrieval sketch follows at the end of this listing)
- Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions
- Prepare and label data for generative AI models, ensuring scalability and integrity
- Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents
- Integrate generative AI models with operational systems and AIOps platforms for enhanced automation
- Evaluate AI model performance and ensure continuous optimization over time
- Develop and maintain MLOps pipelines to monitor and mitigate model decay
- Collaborate with cross-functional teams to drive innovation and improve cloud automation processes
- Research and recommend new tools and best practices to enhance operational efficiency

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 7+ years of experience in cloud infrastructure automation, scripting, and DevOps
- Strong proficiency in IaC tools like Terraform, CloudFormation, or similar
- Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows
- Demonstrated background in developing and deploying AI models such as RAG pipelines or transformers
- Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra
- Competency in preparing and labeling datasets for AI models and optimizing data inputs
- Familiarity with cloud platforms including AWS, Google Cloud, or Azure
- Capability to implement MLOps pipelines and monitor AI system performance

Nice to have
- Knowledge of agentic architectures such as ReAct and flow engineering techniques
- Background in using Bedrock Agents or LangGraph for workflow creation
- Understanding of integrating generative AI into legacy or complex operational systems

We offer
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
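To make the RAG responsibilities above concrete, here is a deliberately framework-free sketch of the retrieval step. The embed() function is a stub, so similarity here is structural rather than semantic; a production version would use a real embedding model behind LangChain, Bedrock, or Vertex AI and a vector store such as OpenSearch:

```python
# Framework-free sketch of the retrieval step in a RAG workflow.
# embed() is a stand-in for a real embedding model; with this stub the
# similarity scores are meaningless -- it only shows the mechanics.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # stub embedding
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

docs = [
    "Runbook: restarting the ingest service after OOM kills",
    "Terraform module for the shared VPC landing zone",
    "Alert guide: Kafka consumer lag on the metrics topic",
]
doc_vecs = np.stack([embed(d) for d in docs])

query = "how do I recover the ingest service?"
scores = doc_vecs @ embed(query)          # cosine similarity (unit vectors)
best = docs[int(np.argmax(scores))]
prompt = f"Answer using this context:\n{best}\n\nQuestion: {query}"
print(prompt)  # this prompt would then be sent to the LLM
```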
Posted 4 days ago
3.0 - 8.0 years
0 Lacs
Hyderābād
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.

We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.

Responsibilities
- Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation
- Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms
- Create service catalog components compatible with automation platforms like Backstage
- Integrate generative AI models to improve service catalog functionality, including automated code generation and validation
- Architect CI/CD pipelines for automated build, test, and deployment processes
- Maintain deployment automation scripts utilizing technologies such as Python or Bash
- Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis
- Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions
- Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions
- Engineer data pipelines to stream real-time operational insights that support AI-driven automation
- Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay (a toy decay-monitor sketch follows at the end of this listing)
- Select appropriate LLMs for specific AIOps use cases and integrate them effectively into workflows
- Collaborate with cross-functional teams to design and refine automation and AI-driven processes
- Research emerging tools and technologies to enhance operational efficiency and scalability

Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- 3-8 years of experience in cloud infrastructure automation, DevOps, and scripting
- Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation
- Expertise in Python and generative AI patterns like RAG and agent-based workflows
- Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI
- Familiarity with vector search services like Amazon Kendra, OpenSearch, or custom database solutions
- Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming
- Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments

Nice to have
- Background in flow engineering tools such as LangGraph or platform-specific workflow orchestration tools
- Understanding of comprehensive AIOps processes to refine cloud-based automation solutions

We offer
- Opportunity to work on technical challenges that may have impact across geographies
- Vast opportunities for self-development: online university, global knowledge sharing, and learning opportunities through external certifications
- Opportunity to share your ideas on international platforms
- Sponsored Tech Talks & Hackathons
- Unlimited access to LinkedIn Learning solutions
- Possibility to relocate to any EPAM office for short- and long-term projects
- Focused individual development
- Benefit package: health benefits, retirement benefits, paid time off, flexible benefits
- Forums to explore passions beyond work (CSR, photography, painting, sports, etc.)
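One piece of the MLOps responsibility above, monitoring for model decay, can be sketched as a rolling-accuracy check; the window size and accuracy floor are illustrative assumptions:

```python
# Toy model-decay monitor: alert when rolling accuracy drops below a floor.
from collections import deque

class DecayMonitor:
    def __init__(self, window: int = 500, floor: float = 0.90):
        self.outcomes = deque(maxlen=window)  # 1 = correct, 0 = wrong
        self.floor = floor

    def record(self, prediction, label) -> None:
        self.outcomes.append(1 if prediction == label else 0)

    def degraded(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough evidence yet
        return sum(self.outcomes) / len(self.outcomes) < self.floor

monitor = DecayMonitor(window=100, floor=0.85)
for pred, label in [("ok", "ok")] * 80 + [("ok", "fail")] * 20:
    monitor.record(pred, label)
if monitor.degraded():
    print("rolling accuracy below floor -- trigger retraining or rollback")
```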
Posted 4 days ago
4.0 - 8.0 years
0 Lacs
Hyderābād
On-site
Are you seeking an environment where you can drive innovation? Does the prospect of working with top engineering talent get you charged up? Apple is a place where extraordinary people gather to do their best work. Together we create products and experiences people once couldn't have imagined, and now can't imagine living without. Apple's IS&T manages key infrastructure at Apple: how online orders are placed, the customer experience with technology in our retail stores, how much network capacity we need around the world, and much more. The SAP Global Systems team within IS&T runs the operations and financial transactional platform that powers Apple functions like Sales, Manufacturing, Distribution and Financials. Think platform-as-product! Our team delivers great developer experiences to our program, project and development teams through a curated set of tools, capabilities and processes offered through our Internal Developer Platform. We automate infrastructure operations, support complex service abstractions, build flexible workflows and curate a frictionless ecosystem that enables end-to-end collaboration to help drive productivity and engineering velocity. This is a tremendous opportunity for someone who has the skill to own initiatives and a passion to work on a highly coordinated global solution platform! Join us in crafting solutions that do not yet exist!

Description
As a Cloud Platform Engineer at Apple, you will be a key contributor to the design, development, and operation of our next-generation cloud platform. You will work alongside a team of dedicated engineers to build a highly scalable, reliable, and secure platform that empowers Apple's product teams to deliver extraordinary experiences. You will be responsible for driving innovation, adopting new technologies, and ensuring the platform meets the evolving needs of Apple's business.

Responsibilities:
- Architect, design and implement robust cloud-native solutions.
- Implement API-led and event-driven solutions across SAP and non-SAP cloud platforms (an idempotent-consumer sketch follows at the end of this listing).
- Design and implement standard processes for security concepts that are critical for cloud-native applications.
- Apply hands-on understanding of containerization and orchestration concepts to design and build scalable, resilient, modern event- and microservices-based systems.
- Collaborate with multi-functional teams to design and implement secure, robust application architectures for performance, scalability, and cost-efficiency.
- Understand and use monitoring, logging, and alerting solutions to continuously assess and improve system reliability and performance.
- Bring a passion for driving automation to streamline manual processes and enhance productivity across the organization.
- Stay up to date with emerging technologies, industry trends, and standard processes in DevOps and cloud computing.

Minimum Qualifications
- 4-8 years of experience in the relevant field.
- Bachelor's degree or equivalent experience in Computer Science, Engineering or another relevant major.
- Knowledge of working with public cloud providers such as AWS or GCP.
- Understanding of networking concepts on cloud, such as VPCs/subnets, firewalls and load balancers.
- Experience in CI/CD and configuration management systems.
- Familiarity with Kubernetes or Kyma Runtime.
- Understanding of cloud security principles.

Preferred Qualifications
- Strong expertise in cloud-native applications.
- A strong sense of ownership.
- Good critical thinking and interpersonal skills to work successfully across diverse business, technical, and multi-functional teams.
- Understanding of SAP BTP.
- Ability to understand complex landscape architectures.
- Working knowledge of on-prem and cloud-based hybrid architectures and infrastructure concepts such as regions, availability zones, VPCs/subnets, load balancers, and API gateways.
- Strong understanding of common authentication schemes, certificates, secrets and protocols.
- Experience with IaC tools such as Terraform or CloudFormation.
- Scripting and/or coding skills needed for automation, triaging and troubleshooting; experience with scripting languages such as Python, Go, or Java.
- Certifications such as AWS Solutions Architect, DevOps Professional, GCP Professional Architect, or SAP BTP Certification are a plus.
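The event-driven design work this role describes often hinges on idempotent consumers, since at-least-once delivery can replay events. A minimal, infrastructure-free sketch of the pattern (the event shape and in-memory stores are hypothetical):

```python
# Idempotent event handling: replayed deliveries must not double-apply effects.
processed: set[str] = set()  # stand-in for a durable dedupe store (e.g., a DB table)
balances: dict[str, int] = {"order-42": 0}

def handle(event: dict) -> None:
    event_id = event["id"]
    if event_id in processed:      # duplicate delivery -- safe to skip
        return
    balances[event["order"]] += event["amount"]
    processed.add(event_id)

# At-least-once delivery may hand us the same event twice:
for ev in [{"id": "e1", "order": "order-42", "amount": 10},
           {"id": "e1", "order": "order-42", "amount": 10}]:
    handle(ev)
print(balances)  # {'order-42': 10}, not 20
```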
Posted 4 days ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description And Requirements

Position Summary
This position is responsible for the design and implementation of application platform solutions, with an initial focus on Enterprise Content Management (ECM) platforms such as enterprise search and document generation/workflow products, including IBM FileNet / BAW, WebSphere Application Server (WAS), and technologies from OpenText. While gaining and providing expertise on these key business platforms, the Engineer will identify opportunities for automation and cloud enablement across other technologies within the Platform Engineering portfolio, developing cross-functional expertise.

Job Responsibilities
- Provide design and technical support to application developers and operations support staff when required, including promoting the use of best practices, ensuring standardization across applications, and troubleshooting.
- Design and implement complex integration solutions through collaboration with engineers and application teams across the global enterprise.
- Promote and utilize automation to design and support configuration management, orchestration, and maintenance of the integration platforms using tools such as Perl, Python, and Unix shell (a certificate-expiry sketch follows at the end of this listing).
- Collaborate with senior engineers to understand emerging technologies and their effect on unit cost and service delivery as part of the evolution of the integration technology roadmap.
- Investigate, recommend, implement, and maintain ECM solutions across multiple technologies.
- Investigate released fix packs and provide well-documented instructions and script automation to operations for implementation, in collaboration with senior engineers, in support of platform currency.
- Conduct capacity reviews of the current platform.
- Participate in cross-departmental efforts.
- Lead initiatives within the community of practice.
- Be willing to work in rotational shifts.
- Communicate clearly and effectively.

Knowledge, Skills And Abilities

Education
- Bachelor's degree in Computer Science, Information Systems, or a related field.

Experience
- 7+ years of total experience, with at least 4+ years in design and implementation of application platform solutions on Enterprise Content Management (ECM) platforms such as enterprise search and document generation/workflow products, including IBM FileNet / BAW and WebSphere Application Server (WAS)
- Automation for configuration management, orchestration, and maintenance of integration platforms using tools such as Perl, Python, and Unix shell
- Apache / IHS (IBM HTTP Server)
- Linux/Windows OS
- Communication
- JSON/YAML
- Shell scripting
- Integration of authentication and authorization methods
- Web-to-JVM communications
- SSL/TLS protocols, cipher suites, and certificates/keystores
- FileNet/BAW installation, configuration, and administration
- Liberty administration
- Troubleshooting
- Integration with database technologies
- Integration with middleware technologies

Good to Have:
- Ansible
- Python
- OpenShift
- Azure DevOps (AZDO) pipelines

Other Requirements (licenses, certifications, specialized training, if required)

Working Relationships
- Internal contacts (and purpose of relationship): MetLife internal partners
- External contacts (and purpose of relationship), if applicable: MetLife external partners

About MetLife
Recognized on Fortune magazine's list of the 2025 "World's Most Admired Companies" and Fortune World's 25 Best Workplaces™ for 2024, MetLife, through its subsidiaries and affiliates, is one of the world's leading financial services companies, providing insurance, annuities, employee benefits and asset management to individual and institutional customers. With operations in more than 40 markets, we hold leading positions in the United States, Latin America, Asia, Europe, and the Middle East. Our purpose is simple: to help our colleagues, customers, communities, and the world at large create a more confident future. United by purpose and guided by empathy, we're inspired to transform the next century in financial services. At MetLife, it's #AllTogetherPossible. Join us!
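Given the SSL/TLS and automation items above, here is a small example of the kind of scripted platform check this role might write, using only the Python standard library (the host name is a placeholder):

```python
# Check days until a server's TLS certificate expires (standard library only).
import socket
import ssl
from datetime import datetime, timezone

def days_until_expiry(host: str, port: int = 443) -> float:
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    expires = datetime.fromtimestamp(
        ssl.cert_time_to_seconds(cert["notAfter"]), tz=timezone.utc
    )
    return (expires - datetime.now(timezone.utc)).total_seconds() / 86400

print(f"{days_until_expiry('example.com'):.1f} days remaining")  # placeholder host
```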
Posted 4 days ago
8.0 years
0 Lacs
Hyderābād
Remote
Company Description
It all started in sunny San Diego, California in 2004 when a visionary engineer, Fred Luddy, saw the potential to transform how we work. Fast forward to today: ServiceNow stands as a global market leader, bringing innovative AI-enhanced technology to over 8,100 customers, including 85% of the Fortune 500®. Our intelligent cloud-based platform seamlessly connects people, systems, and processes to empower organizations to find smarter, faster, and better ways to work. But this is just the beginning of our journey. Join us as we pursue our purpose to make the world work better for everyone.

Job Description
We are seeking a highly skilled and experienced Senior CyberArk PAM Engineer to serve as an engineering lead for the design, architecture, and implementation of our CyberArk Privileged Access Management (PAM) solution. This critical role will drive our corporate configuration efforts, ensuring the PAM platform aligns with our Zero Trust principles, our multi-cloud strategy (AWS, Azure, GCP, private cloud), and deep integration with ServiceNow as the primary gateway for all privileged access. This individual will be instrumental in fortifying our cybersecurity posture and significantly reducing organizational risk.

Key Responsibilities:
- PAM Component Design & Architecture: Lead the end-to-end technical design and architecture of the CyberArk PAM solution, including Privilege Cloud, Secure Cloud Access (SCA), and its integration points within our complex multi-cloud and hybrid environments.
- PAM Policy Definition: Develop, implement, and continuously refine granular PAM policies, including the Master Policy, for Zero Standing Privileges (ZSP) and dynamic, risk-based Just-in-Time (JIT) access for all human and machine identities. This includes establishing specific policies for Windows servers, Linux servers, macOS/endpoints (via EPM), and cloud platforms.
- PAM Connector Deployment & Management: Design, deploy, configure, and secure all customer-hosted CyberArk connector servers (Windows and Linux) across our on-premises and multi-cloud environments, ensuring robust connectivity and performance.
- Application Integrations (Lead): Lead the technical design and initial setup of all critical CyberArk integrations (a hedged REST sketch appears later in this listing), with a strong emphasis on:
  - ServiceNow Integration: Configure CyberArk for seamless integration with ServiceNow MID Servers for secure credential retrieval by ServiceNow applications (Discovery, Orchestration) and for facilitating JIT access workflows initiated from ServiceNow.
  - Secrets Hub Integration: Ensure proper integration with CyberArk Secrets Hub for centralized management of application secrets.
  - SIEM (Splunk) Integration: Design and implement the integration of CyberArk security events and audit data with our Splunk SIEM for centralized monitoring, correlation, and enhanced threat detection.
- Service Health & Monitoring Development: Develop and implement comprehensive service health and monitoring solutions for the CyberArk platform and its connectors, integrating alerts and metrics into ServiceNow's ITSM and SecOps modules for centralized visibility and automated incident creation.
- End-User / Admin Workflow Development: Lead the design and development of ServiceNow-driven privileged access workflows, ensuring they are intuitive, efficient, and aligned with our security objectives.
- Audit & Compliance Configuration: Configure CyberArk's unified audit capabilities, session recording, and reporting to meet stringent audit and regulatory requirements, ensuring data is accessible and correlated within ServiceNow for streamlined compliance reporting.
- AI Integration: Explore and integrate CyberArk CORA AI features for AI-driven policy recommendations and enhanced security intelligence.

Qualifications
To be successful in this role you have:
- Experience in leveraging or critically thinking about how to integrate AI into work processes, decision-making, or problem-solving. This may include using AI-powered tools, automating workflows, analyzing AI-driven insights, or exploring AI's potential impact on the function or industry.
- Bachelor's degree in Computer Science, Information Technology, Cybersecurity, or a related engineering discipline.
- 8+ years of progressive experience in Identity and Access Management (IAM) and cybersecurity, with at least 4-5 years of hands-on experience specifically with CyberArk PAM solutions (Privilege Cloud, PSM, CPM, EPM).

Technical Skills:
- Deep expertise in CyberArk PAM suite architecture, deployment, and administration.
- Strong understanding of Zero Trust principles, least privilege, ZSP, and JIT access methodologies.
- A passion for security.
- Proven experience with cloud platforms (AWS, Azure, GCP) and securing cloud identities/workloads.
- Hands-on experience with ServiceNow integrations (ITSM, SecOps, CMDB, workflows, MID Servers).
- Experience with SIEM integration (Splunk preferred) and security event correlation.
- Familiarity with scripting (e.g., PowerShell, Python) for automation and API integrations.
- Knowledge of Windows, Linux, and macOS operating systems for endpoint and server policy configuration.
- Knowledge of ServiceNow platform development or ServiceNow Studio would be beneficial.
- Ability to work collaboratively in a highly distributed, team-oriented environment where collaboration is encouraged and valued.
- Ability to communicate technical concepts to business stakeholders.
- Ability to demonstrate self-direction, self-learning, and independent problem solving.
- Ability to combine data and knowledge from various sources to determine the optimal approach for planning and execution.

Soft Skills:
- Excellent analytical, problem-solving, and communication skills.
- Ability to consult with various stakeholders, lead technical discussions, and mentor junior team members.

Additional Information
Work Personas
We approach our distributed world of work with flexibility and trust. Work personas (flexible, remote, or required in office) are categories that are assigned to ServiceNow employees depending on the nature of their work and their assigned work location. Learn more here. To determine eligibility for a work persona, ServiceNow may confirm the distance between your primary residence and the closest ServiceNow office using a third-party service.

Equal Opportunity Employer
ServiceNow is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, national origin or nationality, ancestry, age, disability, gender identity or expression, marital status, veteran status, or any other category protected by law. In addition, all qualified applicants with arrest or conviction records will be considered for employment in accordance with legal requirements.
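For flavor, credential retrieval against CyberArk usually goes through the PVWA REST API. The sketch below follows CyberArk's documented endpoint shapes but should be treated as an assumption; verify the paths and response formats against your CyberArk version before relying on it:

```python
# Hedged sketch: authenticate to CyberArk PVWA and search for an account.
# Endpoint paths reflect CyberArk's documented REST API; verify for your version.
import requests

PVWA = "https://pvwa.example.com"  # placeholder

# Logon returns a session token (the response body is the token string
# in recent PVWA versions).
token = requests.post(
    f"{PVWA}/PasswordVault/API/auth/CyberArk/Logon",
    json={"username": "svc_snow", "password": "REDACTED"},
    timeout=30,
).json()

accounts = requests.get(
    f"{PVWA}/PasswordVault/API/Accounts",
    headers={"Authorization": token},
    params={"search": "mid-server"},
    timeout=30,
).json()
print(accounts.get("count"), "matching accounts")
```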
Accommodations We strive to create an accessible and inclusive experience for all candidates. If you require a reasonable accommodation to complete any part of the application process, or are unable to use this online application and need an alternative method to apply, please contact globaltalentss@servicenow.com for assistance. Export Control Regulations For positions requiring access to controlled technology subject to export control regulations, including the U.S. Export Administration Regulations (EAR), ServiceNow may be required to obtain export control approval from government authorities for certain individuals. All employment is contingent upon ServiceNow obtaining any export license or other approval that may be required by relevant export control authorities. From Fortune. ©2025 Fortune Media IP Limited. All rights reserved. Used under license.
Posted 4 days ago
8.0 years
7 - 10 Lacs
Hyderābād
On-site
Job Description Summary
You will be part of the core team at GE Vernova's Grid Software business, driving the energy transition for the planet by designing, building and delivering software applications and services for next-generation grid software that orchestrates 40% of the world's power today. You will work with a global team to implement WAMS for customers. You will be part of a scrum team and be responsible for requirement analysis, software customization, integration, testing and documentation.

Job Description

Roles and Responsibilities
In this role, you will:
- Work in a scrum team to implement WAMS capability for electricity flow orchestration for GE's customers.
- Provide technical leadership on Java/Spring Boot and related technologies on the cloud to develop these next-gen capabilities.
- Apply proven expertise in front-end development using Angular (version 8+ preferred).
- Work with AWS and Kubernetes technologies to deploy these next-gen capabilities, including managing Kubernetes deployments using Helm charts.
- Apply experience with unit testing, mocking frameworks, and test containers.
- Apply strong experience with message streaming and event-driven systems using Kafka (a consumer sketch follows at the end of this listing).
- Work with PostgreSQL or similar relational databases.
- Use version control and collaborative development with GitHub.
- Use basic Linux commands for system operations and manage build processes using Maven.
- Work with CI/CD pipelines and project management in Azure DevOps, and with Jenkins for automating builds and deployments.
- Work with AWS or similar cloud platforms.
- Apply principles of the SDLC and methodologies like Lean/Agile/XP, CI, software and product security, scalability, documentation practices, refactoring and testing techniques.
- Understand customer requirements for interfaces and existing product features, and develop customizations to deliver the desired functionality using the technology selected for the project.
- Understand performance parameters and assess application performance.
- Work on core data structures and algorithms and implement them using the language of choice.
- Working with a global team of experts, build the local team's expertise and create a higher positive impact on services project implementations.
- Proactively share information across the team, to the right audience, with the appropriate level of detail and timeliness.

Required Qualifications:
- Master's degree, or foreign degree equivalent, in Electrical Engineering, plus 8 years of experience in a related power systems occupation.
- 5 years of experience in delivering software projects/services.
- 2 years of experience leading project deliveries in Wide Area Measurement System (WAMS) applications.
- 3 years of experience with electric utility industry practices.
- 3 years of experience with grid stability.
- 3 years of experience with Phasor Measurement Units.
- 3 years of experience with power systems with a focus on WAMS applications like phasor analytics or Linear State Estimator.
- Advanced experience with microservices architecture and web services (REST, SOAP).
- Advanced experience with containerization technologies such as Docker, Kubernetes, and Helm.
- Experience with web development using JavaScript, ideally TypeScript and Angular.
- Strong understanding of designing web applications in distributed architectures.
- Proficiency in key algorithms and data structures.
- Proficiency working in both Windows and Linux environments.
- Working knowledge of databases, preferably SQL.
- Hands-on experience writing unit test automation.
- Hands-on experience with a scripting language such as Python and/or PowerShell, a package manager like Conan, and source control tools like Git.
- Hands-on experience with microservices and AWS (an added advantage).
- Familiarity with CI/CD and Azure DevOps pipelines.
- Readiness to work on an existing product, with strong troubleshooting/debugging skills and the ability to resolve complex technical and customer feedback.
- Experience working in the power grid domain on products like AEMS is a plus.

Business Acumen:
- Able to break down problems and estimate time for custom development tasks.
- Understands the technology landscape, stays up to date on current technology trends and new technology, and brings new ideas to the team.
- Displays understanding of the project's value proposition for the customer and shows commitment to deliver the best value proposition for the targeted customer.
- Learns the organization's vision statement and decision-making framework, and understands how team and personal goals/objectives contribute to the organization's vision.

Personal/Leadership Attributes:
- Voices opinions and presents clear rationale; uses data or factual evidence to influence.
- Completes assigned tasks on time and with high quality; takes independent responsibility for assigned deliverables.
- Seeks to understand problems thoroughly before implementing solutions, and asks questions to clarify requirements when ambiguities are present.
- Identifies opportunities for innovation and offers new ideas; adapts to new environments and changing requirements, pivoting quickly as needed.
- When coached, responds to needs and seeks information from other sources.
- Writes code that meets standards and delivers desired functionality using the technology selected for the project.
- Strong oral and written communication skills; effective team building and problem-solving abilities.
- Persists to completion, especially in the face of overwhelming odds and setbacks; pushes self for results and pushes others for results through team spirit.

Additional Information
Relocation Assistance Provided: Yes
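As an illustration of the Kafka streaming experience listed above, here is a minimal Python consumer that flags off-nominal grid frequency; the topic name and message schema are invented for the example:

```python
# Minimal Kafka consumer flagging off-nominal grid frequency (kafka-python).
# Topic name and JSON schema are hypothetical, for illustration only.
import json
from kafka import KafkaConsumer

NOMINAL_HZ, TOLERANCE_HZ = 50.0, 0.2

consumer = KafkaConsumer(
    "pmu-measurements",                       # hypothetical topic
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for msg in consumer:
    m = msg.value                             # e.g. {"pmu": "PMU-7", "freq_hz": 49.7}
    if abs(m["freq_hz"] - NOMINAL_HZ) > TOLERANCE_HZ:
        print(f"frequency deviation at {m['pmu']}: {m['freq_hz']} Hz")
```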
Posted 4 days ago
6.0 years
29 Lacs
Hyderābād
On-site
Requirements:
1. Cloud (Mandatory):
- Proven technical experience with AWS or Azure, including scripting, migration, and automation.
- Hands-on knowledge of services and implementation such as Landing Zone, centralized networking (AWS Transit Gateway / Azure Virtual WAN), serverless (AWS Lambda / Azure Functions), EC2 / Virtual Machines, S3 / Blob Storage, VPC / Virtual Network, IAM, SCPs / Azure Policies, monitoring (CloudWatch / Azure Monitor), SecOps, FinOps, etc. (a tag-audit sketch follows at the end of this listing).
- Experience with migration strategies and tools such as AWS MGN, Database Migration Service, and Azure Migrate.
- Experience in scripting languages such as Python, Bash, Ruby, Groovy, Java, or JavaScript.
2. Automation (Mandatory):
- Hands-on experience with Infrastructure-as-Code (IaC) and configuration management tools such as Terraform, CloudFormation, Azure ARM, Bicep, Ansible, Chef, or Puppet.
3. CI/CD (Mandatory):
- Hands-on experience in setting up or developing CI/CD pipelines using tools such as (but not limited to) GitHub Actions, GitLab CI, Azure DevOps, Jenkins, or AWS CodePipeline.
4. Containers & Orchestration (Good to have):
- Hands-on experience in provisioning and managing containers and orchestration solutions such as Docker and Docker Swarm, Kubernetes (private/public cloud platforms), OpenShift, and Helm charts.

Certification Expectations
1. Cloud (Mandatory, any of): AWS Certified SysOps Administrator – Associate; AWS Certified Solutions Architect – Associate; AWS Certified Developer – Associate; any AWS Professional/Specialty certification(s).
2. Automation (Optional, any of): Red Hat Certified Specialist in Ansible Automation; HashiCorp Certified: Terraform Associate.
3. CI/CD (Optional): GitLab Certified CI/CD Associate; GitHub Actions Certification.
4. Containers & Orchestration (Optional, any of): CKA (Certified Kubernetes Administrator); Red Hat Certified Specialist in OpenShift Administration.

Responsibilities:
- Lead architecture and design discussions with architects and clients.
- Apply technology best practices and AWS frameworks such as the Well-Architected Framework.
- Implement solutions with an emphasis on cloud security, cost optimization, and automation.
- Manage customer engagements and lead teams to deliver high-quality solutions on time.
- Identify work opportunities and collaborate with leadership to grow accounts.
- Own project delivery to ensure successful outcomes and positive customer experiences.
- Initiate proactive meetings with leads and extended teams to highlight gaps, delays, or other challenges.
- Serve as a subject matter expert in technology; train and mentor the team in functional and technical skills, and provide guidance on the career progression of team members.
- Support application teams: work with application development teams to design, implement and, where necessary, automate infrastructure on cloud platforms.
- Drive continuous improvement: some engagements will require you to support and maintain existing cloud environments, with an emphasis on continuously innovating through automation and enhancing stability and availability through monitoring and improving the security posture.
- Drive internal practice development initiatives to promote growth and innovation within the team.
- Contribute to internal assets such as technical documentation, blogs, and reusable code components.

Job Types: Full-time, Permanent
Pay: Up to ₹2,900,000.00 per year
Experience: total: 6 years (Required)
Work Location: In person
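A small example of the scripting-plus-FinOps combination these requirements describe: finding running EC2 instances that are missing a cost-allocation tag with boto3 (the tag key is a placeholder):

```python
# Flag running EC2 instances missing a cost-allocation tag (boto3).
import boto3

REQUIRED_TAG = "CostCenter"  # placeholder tag key

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")
for page in paginator.paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            tags = {t["Key"] for t in inst.get("Tags", [])}
            if REQUIRED_TAG not in tags:
                print("untagged instance:", inst["InstanceId"])
```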
Posted 4 days ago
5.0 years
2 - 8 Lacs
Hyderābād
On-site
Job Description:

Key job responsibilities
- Develop and enhance Salesforce applications using Apex and Lightning.
- Work with the Lightning Framework, building dynamic components and leveraging the Salesforce Lightning Design System.
- Design and implement Sales and Service Cloud solutions within an Agile development environment.
- Drive custom development and integration across Salesforce and third-party platforms.
- Build scalable solutions using LWCs, Apex, Web Services, and APIs (a REST query sketch follows at the end of this listing).
- Collaborate with cross-functional Scrum teams to enhance and maintain Salesforce applications.
- Diagnose, track, and manage quality issues to resolution.
- Troubleshoot and resolve production issues, ensuring seamless user experiences.
- Apply object-oriented design and knowledge of design patterns and trigger/integration frameworks.
- Research and apply Salesforce best practices to enhance system performance and scalability.
- Develop custom user interfaces, including Lightning pages and Lightning Web Components.
- Support detailed functional design documents that match business requirements.
- Collaborate with a team of architects, developers, and engineers to determine the most appropriate technical strategy and designs to meet business needs.
- Contribute to technical discussions, influencing smart decisions around configuration vs. custom development.

Basic Qualifications
- 5 years of professional experience administering and developing SaaS applications.
- Strong grasp of Salesforce Service and Sales Cloud, e.g., products, opportunities, leads, cases, omni-channel routing, entitlements, reporting, security/sharing, automation, and importing data.
- Experience with JSON, REST APIs, Web Services, and Platform Events.
- At least 3 years working with Salesforce Sales and Service Cloud in a developer role.
- Strong proficiency in Lightning Web Components (LWC) and Aura.
- Good understanding of Salesforce DX.
- Bachelor's degree in computer science or a related field.
- Salesforce Certified Administrator and Platform Developer I; Platform Developer II preferred.
- Experience working with CI/CD pipelines and automated deployment processes.
- Excellent verbal and written communication skills with the ability to engage confidently with leads, business users, and cross-functional teams.
- Strong analytical and problem-solving skills with a passion for learning and exploring new technologies.
- Ability to work independently, take initiative, and be highly proactive in a fast-paced environment.
- Experience using source control management systems (e.g., Git, Bitbucket) and working with Git and GitHub.
- Experience working with Visual Studio Code or an equivalent IDE.
- Experience with Salesforce data tools (Data Loader, Jitterbit, or any other ETL tool).

Preferred Qualifications
- Experience working with distributed systems at scale.
- A passion for solving complex technical challenges and optimizing performance.
- Hands-on experience in automating, deploying, and supporting large-scale infrastructure.
- Experience exploring and integrating AI/ML tools and technologies to add value to business solutions.
- Knowledge of DevOps tools and practices including CI/CD, container orchestration, and cloud platforms (AWS, Azure, GCP).
- Experience in the telecom industry or familiarity with telecom business processes.

Deliverables & Results
- Deliver fully functional business logic with all best practices incorporated.
- Ensure existing functionality remains intact and is not impacted by new implementations.
- Implement all new triggers using the identified trigger framework.
- Deliver effective unit test cases with 85% coverage, covering positive, negative, boundary, and load-testing scenarios.
- Document all key designs and write the technical design document for each new implementation.
- Deploy new functionality to higher sandbox environments.

Weekly Hours: 40
Time Type: Regular
Location: Hyderabad, Andhra Pradesh, India

It is the policy of AT&T to provide equal employment opportunity (EEO) to all persons regardless of age, color, national origin, citizenship status, physical or mental disability, race, religion, creed, gender, sex, sexual orientation, gender identity and/or expression, genetic information, marital status, status with regard to public assistance, veteran status, or any other characteristic protected by federal, state or local law. In addition, AT&T will provide reasonable accommodations for qualified individuals with disabilities. AT&T is a fair chance employer and does not initiate a background check until an offer is made.
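To ground the REST API and JSON items above, here is a hedged sketch of querying Salesforce's REST API from Python; the instance URL, API version, and token acquisition are placeholders:

```python
# Run a SOQL query against Salesforce's REST API (requests).
import requests

INSTANCE = "https://example.my.salesforce.com"  # placeholder
TOKEN = "REDACTED"  # obtained via an OAuth 2.0 flow
SOQL = "SELECT Id, Subject, Status FROM Case WHERE IsClosed = false LIMIT 5"

resp = requests.get(
    f"{INSTANCE}/services/data/v59.0/query",  # verify the API version for your org
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"q": SOQL},
    timeout=30,
)
resp.raise_for_status()
for rec in resp.json()["records"]:
    print(rec["Id"], rec["Status"], rec["Subject"])
```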
Posted 4 days ago
5.0 - 9.0 years
7 - 8 Lacs
Hyderābād
On-site
Join Amgen's Mission of Serving Patients
At Amgen, if you feel like you're part of something bigger, it's because you are. Our shared mission, to serve patients living with serious illnesses, drives all that we do. Since 1980, we've helped pioneer the world of biotech in our fight against the world's toughest diseases. With our focus on four therapeutic areas (Oncology, Inflammation, General Medicine, and Rare Disease), we reach millions of patients each year. As a member of the Amgen team, you'll help make a lasting impact on the lives of patients as we research, manufacture, and deliver innovative medicines to help people live longer, fuller, happier lives. Our award-winning culture is collaborative, innovative, and science-based. If you have a passion for challenges and the opportunities that lie within them, you'll thrive as part of the Amgen team. Join us and transform the lives of patients while transforming your career.

Data Engineer

What you will do
Let's do this. Let's change the world. In this vital role you will be responsible for designing, building, maintaining, analyzing, and interpreting data to provide actionable insights that drive business decisions. This role involves working with large datasets, developing reports, supporting and executing data governance initiatives, and visualizing data to ensure data is accessible, reliable, and efficiently managed. The ideal candidate has strong technical skills, experience with big data technologies, and a deep understanding of data architecture and ETL processes.

Roles & Responsibilities:
- Design, develop, and maintain data solutions for data generation, collection, and processing.
- Be a key team member that assists in the design and development of the data pipeline.
- Create data pipelines and ensure data quality by implementing ETL processes to migrate and deploy data across systems (a minimal ETL sketch follows at the end of this listing).
- Contribute to the design, development, and implementation of data pipelines, ETL/ELT processes, and data integration solutions.
- Take ownership of data pipeline projects from inception to deployment; manage scope, timelines, and risks.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Develop and maintain data models, data dictionaries, and other documentation to ensure data accuracy and consistency.
- Implement data security and privacy measures to protect sensitive data.
- Leverage cloud platforms (AWS preferred) to build scalable and efficient data solutions.
- Collaborate and communicate effectively with product teams.
- Collaborate with Data Architects, Business SMEs, and Data Scientists to design and develop end-to-end data pipelines that meet fast-paced business needs across geographic regions.
- Identify and resolve complex data-related challenges.
- Adhere to best practices for coding, testing, and designing reusable code/components.
- Explore new tools and technologies that will help improve ETL platform performance.
- Participate in sprint planning meetings and provide estimates on technical implementation.

What we expect of you
We are all different, yet we all use our unique contributions to serve patients.

Basic Qualifications and Experience:
- Master's degree or Bachelor's degree and 5 to 9 years of Computer Science, IT, or related field experience.

Functional Skills:
Must-Have Skills
- Hands-on experience with big data technologies and platforms such as Databricks and Apache Spark (PySpark, Spark SQL), including workflow orchestration and performance tuning of big data processing.
- Proficiency in data analysis tools (e.g., SQL) and experience with data visualization tools.
- Excellent problem-solving skills and the ability to work with large, complex datasets.
- Knowledge of data protection regulations and compliance requirements (e.g., GDPR, CCPA).

Good-to-Have Skills:
- Experience with ETL tools such as Apache Spark, and various Python packages related to data processing and machine learning model development.
- Strong understanding of data modeling, data warehousing, and data integration concepts.
- Knowledge of Python/R, Databricks, SageMaker, and cloud data platforms.

Soft Skills:
- Excellent critical-thinking and problem-solving skills.
- Strong communication and collaboration skills.
- Demonstrated awareness of how to function in a team setting.
- Demonstrated presentation skills.

What you can expect of us
As we work to develop treatments that take care of others, we also work to care for your professional and personal growth and well-being. From our competitive benefits to our collaborative culture, we'll support your journey every step of the way. In addition to the base salary, Amgen offers competitive and comprehensive Total Rewards Plans that are aligned with local industry standards. Apply now and make a lasting impact with the Amgen team. careers.amgen.com

As an organization dedicated to improving the quality of life for people around the world, Amgen fosters an inclusive environment of diverse, ethical, committed and highly accomplished people who respect each other and live the Amgen values to continue advancing science to serve patients. Together, we compete in the fight against serious disease. Amgen is an Equal Opportunity employer and will consider all qualified applicants for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, protected veteran status, disability status, or any other basis protected by applicable law. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.
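A compact sketch of the PySpark-style ETL step described above; the file paths and column names are illustrative:

```python
# Minimal PySpark ETL: read raw CSV, standardize, write partitioned Parquet.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

raw = spark.read.option("header", True).csv("/data/raw/events.csv")  # illustrative path

clean = (
    raw.withColumn("event_ts", F.to_timestamp("event_ts"))
       .withColumn("event_date", F.to_date("event_ts"))
       .dropDuplicates(["event_id"])          # basic data-quality step
       .filter(F.col("event_ts").isNotNull())
)

clean.write.mode("overwrite").partitionBy("event_date").parquet("/data/curated/events")
```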
Posted 4 days ago
2.0 - 3.0 years
2 - 6 Lacs
Hyderābād
Remote
We are looking for a highly motivated and skilled Generative AI (GenAI) Developer to join our dynamic team. You will be responsible for building and deploying GenAI solutions using large language models (LLMs) to address real-world business challenges. The role involves working with cross-functional teams, applying prompt engineering and fine-tuning techniques, and building scalable AI-driven applications. A strong foundation in machine learning and NLP, and a passion for emerging GenAI technologies, are essential.
Responsibilities: Design, develop, and implement GenAI solutions in Python using large language models (LLMs) to address specific business needs. Collaborate with stakeholders to identify opportunities for GenAI integration and translate requirements into scalable solutions. Preprocess and analyze unstructured data (text, documents, etc.) for model training, fine-tuning, and evaluation. Apply prompt engineering, fine-tuning, and RAG (Retrieval-Augmented Generation) techniques to optimize LLM outputs. Deploy GenAI models and APIs into production environments, ensuring performance, scalability, and reliability. Monitor and maintain deployed solutions, incorporating improvements based on feedback and real-world usage. Stay up to date with the latest advancements in GenAI, LLMs, and orchestration tools (e.g., LangChain, LlamaIndex). Write clean, maintainable, and well-documented code, and contribute to team-wide code reviews and best practices.
Requirements: 2–3 years of proven, relevant experience as an AI Developer. Proficiency in Python. Good understanding of multiple GenAI models (OpenAI, Llama 2, Mistral) and the ability to set up local GPTs using tools such as Ollama and LM Studio. Experience with LLMs, RAG (Retrieval-Augmented Generation), and vector databases (e.g., FAISS, Pinecone). Experience with multi-agent frameworks for creating workflows, such as LangChain or similar tools like LlamaIndex and LangGraph. Knowledge of machine learning frameworks, libraries, and tools. Excellent problem-solving skills and a solution mindset. Strong communication and teamwork skills. Ability to work independently and manage one's time effectively. Experience with any of the major cloud platforms (AWS, GCP, Azure).
Benefits: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centres. Work-Life Balance: Accellor prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication-skills training, stress-management programs, professional certifications, and technical and soft-skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, personal accident insurance, periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses.
Disclaimer: Accellor is proud to be an equal opportunity employer. We do not discriminate in hiring or any employment decision based on race, color, religion, national origin, age, sex (including pregnancy, childbirth, or related medical conditions), marital status, ancestry, physical or mental disability, genetic information, veteran status, gender identity or expression, sexual orientation, or any other applicable legally protected characteristic.
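As a hedged illustration of the RAG loop this role revolves around (not part of the posting), the sketch below pairs FAISS with the OpenAI Python SDK; the documents, model names, and question are hypothetical:

```python
import numpy as np
import faiss
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def embed(texts: list[str]) -> np.ndarray:
    # Embed a batch of texts; the model name is one common choice, not mandated.
    resp = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([d.embedding for d in resp.data], dtype="float32")

# 1) Index a tiny, hypothetical document set.
docs = [
    "Refunds are processed within 5 business days.",
    "Support is available Monday through Friday.",
]
vectors = embed(docs)
index = faiss.IndexFlatL2(vectors.shape[1])
index.add(vectors)

# 2) Retrieve the closest document for a user question.
question = "How long do refunds take?"
_, ids = index.search(embed([question]), 1)
context = docs[ids[0][0]]

# 3) Ground the LLM answer in the retrieved context.
answer = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": f"Answer using only this context:\n{context}\n\nQ: {question}",
    }],
)
print(answer.choices[0].message.content)
```

Production systems would add chunking, metadata filtering, and evaluation on top of this primitive, but the retrieve-then-generate shape stays the same.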
Posted 4 days ago
3.0 years
2 - 7 Lacs
Hyderābād
On-site
DESCRIPTION
Are you an innovative and accomplished professional seeking a role with significant impact and growth? Amazon is looking for a dynamic Software Development Engineer to join our Core Services team under the Worldwide Customer Purchase Journey organization. The Shipping and Region Authority (SARA) organization innovates on foundational products that shape the customer shopping journey, beginning from the gateway page of their visit through search and discovery experiences. SARA’s products also help drive checkout and fulfillment customer experiences. Through a complex orchestration of its four domains (Shipping, Regions, Locations, Restrictions), SARA influences and frames the shopping CX. Our systems are architected for scale and consistency, offering configurable, flexible, and global solutions (standardized globally but customized for local regulations). We integrate with multiple cross-technology and functional services to identify customer locations, determine shipping options, and apply sales and shipping restrictions. In this role, you will scope complex projects and deliver simple, elegant solutions by collecting product and business requirements, driving the development schedule from design to release, making appropriate trade-offs to optimize time-to-market, and clearly communicating goals, roles, responsibilities, and desired outcomes to internal cross-functional teams. You will interact with a broad cross-section of the Amazon organization, clarify ambiguous issues, and negotiate effective technical solutions between development and business teams. You will anticipate bottlenecks and escalate issues when required to ensure on-time delivery. This role requires a seasoned individual with excellent experience as a Software Development Engineer for distributed SOA software systems and the ability to guide high-level technical design while considering potential future areas of fraud our platform might encounter.
Key job responsibilities
Collaborate with experienced cross-disciplinary Amazonians to conceive, design, and bring innovative products and services to market. Design and build innovative technologies in a large distributed computing environment and help lead fundamental changes in the industry. Create solutions to run predictions on distributed systems with exposure to innovative technologies at incredible scale and speed. Build distributed storage, index, and query systems that are scalable, fault-tolerant, low cost, and easy to manage/use. Design and code the right solutions starting with broadly defined problems. Work in an agile environment to deliver high-quality software.
BASIC QUALIFICATIONS
3+ years of non-internship professional software development experience. 2+ years of non-internship design or architecture (design patterns, reliability and scaling) of new and existing systems experience. 3+ years of Video Games Industry (supporting title Development, Release, or Live Ops) experience. Experience programming with at least one software programming language. Bachelor's degree or equivalent.
PREFERRED QUALIFICATIONS
3+ years of full software development life cycle experience, including coding standards, code reviews, source control management, build processes, testing, and operations. Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Job details IND, TS, Hyderabad Software Development
Posted 4 days ago
2.0 years
7 - 7 Lacs
Gurgaon
On-site
Job Purpose
As a key member of the DTS team, we are seeking an exceptionally talented software developer with a strong background in building robust and scalable backend systems who is willing to relocate to Costa Rica. You will be part of a team that plays a critical role in supporting research and portfolio generation through advanced technology solutions. The team is involved throughout the entire software development lifecycle—from planning and development to deployment and operations—and also provides second-line production support.
Desired Skills and Experience
Essential skills: 2+ years of hands-on experience as a developer. Strong fundamentals in data structures and algorithms. Knowledge of Python and Linux/Unix platforms; familiarity with scripting languages. Experience designing and maintaining distributed system architectures. Hands-on experience developing and maintaining backend services in Python. Familiarity with data processing and orchestration technologies such as Spark, Kafka, Airflow, and Kubernetes. Experience with monitoring tools like Prometheus, Grafana, Sentry, and Alerta. Experience in finance is a plus.
Key Responsibilities
Design and develop scalable, robust software applications with a focus on backend systems and data-intensive workflows. Build and maintain complex data pipelines and frameworks for strategy and performance analytics. Work with technologies such as Spark, Kafka, Kubernetes, and modern monitoring tools. Apply strong debugging and problem-solving skills to ensure system reliability and performance. Demonstrate a solid understanding of data structures, algorithms, object-oriented programming, and MVC web frameworks. Operate effectively in Unix/Linux environments with exposure to caching tools, queuing systems, and data visualization platforms.
Key Metrics: Python, Spark; Software Engineer; Data Structures and Algorithms.
Behavioral Competencies: Good communication in English (verbal and written), critical thinking, attention to detail, and experience managing client stakeholders.
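A hedged sketch of the backend pattern this posting combines (consuming Kafka events while exposing Prometheus metrics), assuming the kafka-python and prometheus_client packages; the topic, metric names, and handler are hypothetical:

```python
import json
from kafka import KafkaConsumer              # kafka-python package
from prometheus_client import Counter, start_http_server

# Hypothetical metric names -- for illustration only.
PROCESSED = Counter("events_processed_total", "Events consumed and processed")
FAILED = Counter("events_failed_total", "Events that raised during processing")

def handle(event: dict) -> None:
    # Placeholder for real analytics logic (e.g., strategy performance rollups).
    print(event.get("id"))

if __name__ == "__main__":
    start_http_server(8000)  # Prometheus scrapes http://host:8000/metrics
    consumer = KafkaConsumer(
        "strategy-events",                   # hypothetical topic name
        bootstrap_servers="localhost:9092",
        value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    )
    for msg in consumer:
        try:
            handle(msg.value)
            PROCESSED.inc()
        except Exception:
            FAILED.inc()  # alert rules in Grafana would key off this counter
```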
Posted 4 days ago
7.0 years
0 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are seeking an experienced Lead Platform Engineer to join our Automation Engineering team. The ideal candidate will excel in cloud infrastructure automation, generative AI, and machine learning, with a strong foundation in DevOps practices and modern scripting tools. This role involves designing cutting-edge AI-driven solutions for AIOps while innovating cloud automation processes to optimize operational efficiency.
Responsibilities
Design and develop automated workflows for cloud infrastructure provisioning using IaC tools like Terraform. Build frameworks to support deployment, configuration, and management across diverse cloud environments. Develop and manage service catalog components, ensuring integration with platforms like Backstage. Implement GenAI models to enhance service catalog functionality and code quality across automation pipelines. Design and implement CI/CD pipelines and maintain CI pipeline code for cloud automation use cases. Write scripts to support cloud deployment orchestration using Python, Bash, or other scripting languages. Design and deploy generative AI models for AIOps applications such as anomaly detection and predictive maintenance. Work with frameworks like LangChain or cloud platforms such as Bedrock, Vertex AI, and Azure AI to deploy RAG workflows. Build and optimize vector databases and document sources using tools like OpenSearch, Amazon Kendra, or equivalent solutions. Prepare and label data for generative AI models, ensuring scalability and integrity. Create agentic workflows using frameworks like LangGraph or cloud GenAI platforms such as Bedrock Agents. Integrate generative AI models with operational systems and AIOps platforms for enhanced automation. Evaluate AI model performance and ensure continuous optimization over time. Develop and maintain MLOps pipelines to monitor and mitigate model decay. Collaborate with cross-functional teams to drive innovation and improve cloud automation processes. Research and recommend new tools and best practices to enhance operational efficiency.
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 7+ years of experience in cloud infrastructure automation, scripting, and DevOps. Strong proficiency in IaC tools like Terraform, CloudFormation, or similar. Expertise in Python, cloud AI frameworks such as LangChain, and generative AI workflows. Demonstrated background in developing and deploying AI models such as RAG or transformers. Proficiency in building vector databases and document sources using solutions like OpenSearch or Amazon Kendra. Competency in preparing and labeling datasets for AI models and optimizing data inputs. Familiarity with cloud platforms including AWS, Google Cloud, or Azure. Capability to implement MLOps pipelines and monitor AI system performance.
Nice to have
Knowledge of agentic architectures such as ReAct and flow-engineering techniques. Background in using Bedrock Agents or LangGraph for workflow creation. Understanding of integrating generative AI into legacy or complex operational systems.
We offer
Opportunity to work on technical challenges that may impact across geographies. Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications. Opportunity to share your ideas on international platforms. Sponsored Tech Talks & Hackathons. Unlimited access to LinkedIn learning solutions. Possibility to relocate to any EPAM office for short and long-term projects. Focused individual development. Benefit package: health benefits, retirement benefits, paid time off, flexible benefits. Forums to explore beyond-work passions (CSR, photography, painting, sports, etc.)
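For a concrete anchor on the AIOps-style GenAI calls described above, here is a minimal sketch using boto3's Bedrock Runtime Converse API; the model ID, region, and alert text are illustrative assumptions rather than anything specified by the posting:

```python
import boto3

# Minimal sketch: send an operational signal to a Bedrock foundation model
# for triage. Whichever model the team standardizes on would go in modelId.
bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

alert_log = "disk_usage=91% host=web-3 trend=rising"  # hypothetical signal

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",  # example model ID
    messages=[{
        "role": "user",
        "content": [{"text": f"Classify this alert and suggest a next step:\n{alert_log}"}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```

A real AIOps workflow would wrap this call with retrieval over runbooks (the RAG piece) and feed the output into an automation or ticketing system.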
Posted 4 days ago
3.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
EY GDS – Data and Analytics (D&A) – SSIS – Senior
We’re looking for Informatica or SSIS Engineers with a cloud background (AWS, Azure).
Primary skills:
Has played key roles in multiple large global transformation programs on business process management. Experience in database querying using SQL. Should have experience building and integrating data into a data warehouse. Experience in data profiling and reconciliation. Informatica PowerCenter / IBM DataStage / SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. Developed expertise in complex data management or application integration solutions and deployment in areas of data migration, data integration, application integration, or data quality. Experience in data processing, orchestration, parallelization, transformations, and ETL fundamentals. Leverages a variety of programming languages and data crawling/processing tools to ensure data reliability, quality, and efficiency (optional). Experience with cloud data-related tools (Microsoft Azure, Amazon S3, or data lakes). Knowledge of cloud infrastructure, and knowledge of Talend Cloud is an added advantage. Knowledge of data modelling principles. Knowledge of Autosys scheduling. Good experience in database technologies. Good knowledge of Unix systems.
Responsibilities:
Work as a team member contributing to the various technical streams of data integration projects. Provide product- and design-level technical best practices. Interface and communicate with the onsite coordinators. Complete assigned tasks on time and report status regularly to the lead. Build a quality culture. Use an issue-based approach to deliver growth, market, and portfolio strategy engagements for corporates. Strong communication, presentation, and team-building skills, and experience producing high-quality reports, papers, and presentations. Experience executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint.
Qualification:
BE/BTech/MCA (must) with industry experience of 3–7 years. Experience in Talend jobs, joblets, and custom components. Should have knowledge of error handling and performance tuning in Talend. Experience in big data technologies such as Sqoop, Impala, Hive, YARN, Spark, etc. Informatica PowerCenter / IBM DataStage / SSIS development. Strong proficiency in SQL/PLSQL. Good experience in performance tuning ETL workflows and suggesting improvements. At least 3–4 client engagements on short-duration projects of 6–8+ months, OR at least 2 client engagements on projects lasting 1–2 years or more. People with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment.
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate.
Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
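Since the posting emphasizes data profiling and reconciliation, here is a toy sketch of a count-and-checksum comparison between a source and a target table; sqlite3 stands in for the real source/warehouse connections, and the tables and columns are hypothetical:

```python
import sqlite3

# Toy reconciliation: compare row counts and an amount checksum between a
# "source" and a "target" table after an ETL load.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE src(id INTEGER, amount REAL);
    CREATE TABLE tgt(id INTEGER, amount REAL);
    INSERT INTO src VALUES (1, 10.0), (2, 20.5);
    INSERT INTO tgt VALUES (1, 10.0), (2, 20.5);
""")

def profile(table: str) -> tuple:
    # Row count plus a simple sum checksum; real jobs add per-column profiles.
    return con.execute(
        f"SELECT COUNT(*), ROUND(SUM(amount), 2) FROM {table}"
    ).fetchone()

src_stats, tgt_stats = profile("src"), profile("tgt")
status = "MATCH" if src_stats == tgt_stats else "MISMATCH"
print(f"source={src_stats} target={tgt_stats} -> {status}")
```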
Posted 4 days ago
3.0 - 8.0 years
0 Lacs
Gurgaon
On-site
EPAM is a leading global provider of digital platform engineering and development services. We are committed to having a positive impact on our customers, our employees, and our communities. We embrace a dynamic and inclusive culture. Here you will collaborate with multi-national teams, contribute to a myriad of innovative projects that deliver the most creative and cutting-edge solutions, and have an opportunity to continuously learn and grow. No matter where you are located, you will join a dedicated, creative, and diverse community that will help you discover your fullest potential.
We are seeking a skilled Platform Engineer to join our Automation Engineering team, bringing expertise in cloud infrastructure automation, DevOps, scripting, and advanced AI/ML practices. The role focuses on integrating generative AI into automation workflows, enhancing operational efficiency, and supporting cloud-first initiatives.
Responsibilities
Design cloud automation workflows using Infrastructure-as-Code tools such as Terraform or CloudFormation. Build scalable frameworks to manage infrastructure provisioning, deployment, and configuration across multiple cloud platforms. Create service catalog components compatible with automation platforms like Backstage. Integrate generative AI models to improve service catalog functionalities, including automated code generation and validation. Architect CI/CD pipelines for automated build, test, and deployment processes. Maintain deployment automation scripts utilizing technologies such as Python or Bash. Implement generative AI models (e.g., RAG, agent-based workflows) for AIOps use cases like anomaly detection and root cause analysis. Employ AI/ML tools such as LangChain, Bedrock, Vertex AI, or Azure AI for advanced generative AI solutions. Develop vector databases and document sources using services like Amazon Kendra, OpenSearch, or custom solutions. Engineer data pipelines to stream real-time operational insights that support AI-driven automation. Build MLOps pipelines to deploy and monitor generative AI models, ensuring optimal performance and avoiding model decay. Select appropriate LLM models for specific AIOps use cases and integrate them effectively into workflows. Collaborate with cross-functional teams to design and refine automation and AI-driven processes. Research emerging tools and technologies to enhance operational efficiency and scalability.
Requirements
Bachelor's or Master's degree in Computer Science, Engineering, or a related field. 3–8 years of experience in cloud infrastructure automation, DevOps, and scripting. Proficiency with Infrastructure-as-Code tools such as Terraform or CloudFormation. Expertise in Python and generative AI patterns like RAG and agent-based workflows. Knowledge of cloud-based AI services, including Bedrock, Vertex AI, or Azure AI. Familiarity with vector databases like Amazon Kendra, OpenSearch, or custom database solutions. Competency in data engineering tasks such as feature engineering, labeling, and real-time data streaming. Proven track record in creating and maintaining MLOps pipelines for AI/ML models in production environments.
Nice to have
Background in flow-engineering tools such as LangGraph or platform-specific workflow orchestration tools. Understanding of comprehensive AIOps processes to refine cloud-based automation solutions.
We offer
Opportunity to work on technical challenges that may impact across geographies. Vast opportunities for self-development: online university, knowledge sharing opportunities globally, learning opportunities through external certifications. Opportunity to share your ideas on international platforms. Sponsored Tech Talks & Hackathons. Unlimited access to LinkedIn learning solutions. Possibility to relocate to any EPAM office for short and long-term projects. Focused individual development. Benefit package: health benefits, retirement benefits, paid time off, flexible benefits. Forums to explore beyond-work passions (CSR, photography, painting, sports, etc.)
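To illustrate the model-decay monitoring this role calls for, a minimal sketch of a scheduled drift check; the score distributions are synthetic stand-ins for a real evaluation signal (e.g., RAG answer-relevance scores), and the threshold is an assumption:

```python
import numpy as np
from scipy.stats import ks_2samp

# Minimal model-decay check of the sort an MLOps pipeline might run daily:
# compare the recent distribution of a quality signal against a baseline
# window and alert on statistically significant drift.
rng = np.random.default_rng(0)
baseline = rng.normal(0.82, 0.05, 500)   # hypothetical historical scores
recent = rng.normal(0.74, 0.05, 500)     # hypothetical last-24h scores

stat, p_value = ks_2samp(baseline, recent)
if p_value < 0.01:                        # assumed alerting threshold
    print(f"Drift detected (KS={stat:.3f}, p={p_value:.2e}) - trigger review/retrain")
else:
    print("No significant drift")
```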
Posted 4 days ago
3.0 years
0 Lacs
Gurgaon
On-site
System Administrator Job Description
We aim to bring about a new paradigm in medical image diagnostics: providing intelligent, holistic, ethical, explainable and patient-centric care. We are looking for innovative problem solvers who love solving problems. We want people who can empathize with the consumer, understand business problems, and design and deliver intelligent products. We are looking for a System Administrator to manage and optimize our on-premise and cloud infrastructure, ensuring reliability, security, and scalability for high-throughput AI workloads. As a System Administrator, you will be responsible for managing servers, storage, network, and compute infrastructure powering our AI development and deployment pipelines. You will ensure seamless handling of large medical imaging datasets (DICOM/NIfTI) and maintain high availability for research and production systems.
Key Responsibilities
Infrastructure & Systems Management: Manage Linux-based servers, GPU clusters, and network storage for AI training and inference workloads. Configure and maintain message queue systems (RabbitMQ, ActiveMQ, Kafka) for large-scale, asynchronous AI pipeline execution. Set up and maintain service beacons and health checks to proactively monitor the state of critical services (XNAT pipelines, FastAPI endpoints, AI model inference servers). Maintain PACS integration, DICOM routing, and high-throughput data transfer for medical imaging workflows. Manage hybrid infrastructure (on-prem + cloud), including auto-scaling compute for large training tasks.
Service Monitoring & Reliability: Implement automated service checking for all production and development services using Prometheus, Grafana, or similar tools. Configure beacon agents to trigger alerts and self-healing scripts for service restarts when anomalies are detected. Set up log aggregation and anomaly detection to catch failures in AI processing pipelines early. Ensure 99.9% uptime for mission-critical systems and clinical services.
Security & Compliance: Enforce secure access control (IAM, VPN, RBAC, MFA) and maintain audit trails for all system activities. Ensure compliance with HIPAA, GDPR, and ISO 27001 for medical data storage and transfer. Encrypt medical imaging data (DICOM/NIfTI) at rest and in transit.
Automation & DevOps: Develop automation scripts for service restarts, scaling GPU resources, and pipeline deployments. Work with DevOps teams to integrate infrastructure monitoring with CI/CD pipelines. Optimize AI pipeline orchestration with MQ-based task handling for scalable performance.
Backup, Disaster Recovery & High Availability: Manage data backup policies for medical datasets, AI model artifacts, and PostgreSQL/MongoDB databases. Implement failover systems for MQ brokers and imaging data services to ensure uninterrupted AI processing.
Collaboration & Support: Work closely with AI engineers and data scientists to optimize compute resource utilization. Support teams in troubleshooting infrastructure and service issues. Maintain license servers and specialized imaging software environments.
Skills and Qualifications
Required: 3+ years of Linux systems administration experience with a focus on service monitoring and high-availability environments. Experience with message queues (RabbitMQ, ActiveMQ, Kafka) for distributed AI workloads. Familiarity with beacons, service health monitoring, and self-healing automation. Hands-on experience managing GPU-enabled servers and clusters (NVIDIA CUDA, drivers, dockerized AI workflows). Hands-on experience with cloud platforms (AWS EC2, S3, EKS, GCP, Azure, or equivalents). Networking fundamentals (firewalls, VPNs, load balancers). Experience managing large datasets (100 GB–TB scale), preferably in healthcare or scientific research. Knowledge of cybersecurity best practices and compliance frameworks (HIPAA, ISO 27001). Scripting skills (Bash, Python, PowerShell) for automation and troubleshooting. Exposure to database administration (PostgreSQL, MongoDB) for databases used in AI pipelines.
Preferred: Experience with PACS, XNAT, or medical imaging servers. Familiarity with Prometheus, Grafana, the ELK stack, SaltStack beacons, or similar monitoring tools. Knowledge of Kubernetes or Docker Swarm for container orchestration.
Education: BE/B.Tech; MS/M.Tech will be a bonus. Experience: 3–5 years. Job Type: Full-time. Work Location: In person
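As a hedged illustration of the "service beacon plus self-healing" duties above, a minimal sketch using the requests library and systemctl; the endpoints and unit names are hypothetical and would normally live in configuration:

```python
import subprocess
import requests

# Hypothetical endpoint-to-unit mapping for services named in the posting.
SERVICES = {
    "http://localhost:8042/health": "xnat-pipeline.service",
    "http://localhost:8000/health": "inference-api.service",
}

def check_and_heal(url: str, unit: str) -> None:
    try:
        ok = requests.get(url, timeout=5).status_code == 200
    except requests.RequestException:
        ok = False
    if not ok:
        # Self-healing step: restart the unit; a real beacon would also
        # raise an alert and write an audit-trail entry.
        print(f"{unit} unhealthy - restarting")
        subprocess.run(["systemctl", "restart", unit], check=False)

if __name__ == "__main__":
    for url, unit in SERVICES.items():
        check_and_heal(url, unit)
```

Run from cron or a systemd timer, this is the simplest form of the beacon pattern; SaltStack beacons or Prometheus alert-triggered webhooks generalize it.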
Posted 4 days ago
7.0 years
0 Lacs
Delhi
Remote
Join Tether and Shape the Future of Digital Finance At Tether, we’re not just building products, we’re pioneering a global financial revolution. Our cutting-edge solutions empower businesses—from exchanges and wallets to payment processors and ATMs—to seamlessly integrate reserve-backed tokens across blockchains. By harnessing the power of blockchain technology, Tether enables you to store, send, and receive digital tokens instantly, securely, and globally, all at a fraction of the cost. Transparency is the bedrock of everything we do, ensuring trust in every transaction. Innovate with Tether Tether Finance: Our innovative product suite features the world’s most trusted stablecoin, USDT , relied upon by hundreds of millions worldwide, alongside pioneering digital asset tokenization services. But that’s just the beginning: Tether Power: Driving sustainable growth, our energy solutions optimize excess power for Bitcoin mining using eco-friendly practices in state-of-the-art, geo-diverse facilities. Tether Data: Fueling breakthroughs in AI and peer-to-peer technology, we reduce infrastructure costs and enhance global communications with cutting-edge solutions like KEET , our flagship app that redefines secure and private data sharing. Tether Education : Democratizing access to top-tier digital learning, we empower individuals to thrive in the digital and gig economies, driving global growth and opportunity. Tether Evolution : At the intersection of technology and human potential, we are pushing the boundaries of what is possible, crafting a future where innovation and human capabilities merge in powerful, unprecedented ways. Why Join Us? Our team is a global talent powerhouse, working remotely from every corner of the world. If you’re passionate about making a mark in the fintech space, this is your opportunity to collaborate with some of the brightest minds, pushing boundaries and setting new standards. We’ve grown fast, stayed lean, and secured our place as a leader in the industry. If you have excellent English communication skills and are ready to contribute to the most innovative platform on the planet, Tether is the place for you. Are you ready to be part of the future? About the job We are seeking a highly skilled Lead DevOps Engineer to: Lead and guide a team of DevOps specialists Architect, implement, and help maintain CI/CD pipelines using GitHub Deploy and manage critical infrastructure The ideal candidate will need extensive experience with Docker, JavaScript package publishing to NPM, automating mobile app build processes, etc. to name a few. A deep expertise in Linux system administration and networking will ensure scalable, secure, and highly available deployments. Responsibilities Mentor and lead a team of DevOps specialists, promoting best practices, documentation, and knowledge sharing. Collaborate cross‑functionally (Dev, QA, Management etc.) to enhance deployment quality, observability, and stability. Implement monitoring, logging, alerting into systems to proactively detect issues and maintain system health. Design the architecture, implementation, and management of end-to-end CI/CD pipelines in GitHub Actions, ensuring rapid and reliable software delivery. Design and enforce test-driven deployment systems, integrating automated testing at every stage to maintain code quality and accelerate feedback loops. Oversee server system administration, including configuration, monitoring, patching, and troubleshooting. 
Keep up to date on industry trends and best practices, and evaluate and integrate new DevOps tools and processes.
Requirements
7+ years in DevOps/Infrastructure roles, with at least 2–3 in a leadership/technical lead capacity. Expertise in containerization technologies—Docker image creation, registry management, and basic orchestration patterns. Hands-on experience managing JavaScript packages and publishing workflows to NPM, with a solid understanding of semantic versioning. Understanding of C++ build systems, specifically CMake, and experience optimizing native code pipelines using GitHub Actions. Strong Linux system administration and networking expertise, including shell scripting, package management, system performance troubleshooting, firewalls, and VPNs to secure and optimize deployments. Excellent leadership, problem-solving, and communication skills. Bachelor’s or Master’s degree in Computer Science, Engineering, or a related discipline.
Important information for candidates
Recruitment scams have become increasingly common. To protect yourself, please keep the following in mind when applying for roles: Apply only through our official channels. We do not use third-party platforms or agencies for recruitment unless clearly stated. All open roles are listed on our official careers page: https://tether.recruitee.com/ Verify the recruiter’s identity. All our recruiters have verified LinkedIn profiles. If you’re unsure, you can confirm their identity by checking their profile or contacting us through our website. Be cautious of unusual communication methods. We do not conduct interviews over WhatsApp, Telegram, or SMS. All communication is done through official company emails and platforms. Double-check email addresses. All communication from us will come from emails ending in @tether.to or @tether.io. We will never request payment or financial details. If someone asks for personal financial information or payment at any point during the hiring process, it is a scam. Please report it immediately. When in doubt, feel free to reach out through our official website.
Posted 4 days ago
5.0 years
0 Lacs
Delhi
Remote
Job Description: Sr. Python Engineer – Agent Development
Positions: 5 (Contract) | Experience: 5–8 years
Role Overview: This role is for an experienced and high-caliber engineer with strong proficiency in Python and hands-on expertise integrating Large Language Models (LLMs) into production systems. This individual contributor (IC) role focuses on designing robust backend services and orchestrating LLM workflows. Please note: this is a software engineering role, not one in data annotation, data science, or analytics.
What the day-to-day looks like: Designing and implementing production-grade APIs and backend services in Python. Building and maintaining LLM-powered systems, including agent frameworks and orchestration using tools like LangChain and LangGraph. Efficiently integrating LLM calls into backend workflows. Collaborating across functions to develop reliable, maintainable systems.
Requirements: 5+ years of experience as a backend or full-stack engineer with a strong backend focus. Advanced proficiency in Python. Practical experience integrating LLMs (e.g., RAG pipelines, agent frameworks, LangChain, LangGraph, or similar). Background in machine learning engineering is a strong plus. Solid understanding of service architecture and production deployment workflows.
Perks of Freelancing With Us: Work in a fully remote environment. Opportunity to work on cutting-edge AI projects with leading LLM companies. Potential for contract extension based on performance and project needs.
Offer Details: Commitment required: Full time, 8 hours per day with a 4-hour overlap with PST. Employment type: Contractor position (no medical/paid leave). Duration of contract: 3 months; [expected start date is next week]. Location: India, Pakistan, Nigeria, Kenya, Egypt, Ghana, Bangladesh, Turkey, Mexico. Evaluation process (approximately 75 mins): Two rounds of interviews (60-minute technical + 30-minute technical & cultural discussion).
Job Type: Full-time. Pay: ₹200,000.00 - ₹2,500,000.00 per year. Schedule: Day shift. Work Location: In person
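A minimal sketch of the LLM tool-calling loop at the heart of agent development, using the OpenAI Python SDK; the tool schema, model name, and user message are illustrative assumptions, and frameworks like LangChain/LangGraph wrap this same primitive:

```python
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set

# One hypothetical tool the agent may call.
tools = [{
    "type": "function",
    "function": {
        "name": "get_order_status",
        "description": "Look up the status of an order by id",
        "parameters": {
            "type": "object",
            "properties": {"order_id": {"type": "string"}},
            "required": ["order_id"],
        },
    },
}]

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Where is order 8812?"}],
    tools=tools,
)

# The model decides to call the tool; inspect its requested arguments.
call = resp.choices[0].message.tool_calls[0]
print(call.function.name, json.loads(call.function.arguments))
# A full agent loop would execute the tool, append the result as a "tool"
# message, and call the model again until it stops requesting tools.
```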
Posted 4 days ago
8.0 years
0 Lacs
Delhi
Remote
Senior Solution GTM Lead
New Delhi, Delhi, India
Date posted: Jul 29, 2025 | Job number: 1851602 | Work site: Up to 50% work from home | Travel: 0–25% | Role type: Individual Contributor | Profession: Marketing | Discipline: Field Product Marketing | Employment type: Full-Time
Overview
The Sales Enablement & Operations (SE&O) team plays an essential role in translating Microsoft’s Commercial Strategy to a local execution plan and driving operational excellence to achieve the greatest results possible. Our team drives cross-Region, cross-Area and cross-Subsidiary insight and execution excellence, bringing strategy and priorities to life by accelerating the pace of transformation and enabling Microsoft to deliver business impact at scale. As the Activation GTM Manager for Cloud & AI for India, you will accelerate revenue growth, boost field agility, and deliver results with our field sellers by deepening your partnership with key stakeholders across India, including Sales Excellence, sales, marketing, consulting, customer success, and partner functions, supporting One Microsoft. You will focus on driving alignment across processes and tools, leading with a cross-solution approach to optimize pipelines, ensuring effective communication and flawless execution, and leveraging insights to drive data-driven decision-making. This role is critical to driving customer adoption at scale, driven by deep solution play domain, product truth, and partnerships with sales, marketing, operations, and sales excellence. We’re looking for a highly driven, motivated marketing or sales individual to join our Go-to-Market (GTM) team. This role requires someone who acts as a thought leader, tracks success criteria and performance metrics, works with emergent technology, creates alignment and action across teams, removes roadblocks, and simplifies complex concepts. This individual truly lives for big challenges. This opportunity will allow you to accelerate your career growth, develop deep business acumen, and hone your leadership skills. This role is flexible in that you can work up to 50% from home. Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.
Qualifications
Required (RQs): 8+ years of marketing strategy, business planning, sales enablement, business development, technical pre-sales, or related work experience, OR equivalent experience.
Preferred Qualifications (PQs): 8+ years of experience managing and expanding a product/solution portfolio and driving demand generation and pipeline acceleration within a complex (e.g., multinational or matrixed) organization, OR equivalent experience.
Responsibilities
Growth Strategy & Business Performance: In partnership with the India Activation team and the Activation GTM Leader, you will execute Cloud & AI Platforms (CAIP) solution plays to enhance ACR performance in partnership with Solution Play GTM teams. Responsible for new pipeline creation, addressing pipeline gaps by OU/segment, and implementing global strategies relevantly for your Area/Subsidiaries, including FY26 CAIP program performance.
Sales Activation : In partnership with the India Activation team and the Activation GTM Leader, lead end-to-end solution play field activation to win customers across the CAIP solution plays including X-CSA plays such as Agentic. Resolve blockers and influence strategic improvements through field feedback loops. Partner closely with the Solution Play GTM teams and Area GTM ICs to deliver field readiness and skilling as well as capture and share insights on customer wins/losses, compete trends, and partner feedback. Demand Generation : In partnership with the India Activation team and the Activation GTM Leader, align and orchestrate the execution of CAIP marketing plan with Integrated Marketing Managers (IMM), Area GTM ICs and Partner roles. Push for signal conversion to create pipe and ensure successful customer targeting events. Provide content input and shape agendas to amplify CAIP priorities. Product Leadership : Act as a strong Azure and CAIP solutions advocate by demonstrating thought leadership externally with customers and partners, and internally with Corp. Champion local needs and insights to shape global product strategy, roadmap, and readiness through structured feedback loops. Understand industry trends, challenges, and regulatory requirements. MACC + Unified Stewardship : Steward MACC by advancing acquisition strategy, expanding the scope of workloads in the MACCs, accelerating ACR via Unified, expanding Unified accounts and driving data-driven optimization. Partner strongly with India Activation Director and the Activation GTM Leader to drive improvement in MACC penetration significantly. Operational Excellence : Partner closely with Regional and Area Sales Excellence and Sales Operations to drive sales discipline, pipeline creation and acceleration, and MCEM orchestration. Promote consistent use of programs and investments as well as standardized services and tools in the field sales teams. Partner with Area Activation Director to drive a connected ROB that tracks end-to-end business health, aligning sales and marketing insights with Corp through VSU, IAP, and other key cadences. You will exemplify Microsoft Values, Culture, Leadership Principles and create clarity by creating a shared understanding. Benefits/perks listed below may vary depending on the nature of your employment with Microsoft and the country where you work. Industry leading healthcare Educational resources Discounts on products and services Savings and investments Maternity and paternity leave Generous time away Giving programs Opportunities to network and connect Microsoft is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
Posted 4 days ago
0.0 - 5.0 years
5 - 19 Lacs
HSR Layout, Bengaluru, Karnataka
On-site
Data Engineering / Tech Lead – Experience: 4+ years About Company InspironLabs is a GenAI-driven software services company focused on building AI-powered, scalable digital solutions. Our skilled team delivers intelligent applications tailored to specific business challenges, using AI and Generative AI (GenAI) to accelerate innovation. Key strengths include: AI & GenAI Focus – Harnessing AI and Generative AI to deliver smarter solutions. Scalable Tech Stack – Building future-ready systems for performance and resilience. Proven Enterprise Experience – Deploying solutions across industries and geographies. To know more, visit: www.inspironlabs.com Key Responsibilities • Design, implement, and maintain robust data pipelines. • Collaborate with data scientists and analysts for integrated solutions. • Mentor junior engineers and manage project timelines. Required Skills • Experience with Spark, Hadoop, Kafka. • Expertise in SQL, Python, cloud data platforms (AWS/GCP/Azure). • Hands-on with orchestration tools like Airflow, DBT. Qualifications Experience: 4 to 5 years in data engineering roles. Bachelor’s in Computer Science, Engineering, or related field. Place of Work In Office – Bangalore Job Type Full Time Job Type: Full-time Pay: ₹560,716.53 - ₹1,944,670.55 per year Benefits: Flexible schedule Health insurance Paid sick time Paid time off Provident Fund Ability to commute/relocate: HSR Layout, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Please mention your notice period? What is your current CTC? Work Location: In person
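For context on the orchestration tooling this posting lists, a minimal Airflow 2.x TaskFlow sketch of a daily extract-transform-load pipeline; the DAG and task names are hypothetical and the task bodies are placeholders for real source and warehouse I/O:

```python
from datetime import datetime
from airflow.decorators import dag, task

# Minimal TaskFlow DAG (Airflow 2.4+ uses the `schedule` parameter).
@dag(schedule="@daily", start_date=datetime(2025, 1, 1), catchup=False)
def daily_ingest():

    @task
    def extract() -> list[dict]:
        return [{"id": 1, "value": 42}]  # stand-in for a real source pull

    @task
    def transform(rows: list[dict]) -> list[dict]:
        return [{**r, "value_doubled": r["value"] * 2} for r in rows]

    @task
    def load(rows: list[dict]) -> None:
        print(f"loading {len(rows)} rows")  # stand-in for a warehouse write

    load(transform(extract()))

daily_ingest()
```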
Posted 4 days ago
2.0 years
2 - 8 Lacs
Mohali
On-site
We are seeking a DevOps Engineer with strong experience in CI/CD pipelines, cloud infrastructure, automation, and networking . The ideal candidate will ensure seamless deployment, high system reliability, and secure networking practices. Key Responsibilities: Design, build, and maintain CI/CD pipelines (e.g., Jenkins, GitLab CI) Automate infrastructure provisioning using tools like Terraform, Ansible, etc. Manage and optimize cloud infrastructure (AWS, Azure, GCP) Implement and manage containerized applications using Docker and Kubernetes Monitor system performance, availability, and security Configure and manage internal networks, VPNs, firewalls, and load balancers Troubleshoot networking issues and ensure minimal downtime Maintain network documentation and ensure adherence to security standards Collaborate with developers and QA to support smooth deployments and scalability Implement system monitoring, alerting, and logging (e.g., Prometheus, Grafana, ELK stack) Required Skills and Qualifications: 2–5 years of experience as a DevOps Engineer or similar role Hands-on experience with cloud platforms and infrastructure-as-code tools Strong scripting skills (Bash, Shell, Python, etc.) Solid understanding of computer networking (TCP/IP, DNS, VPN, firewalls) Experience with containerization and orchestration (Docker, Kubernetes) Familiarity with Linux/Unix-based systems Good understanding of network protocols and troubleshooting tools Preferred Qualifications: Bachelor’s degree in Computer Science, Information Technology, or related field Certifications in AWS/Azure/GCP or networking (CCNA, etc.) are a plus Job Type: Full-time Pay: ₹17,776.87 - ₹69,135.46 per month Work Location: In person Speak with the employer +91 9872235857
Posted 4 days ago
2.0 years
3 - 4 Lacs
Mohali
On-site
We are looking for a highly motivated GenAI Engineer with strong hands-on experience working with Large Language Models (LLMs), Retrieval-Augmented Generation (RAG) workflows, and production-ready AI applications. You’ll help design, build, and extend digital products and creative applications that leverage the latest in LLM technologies. You will play a lead role in product development and in offering AI services to clients, client onboarding, and delivery of cutting-edge AI solutions, working with a range of modern AI tools, cloud services, and frameworks.
Experience: 2+ years. Location: Mohali, Punjab. Work Mode: On-site. Timings: 10:00 AM – 7:00 PM (Day Shift). Interview Mode: Face-to-Face (On-Site). Contact: +91-9872993778 (Mon–Fri, 11 AM – 6 PM)
Key Responsibilities:
Design and implement generative AI solutions using large language models (LLMs), natural language processing (NLP), and computer vision. Develop, enhance, and scale digital products leveraging LLMs at their core. Lead product development and operations teams to implement GenAI-based solutions. Design and manage client onboarding, rollout, and adoption strategies. Deliver and maintain enhancements based on client-specific needs. Build and maintain RAG pipelines and LLM-based workflows for enterprise applications. Manage LLMOps processes across the entire AI lifecycle (prompt design, fine-tuning, evaluation). Work with cloud-based GenAI platforms (primarily Azure OpenAI, but also Google, AWS, etc.). Implement API integrations, orchestration, and workflow automation. Evaluate, fine-tune, and monitor the performance of LLM outputs using observability tools.
Required Qualifications:
Bachelor’s degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field — or equivalent hands-on experience. Minimum 2 years of hands-on experience in software development or applied machine learning. Programming (preferred): Python, JavaScript. Voice AI: ElevenLabs, Twilio, and an understanding of ASR (Automatic Speech Recognition) and NLU (Natural Language Understanding). Automation/Integration: n8n (or Make.com, Zapier, Activepieces), API integration (RESTful APIs, webhooks), JSON. Proficiency in Azure AI services, including Azure OpenAI (GPT-4, Codex, etc.) and Azure Machine Learning for model development and deployment. Proven experience with LLM APIs (OpenAI, Azure OpenAI, Gemini, Claude, etc.). Solid hands-on experience in building and deploying RAG pipelines. Proficiency in Python and strong knowledge of Python ecosystems and libraries. Familiarity with core GenAI frameworks: LangChain, LangGraph, LlamaIndex, etc. Experience with vector databases: FAISS, Milvus, Azure AI Search, etc. Practical knowledge of embeddings, model registries (e.g., Hugging Face), and LLM APIs. Experience in prompt engineering, tool/function calling, and structured outputs (Pydantic/JSON Schema). Exposure to LLM observability tools: LangSmith, LangFuse, etc. Strong Git, API, and cloud platform (AWS, GCP, Azure) experience.
Job Type: Full-time. Pay: ₹25,000.00 - ₹40,000.00 per month. Schedule: Day shift, Monday to Friday. Experience: GenAI Engineer: 2 years (Preferred). Work Location: In person
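As a small sketch of the "structured outputs (Pydantic/JSON Schema)" requirement above, assuming the OpenAI SDK's JSON mode and Pydantic v2; the schema, model name, and prompt are hypothetical:

```python
from openai import OpenAI
from pydantic import BaseModel

client = OpenAI()  # assumes OPENAI_API_KEY is set

class Ticket(BaseModel):
    # Hypothetical schema for a support-ticket triage output.
    category: str
    urgency: int

resp = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{
        "role": "user",
        "content": 'Triage: "Payment page is down for all users". '
                   'Reply as JSON with keys "category" and "urgency" (1-5).',
    }],
    response_format={"type": "json_object"},  # constrain output to valid JSON
)

# Pydantic validates the model's JSON against the schema, failing loudly
# on malformed or out-of-schema output instead of propagating bad data.
ticket = Ticket.model_validate_json(resp.choices[0].message.content)
print(ticket.category, ticket.urgency)
```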
Posted 4 days ago