
691 API Gateway Jobs - Page 22

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 13.0 years

22 - 32 Lacs

Bangalore Rural, Bengaluru

Hybrid

We're hiring a .NET Core Developer for our Bangalore location.

Job Title: .NET Core Developer
Experience: 8+ years
Location: Bangalore
Employment Type: Full-time

Overview: We are looking for a Senior Software Engineer to work closely with the development team to design and implement integrations with various client systems and applications. The role involves working across Azure cloud services and building robust, scalable APIs and integration layers.

Key Responsibilities:
- Design, develop, and maintain integrations with client applications and external systems.
- Build and manage REST APIs and services within a microservices architecture.
- Work across Azure cloud services, including API Gateway, Azure Service Bus, and Azure SQL.
- Implement messaging solutions using RabbitMQ and Kafka.
- Collaborate with cross-functional teams to deliver reliable, scalable integration solutions.

Required Skills & Experience:
- Strong hands-on experience in .NET Core and C#.
- Proficiency with REST APIs and API Gateway.
- Experience with Azure SQL Server and Azure cloud services.
- Knowledge of Azure Service Bus.
- Experience with messaging systems such as RabbitMQ and Kafka.
- Familiarity with microservices architecture and integration best practices.
- Experience with version control tools such as GitHub.

Interested candidates can also share their CV at akanksha.s@esolglobal.com

Posted 2 months ago

Apply

6.0 - 10.0 years

10 - 20 Lacs

Mumbai, Pune, Bengaluru

Hybrid

Zycus is looking for a Senior Consultant - Integration with strong experience in the latest API and integration technologies such as MuleSoft, Dell Boomi, Jitterbit, SAP PI/PO, IIB, SnapLogic, Oracle Fusion Middleware, or similar. Candidates should have prior experience working with e-procurement "procure to pay" or "source to pay" (P2P / S2P) products such as SAP ARIBA, COUPA, IVALUA, BASWARE, JAGGAER, or any other similar strategic sourcing / e-sourcing / procure2pay / source2pay product suite. The Senior Consultant - Integration will be responsible for managing multiple, challenging integration projects for Zycus, handling multi-geographical Fortune 1000 global customers.

Key Responsibilities:
- Deliver end-to-end integration projects independently using the Zycus integration solution, integrating Zycus products with customer ERPs such as SAP, Oracle, NetSuite, PeopleSoft, etc.
- Conduct comprehensive business requirement gathering from stakeholders (customers).
- Analyze and translate business requirements into technical specifications.
- Design efficient and robust integration solutions applying industry best practices.
- Ensure alignment of integration solutions with business objectives and strategies.
- GenAI use cases: identify and implement Generative AI (GenAI) use cases to streamline and reduce repetitive tasks in the integration process; develop automation solutions to enhance the efficiency of end-to-end implementation.
- Ensure timely delivery of integration projects within budget and scope.
- Conduct regular reviews and assessments of integration processes to ensure continuous improvement.

Skills:
- 6-10 years of experience in complex enterprise integration, including API and SFTP integration.
- Demonstrated experience developing integrations using API and SFTP technologies; strong understanding of best practices and industry standards; ability to design, develop, and troubleshoot API- and SFTP-based integration solutions.
- Proficiency in both event-based and schedule-based integration methodologies; ability to design and implement effective integration solutions based on specific project requirements; experience managing and optimizing integration workflows.
- In-depth understanding of data transformation processes, including routing, splitting, and merging; expertise in designing and implementing seamless integration workflows to ensure data consistency and accuracy.
- Capability to write JavaScript code and create JOLT specifications for efficient data transformation and manipulation; experience using scripting languages to automate integration processes and enhance functionality.
- Deep knowledge of JSON and XML, with the ability to work seamlessly with these data formats and transform data in various formats to facilitate smooth integration processes.
- P2P products integration: familiarity with Procure-to-Pay (P2P) product integration profiles is highly desirable; experience integrating with P2P systems to streamline procurement and payment processes.
- Single Sign-On (SSO): knowledge of SSO technologies to enhance security and user experience in integrated environments; experience implementing SSO solutions to facilitate seamless authentication and access management.
- Supplier enablement & vendor integration: preferred experience integrating with prominent systems such as Coupa, SAP, and Oracle; ability to create robust and reliable connections with vendor systems through cXML and Punchout configuration.

Five Reasons Why You Should Join Zycus:
- Cloud product company: We are a cloud SaaS company and our products are built using the latest technologies, like ML and AI. Our UI is in AngularJS and we are developing our mobile apps using React.
- A market leader: Zycus is recognized by Gartner (the world's leading market research analyst) as a Leader in Procurement Software Suites.
- Move between roles: We believe that change leads to growth, and therefore we allow our employees to shift careers and move to different roles and functions within the organization.
- Get global exposure: You get to work with and deal with our global customers.
- Create an impact: Zycus gives you the environment to create an impact on the product and transform your ideas into reality. Even our junior engineers get the opportunity to work on different product features.

About Us: Zycus is a pioneer in Cognitive Procurement software and has been a trusted partner of choice for large global enterprises for two decades. Zycus has been consistently recognized by Gartner, Forrester, and other analysts for its Source-to-Pay integrated suite. Zycus powers its S2P software with the revolutionary Merlin AI Suite. Merlin AI takes over tactical tasks and empowers procurement and AP officers to focus on strategic projects; it offers data-driven actionable insights for quicker and smarter decisions, and its conversational AI offers a B2C-type user experience to end users. Zycus helps enterprises drive real savings, reduce risks, and boost compliance, and its seamless, intuitive, and easy-to-use interface ensures high adoption and value across the organization. Start your #CognitiveProcurement journey with us, as you are #MeantforMore.
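The JOLT specifications mentioned above describe JSON-to-JSON transformations declaratively: a spec maps source paths to target keys. As a rough illustration only (not Zycus tooling or a real JOLT engine; the helper names and sample data are made up), a minimal shift-style transform can be sketched in Python:

```python
# Minimal sketch of a JOLT-like "shift" transform. Hypothetical helpers,
# not a real JOLT implementation: the spec maps target keys to dotted
# source paths in the input document.

def get_path(doc, path):
    """Walk a dotted path like 'order.id' through nested dicts."""
    for key in path.split("."):
        doc = doc[key]
    return doc

def shift(doc, spec):
    """Build a flat output dict by pulling each spec path out of doc."""
    return {target: get_path(doc, source) for target, source in spec.items()}

source = {"order": {"id": "PO-1001", "lines": 3}, "vendor": {"name": "Acme"}}
spec = {"PurchaseOrderNumber": "order.id",
        "LineCount": "order.lines",
        "SupplierName": "vendor.name"}

print(shift(source, spec))
# {'PurchaseOrderNumber': 'PO-1001', 'LineCount': 3, 'SupplierName': 'Acme'}
```

Real JOLT specs also handle wildcards, arrays, and defaulting; this sketch only covers the simple renaming/restructuring case.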

Posted 2 months ago

Apply

5.0 - 6.0 years

8 - 14 Lacs

Hyderabad

Work from Office

This role demands deep technical acumen, hands-on development expertise, and strong leadership skills to guide integration projects through their full lifecycle.

Key Responsibilities:
- Define and own the integration architecture, including application, data, and process integration strategies.
- Design end-to-end integration solutions using modern platforms such as MuleSoft, Dell Boomi, Azure Logic Apps, Apache Kafka, SAP Integration Suite, etc.
- Develop API-first strategies and reusable services to facilitate system interoperability.
- Translate business requirements into scalable, secure, and high-performance integration designs.
- Implement and manage API gateways, developer portals, and lifecycle management tools.
- Design and deploy REST, SOAP, and GraphQL services, along with associated documentation using OpenAPI/Swagger standards.
- Govern API versioning, throttling, and security policies across internal and external APIs.
- Architect event-driven systems using Apache Kafka, ActiveMQ, RabbitMQ, or similar message brokers.
- Integrate EDI/B2B protocols and third-party systems using middleware solutions.
- Implement data transformation and mapping using tools like XSLT, DataWeave, JSONata, or custom scripts.
- Build integrations across cloud platforms (Azure, AWS, GCP) and SaaS applications.
- Work with containerized microservices using Docker, Kubernetes, and service mesh (e.g., Istio).
- Set up CI/CD pipelines for integration artifacts using Jenkins, GitLab CI, or Azure DevOps.
- Define and enforce data governance, security, and compliance across all integrations.
- Implement monitoring, alerting, and logging using tools like Splunk, ELK, or Azure Monitor.
- Conduct performance tuning, root-cause analysis, and issue resolution in complex integration landscapes.

Required Skills & Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- 5+ years of hands-on experience in enterprise integration architecture and middleware platforms.
- Proven experience with one or more major integration platforms: MuleSoft Anypoint Platform, Dell Boomi, Azure Integration Services (Logic Apps, Functions, API Management), Apache Kafka, SAP Integration Suite, Informatica Intelligent Cloud Services (IICS).
- Proficiency in programming/scripting languages such as Java, Python, JavaScript, or Groovy.
- Familiarity with DevOps tools, Git repositories, CI/CD, and agile development methodologies.
- Excellent problem-solving skills with strong attention to detail.
- Strong communication skills with the ability to translate complex technical concepts for non-technical stakeholders.
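The API throttling the responsibilities above refer to is commonly implemented as a token bucket: requests spend tokens, and tokens refill at a fixed rate. A minimal sketch in Python (illustrative only; real gateways such as Azure APIM or MuleSoft configure this declaratively rather than in application code):

```python
import time

class TokenBucket:
    """Allow roughly `rate` requests/second, with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate            # tokens added per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity      # start with a full bucket
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens for the elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False    # a gateway would respond with HTTP 429 here

bucket = TokenBucket(rate=1, capacity=2)
results = [bucket.allow() for _ in range(3)]
print(results)  # [True, True, False] — the burst passes, the third call is throttled
```

Per-client limiting (the "rate-limit-by-key" style) is the same idea with one bucket per API key or IP.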

Posted 2 months ago

Apply

5.0 - 10.0 years

2 - 5 Lacs

Bengaluru

Work from Office

Position: Rust Developer (AWS)
Experience: 5+ years
Location: Bangalore

Listed in order of importance, the five essential job functions and the estimated percentage of time spent on each (total must equal 100%):
- 15% - Work with key stakeholders to create a well-architected design across retail systems.
- 50% - Design, develop, and deploy backend services with a focus on high availability, low latency, and scalability using Rust. Build and maintain APIs for our client/store-facing applications. Work with serverless technologies within the AWS ecosystem to create efficient and cost-effective software solutions. Optimize applications for maximum speed and scalability. Participate in code reviews, documentation, whiteboard discussions, stand-ups, and pair-programming sessions.
- 15% - Ensure code quality, organization, and automation through comprehensive testing and code review practices. Troubleshoot, debug, and upgrade existing systems.
- 20% - Stay up to date with current best practices in Rust programming, serverless architectures, and AWS services. Provide technical guidance and mentorship to other team members, drive best practices, and help drive continuous improvement in the team.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent experience.
- Proven work experience as a Rust developer (5+ years preferred).
- Strong understanding of AWS services and serverless architecture.
- Prior experience with AWS Lambda, API Gateway, DynamoDB, RDS, EventBridge, and other AWS services.
- Experience with Terraform, GitLab, SwaggerHub.
- Experience designing and developing RESTful APIs.
- Familiarity with continuous integration and continuous deployment (CI/CD) workflows.
- Experience with software development best practices, including testing, documentation, and code reviews.
- Knowledge of databases (SQL and NoSQL) and data storage solutions.
- Excellent problem-solving and communication skills.
- Ability to work independently as well as collaboratively within a team.
- Passion for learning new technologies and practices.
- Experience working with Agile Scrum teams and Jira.

Additional Preferences:
- Prior experience in development of e-commerce and retail store projects.
- Experience with RudderStack, Shipyard.
- Experience with other cloud platforms such as Microsoft Azure.
- Familiarity with front-end technologies (JavaScript, HTML5, CSS3, etc.) and frameworks (React, Angular, etc.).
- Experience with Docker and Kubernetes.
- Contributions to open-source projects.
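Serverless services behind API Gateway typically follow the Lambda proxy integration contract: the handler receives a JSON event describing the HTTP request and returns a map with `statusCode`, `headers`, and `body`. A hedged sketch of that contract (shown in Python for brevity; in Rust the `lambda_http` crate plays the equivalent role, and the query parameter here is invented for illustration):

```python
import json

def handler(event, context):
    """Minimal AWS Lambda proxy-integration handler sketch.

    API Gateway delivers the HTTP request as `event`; the returned dict's
    statusCode/headers/body become the HTTP response. Illustrative only.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello {name}"}),
    }

# Local invocation with a synthetic event (no AWS account needed):
resp = handler({"queryStringParameters": {"name": "store-42"}}, None)
print(resp["statusCode"], resp["body"])
```

Testing handlers locally against synthetic events like this is a common pattern before wiring them to a real API Gateway stage.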

Posted 2 months ago

Apply

2.0 - 5.0 years

8 - 15 Lacs

Pune

Work from Office

We're Hiring: Data Engineer | 2-5 Years Experience | AWS + Real-time Focus

Join our fast-moving team as a Data Engineer, where you'll build scalable, real-time data pipelines, own cloud infrastructure, and collaborate across teams to drive data-first decisions. If you're strong in Python, experienced with streaming platforms like Kafka/Kinesis, and have shipped cloud-native data pipelines (preferably on AWS), we want to hear from you.

Must-Haves:
- 2-5 years of experience in Data Engineering
- Python (Pandas, PySpark, async), SQL, ETL/ELT
- Streaming experience (Kafka/Kinesis)
- AWS cloud stack (Glue, Lambda, S3, Athena)
- Experience with APIs, data warehousing, and data modelling

Bonus if you know: Docker, Kubernetes, Airflow/dbt, or have a background in MLOps.
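Streaming ETL of the kind this role describes is often structured as composable extract/transform/load stages. A toy sketch in plain Python (no Kafka or AWS dependencies; the record shape and field names are made up for illustration):

```python
import json

def extract(lines):
    """Parse raw JSON lines into records (stand-in for a Kafka/Kinesis consumer)."""
    for line in lines:
        yield json.loads(line)

def transform(records):
    """Drop invalid events and normalize amounts to integer cents."""
    for r in records:
        if r.get("amount") is not None:
            yield {"user": r["user"], "amount_cents": int(r["amount"] * 100)}

def load(records):
    """Collect results into a sink (stand-in for S3/warehouse writes)."""
    return list(records)

raw = ['{"user": "a", "amount": 1.5}',
       '{"user": "b", "amount": null}',
       '{"user": "c", "amount": 2.0}']
sink = load(transform(extract(raw)))
print(sink)
# [{'user': 'a', 'amount_cents': 150}, {'user': 'c', 'amount_cents': 200}]
```

Because each stage is a generator, records flow through one at a time, which is the same shape a real streaming consumer loop takes.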

Posted 2 months ago

Apply

5.0 - 7.0 years

12 - 18 Lacs

Mumbai, Delhi / NCR, Bengaluru

Work from Office

We are hiring an experienced Integration Engineer with deep expertise in Dell Boomi and proven skills in Python, AWS, and automation frameworks. This role focuses on building and maintaining robust integration pipelines between enterprise systems like Salesforce, Snowflake, and EDI platforms, enabling seamless data flow and test automation.

Key Responsibilities:
- Design, develop, and maintain integration workflows using Dell Boomi.
- Build and enhance backend utilities and services using Python to support Boomi integrations.
- Integrate test frameworks with AWS services such as Lambda, API Gateway, CloudWatch, etc.
- Develop utilities for EDI document automation (e.g., generating and validating EDI 850 purchase orders).
- Perform data syncing and transformation between systems like Salesforce, Boomi, and Snowflake.
- Automate post-test data cleanup and validation within Salesforce using Boomi and Python.
- Implement infrastructure-as-code using Terraform to manage cloud resources.
- Create and execute API tests using Postman, and automate test cases using Cucumber and Gherkin.
- Integrate test results into Jira and X-Ray for traceability and reporting.

Must-Have Qualifications:
- 5 to 7 years of professional experience in software or integration development.
- Strong hands-on experience with Dell Boomi (Atoms, Integration Processes, Connectors, APIs).
- Solid programming experience with Python.
- Experience working with AWS services: Lambda, API Gateway, CloudWatch, S3, etc.
- Working knowledge of Terraform for cloud infrastructure automation.
- Familiarity with SQL and modern data platforms (e.g., Snowflake).
- Experience working with Salesforce and writing SOQL queries.
- Understanding of EDI document standards and related integration use cases.
- Test automation experience using Cucumber, Gherkin, Postman.
- Integration of QA/test reports with Jira, X-Ray, or similar platforms.
- Familiarity with CI/CD tools like GitHub Actions, Jenkins, or similar.

Tools & Technologies:
- Integration: Dell Boomi, REST/SOAP APIs
- Languages: Python, SQL
- Cloud: AWS (Lambda, API Gateway, CloudWatch, S3)
- Infrastructure: Terraform
- Data Platforms: Snowflake, Salesforce
- Automation & Testing: Cucumber, Gherkin, Postman
- DevOps: Git, GitHub Actions
- Tracking/Reporting: Jira, X-Ray

Location: Remote, Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
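An X12 EDI 850 purchase order is a string of `~`-terminated segments whose elements are `*`-separated, and the SE trailer declares the segment count. A generate-and-validate utility of the kind this role mentions might look like this heavily simplified sketch (segment set reduced for illustration; a real 850 needs ISA/GS envelopes and many more elements):

```python
def build_850(po_number, po_date, lines, control="0001"):
    """Build a simplified X12 850 transaction set (no ISA/GS envelope)."""
    segments = [
        f"ST*850*{control}",                  # transaction set header
        f"BEG*00*SA*{po_number}**{po_date}",  # beginning segment for the PO
    ]
    for i, (sku, qty, price) in enumerate(lines, 1):
        segments.append(f"PO1*{i}*{qty}*EA*{price}**VP*{sku}")  # line item
    segments.append(f"CTT*{len(lines)}")      # transaction totals
    # SE declares the segment count including ST and SE itself.
    segments.append(f"SE*{len(segments) + 1}*{control}")
    return "~".join(segments) + "~"

def validate_850(doc):
    """Check that the SE-declared count matches the actual segment count."""
    segments = [s for s in doc.split("~") if s]
    declared = int(segments[-1].split("*")[1])
    return declared == len(segments)

doc = build_850("PO-1001", "20240101", [("SKU-1", 10, "9.99")])
print(doc)
print(validate_850(doc))  # True
```

A post-test validation utility would typically run checks like this against documents produced by the Boomi process under test.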

Posted 2 months ago

Apply

5.0 - 9.0 years

7 - 12 Lacs

Bengaluru

Work from Office

Must-have skills:
- Hands-on experience in design and development of Windows-based applications using C#, .NET Core, WPF and WCF, Java, Python, and React.js/Angular.
- Understanding of Apigee/API Gateway concepts for managing and securing APIs.
- Experience with SignalR for real-time communication in .NET applications.
- Working knowledge of Kubernetes for container orchestration and managing microservices deployments.
- Outstanding analytical and problem-solving capabilities.
- Documentation and review of high-level and detailed designs, including component diagrams and sequence diagrams.
- Sound knowledge of data structures, OOAD, and design patterns.
- Hands-on experience in multithreading, synchronization, and IPC.
- Sound design thinking and an architectural-level approach to problems.
- Good analytical capability, sound reasoning, and logic, demonstrated in code optimization and the ability to debug multi-threaded applications.
- Good knowledge of programming tools, debugging tools and techniques, and SCM tools and practices.
- Good knowledge of unit testing frameworks such as NUnit.
- Sound knowledge of SDLC processes and demonstrated experience of complete end-to-end product design and roll-out.
- Experience in both development and maintenance projects with strict adherence to SLA norms.
- Active participation in review processes, providing meaningful feedback at all phases of the SDLC.
- Good knowledge of Agile methodology and processes.
- Strong communication and presentation skills.
- Customer-interfacing experience would be helpful.
- High motivation, a self-starting attitude, and the ability to take others along.

Nice to have:
- Triaging software problems and determining the root cause of issues reported from the production environment.
- Design and implementation of electronic fare payment, distribution, and processing systems, including system hardware and software as well as back-office servers.
- Design and development of AI-based applications.
- Cloud computing platforms such as Azure or Amazon Web Services (AWS).
- Hands-on experience in UART, MFC, socket programming, object-oriented C, and the Windows Driver Development Framework.
- Exposure to C++, STL, and Python would be an added advantage.
- WinCE (5.0/6.0/EC) would be a definite plus.
- Exposure to embedded and/or high-availability systems, and application development for the same, would be a definite plus.
- Interfacing between C++ and C# (native and managed code); knowledge of IDL/COM/CORBA would be highly desirable.
- Hands-on experience working with devices and device drivers, writing apps that interact with external devices, and serial port communication.

Posted 2 months ago

Apply

8.0 - 13.0 years

15 - 30 Lacs

Hyderabad

Remote

Job Position (Title): Senior Engineer - Azure Integration and API Gateway Management
Experience Required: 8+ years

Technical Skill Requirements: C#; Azure Integration Services (AIS), including Logic Apps, Logic App connectors, Azure Functions, Service Bus, and Event Grid; experience hosting APIs in Azure APIM; experience building CI/CD pipelines.

Role and Responsibilities:
- API Gateway Management: Design, develop, and manage RESTful API gateways using Azure API Management services.
- Integration Development: Implement integration solutions using .NET, Python, and PowerShell.
- Infrastructure as Code: Utilize Azure CLI, Bicep, and ARM templates for infrastructure deployment and management.
- Security Implementation: Implement security protocols including OAuth, OpenID Connect, HTTPS, and TLS to ensure secure API communications. Also implement role-based access control (RBAC) for developers and consumers.
- Performance Optimization: Enforce policies for rate limiting, throttling, and IP whitelisting to prevent abuse and manage traffic effectively.
- CI/CD Pipelines: Deploy APIs using CI/CD pipelines integrated with automated monitoring and alert systems.
- Testing and Quality Assurance: Implement automated testing protocols to ensure consistent performance and reliability.
- Collaboration: Work closely with cross-functional teams to define requirements, design solutions, and ensure successful implementation.
- Maintenance and Updates: Create a roadmap for continuous API optimization and updates that aligns with vendor changes and business needs.
- Troubleshooting: Identify and resolve issues related to API gateway management and integration.

Required Skills:
- 8 years of development experience in C#.
- Minimum 5 years of experience designing and developing Azure Integration Services (AIS), including Logic Apps, Logic App connectors, Azure Functions, Service Bus, and Event Grid.
- Experience hosting APIs in Azure APIM.
- Experience building CI/CD pipelines.
- Understanding of enterprise integration patterns and their application to integration scenarios.
- Testing using Postman.
- Understand the technical scope of the work from the BSAs or business SMEs.
- Build the technical specification documentation and get it approved.
- Build the code per the applicable Lam standards and per DevOps and Security guidelines.
- Get the code reviewed by the technical architects.
- Execute work with minimum supervision.
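In Azure APIM, rate limiting, throttling, and IP restrictions like those listed above are expressed as XML policies on an API's inbound pipeline. A hedged sketch (the limits and address range are placeholder values, not recommendations):

```xml
<policies>
    <inbound>
        <base />
        <!-- At most 100 calls per subscription per 60-second window -->
        <rate-limit calls="100" renewal-period="60" />
        <!-- Allow only a placeholder corporate address range -->
        <ip-filter action="allow">
            <address-range from="10.0.0.1" to="10.0.0.254" />
        </ip-filter>
    </inbound>
    <backend>
        <base />
    </backend>
    <outbound>
        <base />
    </outbound>
</policies>
```

Policies compose through `<base />`, so limits set at the product or global scope still apply beneath an API-level policy like this.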

Posted 2 months ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Hyderabad

Work from Office

What you will do: In this vital role, you will provide business process expertise to detail product requirements as epics and user stories, along with supporting artifacts like business process maps, use cases, and test plans for the software development teams. This role involves working closely with Veeva Site Collaboration and Veeva Vault Study Training business partners, Veeva engineers, data engineers, and AI/ML engineers to ensure that the technical requirements for upcoming development are thoroughly elaborated. This enables the delivery team to estimate, plan, and commit to delivery with high confidence, and to identify test cases and scenarios that ensure the quality and performance of IT systems. You will analyze business requirements and design information systems solutions. You will collaborate with multi-functional teams to understand business needs, identify system enhancements, and drive system implementation projects. Your solid experience in business analysis, system design, and project management will enable you to deliver innovative and effective technology products. You will collaborate with product owners and developers to maintain an efficient and consistent process, ensuring quality work from the team.

Roles & Responsibilities:
- Collaborate with System Architects and Product Owners to manage business analysis activities for Veeva Site Collaboration and Veeva Vault Study Training systems, ensuring alignment with engineering and product goals.
- Capture the voice of the customer to define business processes and product needs.
- Collaborate with Veeva Site Collaboration and Veeva Vault Study Training business partners, Amgen engineering teams, and Veeva consultants to prioritize release scopes and refine the product backlog.
- Support the implementation and integration of Veeva Site Collaboration and Veeva Vault Study Training systems with other Amgen systems.
- Ensure non-functional requirements are included and prioritized in the product and release backlogs.
- Facilitate the breakdown of epics into features and sprint-sized user stories, and participate in backlog reviews with the development team.
- Clearly express features in user stories/requirements so all team members and collaborators understand how they fit into the product backlog.
- Ensure acceptance criteria and the Definition of Done are well-defined.
- Stay focused on software development to ensure it meets requirements, providing proactive feedback to customers.
- Develop and implement effective product demonstrations for internal and external partners.
- Help develop and maintain a product roadmap that clearly outlines the planned features, enhancements, timelines, and achievements.
- Identify and manage risks associated with the systems, requirement validation, and user acceptance.
- Develop and maintain documentation of configurations, processes, changes, communication plans, and training plans for end users.
- Ensure operational excellence, cybersecurity, and compliance.
- Collaborate with geographically dispersed teams, including those in the US and other international locations.
- Foster a culture of collaboration, innovation, and continuous improvement.

Basic Qualifications:
- Master's degree with 4-6 years of Computer Science/Information Systems experience with Agile software development methodologies, OR
- Bachelor's degree with 6-8 years of Computer Science/Information Systems experience with Agile software development methodologies, OR
- Diploma with 10-12 years of Computer Science/Information Systems experience with Agile software development methodologies.

Preferred Qualifications:
- Experience with Agile software development methodologies (Scrum).
- Good communication skills and the ability to collaborate with senior leadership with confidence and clarity.
- Strong knowledge of clinical trial processes, especially Site Collaboration and Study Training processes.
- Familiarity with regulatory requirements for clinical trials (e.g., 21 CFR Part 11, ICH).
- Experience writing user requirements and acceptance criteria in agile project management systems such as JIRA.

Good-to-Have Skills:
- Familiarity with the Veeva Clinical Platform, especially Veeva Site Collaboration and Veeva Vault Study Training systems.
- Experience managing product features for PI planning and developing product roadmaps and user journeys.
- Experience maintaining SaaS (software as a service) solutions and COTS (commercial off-the-shelf) solutions.
- Technical thought leadership; able to communicate technical or complex subject matters in business terms.
- Jira Align experience.
- Experience with AWS services (like EC2, S3), Salesforce, Jira, API gateways, etc.
- SAFe for Teams certification (preferred).
- Certifications in Veeva products (preferred).
- Certified Business Analysis Professional (preferred).

Soft Skills:
- Able to work under minimal supervision.
- Skilled in providing oversight and mentoring team members.
- Demonstrated ability to delegate work effectively.
- Excellent analytical and gap/fit assessment skills.
- Strong verbal and written communication skills.
- Ability to work effectively with global, virtual teams.
- High degree of initiative and self-motivation.
- Ability to manage multiple priorities successfully.
- Team-oriented, with a focus on achieving team goals.
- Strong presentation and public speaking skills.

Posted 2 months ago

Apply

6.0 - 11.0 years

25 - 40 Lacs

Bengaluru

Work from Office

Hi, greetings from Thales India Pvt Ltd!

We are hiring a Senior Engineer/Technical Lead - DevOps Engineer for our Engineering Competency Center at our Bangalore location.

Experience: 6 to 12 years.
Notice Period: Immediate to a maximum of 30 days.

About Thales: Thales people architect identity management and data protection solutions at the heart of digital security. Businesses and governments rely on us to bring trust to the billions of digital interactions they have with people. Our technologies and services help banks exchange funds, people cross borders, energy become smarter, and much more. More than 30,000 organizations already rely on us to verify the identities of people and things, grant access to digital services, analyze vast quantities of information, and encrypt data to make the connected world more secure. Present in India since 1953, Thales is headquartered in Noida, Uttar Pradesh, and has operational offices and sites spread across Bengaluru, Delhi, Gurugram, Hyderabad, Mumbai, and Pune, among others. Over 1,800 employees are working with Thales and its joint ventures in India. Since the beginning, Thales has been playing an essential role in India's growth story by sharing its technologies and expertise in the Defence, Transport, Aerospace, and Digital Identity and Security markets.

Additional: Imperva, a Thales company, is a cybersecurity leader. Together, we provide innovative platforms designed to reduce the complexity and risks of managing and protecting more applications, data, and identities than any other company can. Our solutions enable over 35,000 organizations to deliver trusted digital services to billions of consumers around the world every day.

Job Summary: We're building a first-of-its-kind AI Firewall to protect applications using Large Language Models (LLMs). As one of the first DevOps Engineers on the team, you'll build and maintain the CI/CD pipelines, observability stack, and deployment infrastructure for a cutting-edge AI Firewall. Your work ensures our services are secure, fast, and always available.

Job Knowledge, Skills, and Qualifications:
- BE or M.Sc. in Computer Science, or equivalent.
- 6+ years of experience in DevOps, SRE, or Infrastructure Engineering.
- Proficient with Kubernetes, Docker, and cloud platforms (AWS/GCP/Azure).
- Experience developing performance-oriented applications.
- Strong scripting skills (Bash, Python, or Groovy).
- Background in AI/ML and networking concepts such as TCP/UDP, HTTP, TLS, etc.
- Bonus: experience with security tooling, API gateways, or LLM-related infrastructure.
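A recurring piece of the observability work described above is reducing request logs to latency percentiles and alerting when they breach an SLO. A minimal stdlib sketch (the threshold and sample numbers are invented for illustration):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile of a list of latency samples (in ms)."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100 * len(ordered))  # nearest-rank method
    return ordered[rank - 1]

def should_alert(samples, p=95, threshold_ms=250):
    """Fire an alert when the chosen percentile breaches the SLO threshold."""
    return percentile(samples, p) > threshold_ms

latencies = [120, 130, 110, 900, 125, 140, 115, 135, 128, 122]
print(percentile(latencies, 95), should_alert(latencies))
# 900 True — one slow outlier drives the p95 past the threshold
```

Monitoring stacks compute this over sliding windows; percentiles are preferred over means precisely because a single outlier like the 900 ms sample is what users actually feel.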

Posted 2 months ago

Apply

9.0 - 14.0 years

15 - 30 Lacs

Bengaluru

Work from Office

Qualifications/Skill Sets:
- Experience: 8+ years of experience in software engineering, with at least 3+ years at the Staff Engineer or Technical Lead level.
- Architecture expertise: Proven track record designing and building large-scale, multi-tenant SaaS applications on cloud platforms (e.g., AWS, Azure, GCP).
- Tech stack: Expertise in modern backend languages (e.g., Java, Python, Go, Node.js), frontend frameworks (e.g., React, Angular), and database systems (e.g., PostgreSQL, MySQL, NoSQL).
- Cloud & infrastructure: Strong knowledge of containerization (Docker, Kubernetes), serverless architectures, CI/CD pipelines, and infrastructure-as-code (e.g., Terraform, CloudFormation). End-to-end development and deployment experience in cloud applications.
- Distributed systems: Deep understanding of event-driven architecture, message queues (e.g., Kafka, RabbitMQ), and microservices.
- Security: Strong focus on secure coding practices and familiarity with identity management (OAuth2, SAML) and data encryption.
- Communication: Excellent verbal and written communication skills with the ability to present complex technical ideas to stakeholders.
- Problem solving: Strong analytical mindset and a proactive approach to identifying and solving system bottlenecks.
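The event-driven architecture listed above decouples producers from consumers through a broker: a producer publishes to a topic, and every subscriber to that topic receives the event. A toy in-process stand-in for a broker (illustrative only; Kafka/RabbitMQ add persistence, partitioning, and delivery guarantees on top of this idea):

```python
from collections import defaultdict

class EventBus:
    """Tiny in-process pub/sub bus: topics map to subscriber callbacks."""
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, event):
        # Deliver the event to every handler registered for the topic.
        for handler in self.subscribers[topic]:
            handler(event)

bus = EventBus()
audit_log = []
# Two independent consumers of the same event stream:
bus.subscribe("order.created", lambda e: audit_log.append(("audit", e["id"])))
bus.subscribe("order.created", lambda e: audit_log.append(("email", e["id"])))

bus.publish("order.created", {"id": 42})
print(audit_log)  # [('audit', 42), ('email', 42)]
```

The key property is that the publisher knows nothing about its consumers, so new services can subscribe to `order.created` without touching the producer.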

Posted 2 months ago

Apply

6.0 - 11.0 years

1 - 5 Lacs

Chennai

Work from Office

Role Summary: Provides ongoing systems administration support including installation, customization, maintenance and troubleshooting of hardware / software systems. Provides technical support and advises on the use of programming tools, database systems and networks. Provides support to address the availability and reliability issues on systems (Windows/Unix/Mainframe) across multiple locations. Evaluates and integrates new operating system versions, drivers and hardware. Operational responsibilities include remediation of daily incident tickets, system compliance responsibilities, system run enhancement testing and staging, policy / procedure enhancements and adherence, client contact coordination and operational recommendations. Monitors and tunes the system to achieve optimum performance levels in standalone and multi-tiered environments. Implements appropriate levels of system security. Prescribes system backup / disaster recovery procedures and directs recovery operations in the event of destruction of all or part of the operating system or other system components. Ensures 24x7 after-hour support. Responsibilities : Researches, evaluates, and recommends software packages in support of system architecture needs. Defines specifications and requirements for software package modification and customizations. Plans, coordinates, and manages installation, maintenance, and modification of software packages. Participates in software package performance, troubleshooting, and problem resolution. Provides coordination with software vendors. Provide requirements and advises for software packages to end users, administrators and technical support personnel for hardware and network design, documentation, troubleshooting, and technical training. Participates in establishing departmental policy with regard to data definition, data relationships, database design, database implementation, database operation, database security, and data accessibility. 
Performs database planning, administration, data standards, database security, and database documentation for software packages. Reviews the feasibility and advisability of proposed additions and modifications to the database. Installs and customizes software and hardware in order to manage, monitor, and otherwise support an enterprise system. Monitors network, hardware, and storage capacity through the implementation of an inventory management system. Designs and implements integrations of software packages. Consults with software vendors to evaluate software and hardware for enterprise network management. Defines and manages the configuration of data on network software and hardware components. Monitors all attached devices in a complex LAN environment, such as workstations, servers, bridges, and multi-station access units, including analyzing performance, diagnosing performance problems, and performing load balancing. Understands large-scale multi-tenant software products supporting multiple government agencies. Understands large-scale software integrations of multiple software products. Required Skills: 6+ years of designing application architectures for state and/or federal agencies required; 6+ years of designing application architectures that incorporate industry standards such as MITA 3.5, HIPAA, NIST, and other applicable standards required. Excellent knowledge of systems software/hardware, networks, and operating systems. Exceptional knowledge of processes and tools used for system management, problem reporting, change management, and support. Must have knowledge of one or more of the following products: IBM Decision Center, IBM Decision Server, Software AG webMethods, Broadcom/Software AG API Gateway. Preferred Skills: Undergraduate degree. Experience supporting on-prem data center and cloud for State and/or Federal agencies. 
Preferred knowledge of one or more of the following products: Dell Nutanix, Dell VxRail, VMware ESXi/vCenter/NSX/SRM, Microsoft Windows Server, RedHat Enterprise Linux, MS SQL Server, Nagios, New Relic APM/Infrastructure/Browser, Octopus Deploy, Puppet, Splunk, Veracode.

Posted 2 months ago

Apply

5.0 - 10.0 years

6 - 11 Lacs

Bengaluru

Work from Office

Req ID: 306669 We are currently seeking a Lead Data Engineer to join our team in Bangalore, Karnataka (IN-KA), India (IN). Position Overview We are seeking a highly skilled and experienced Lead Data/Product Engineer to join our dynamic team. The ideal candidate will have a strong background in streaming services and AWS cloud technology, leading teams and directing engineering workloads. This is an opportunity to work on the core systems supporting multiple secondary teams, so a history in software engineering and interface design would be an advantage. Key Responsibilities Lead and direct a small team of engineers engaged in: - Engineering reusable assets for the later build of data products - Building foundational integrations with Kafka, Confluent Cloud, and AWS - Integrating with a large number of upstream and downstream technologies - Providing best-in-class documentation for downstream teams to develop, test, and run data products built using our tools - Testing our tooling, and providing a framework for downstream teams to test their utilisation of our products - Helping to deliver CI, CD, and IaC both for our own tooling and as templates for downstream teams Required Skills and Qualifications - Bachelor's degree in Computer Science, Engineering, or related field - 5+ years of experience in data engineering - 3+ years of experience with real-time (or near-real-time) streaming systems - 2+ years of experience leading a team of data engineers - A willingness to independently learn a high number of new technologies and to lead a team in learning new technologies - Experience in AWS cloud services, particularly Lambda, SNS, S3, EKS, and API Gateway - Strong experience with Python - Strong experience with Kafka - Excellent understanding of data streaming architectures and best practices - Strong problem-solving skills and ability to think critically - Excellent communication skills to convey complex technical concepts both directly and through documentation - Strong use of version control and proven ability to govern a team in the best-practice use of version control - Strong understanding of Agile and proven ability to govern a team in the best-practice use of Agile methodologies Preferred Skills and Qualifications - An understanding of cloud networking patterns and practices - Experience with working on a library or other long-term product - Knowledge of the Flink ecosystem - Experience with Terraform - Experience with CI pipelines - Ability to code in a JVM language - Understanding of GDPR and the correct handling of PII - Knowledge of technical interface design - Basic use of Docker
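The posting above pairs Kafka streaming with serverless AWS services (Lambda, API Gateway). As a rough illustration of the serverless half, here is a minimal Python Lambda handler for an API Gateway proxy integration; the query parameter and the `hello` payload are invented for the sketch, not taken from the posting.

```python
import json

def lambda_handler(event, context):
    """Minimal handler for an API Gateway proxy integration.

    Only the event fields read below are assumed here; real proxy
    events carry many more keys (headers, path, requestContext, ...).
    """
    params = event.get("queryStringParameters") or {}
    name = params.get("name", "world")
    return {
        "statusCode": 200,
        "headers": {"Content-Type": "application/json"},
        "body": json.dumps({"message": f"hello, {name}"}),
    }
```

With a proxy integration, API Gateway delivers the HTTP request as this event dict and relays the returned `statusCode`, `headers`, and `body` back to the caller.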

Posted 2 months ago

Apply

8.0 - 12.0 years

10 - 14 Lacs

Gurugram

Work from Office

About The Role: AWS Cloud Engineer Required Skills and Qualifications: 4-7 years of hands-on experience with AWS services, including EC2, S3, Lambda, ECS, EKS, RDS/DynamoDB, and API Gateway. Strong working knowledge of Python and JavaScript. Strong experience with Terraform for infrastructure as code. Expertise in defining and managing IAM roles, policies, and configurations. Experience with networking, security, and monitoring within AWS environments. Experience with containerization technologies such as Docker and orchestration tools like Kubernetes (EKS). Strong analytical, troubleshooting, and problem-solving skills. Experience with AI/ML technologies and services like Textract is preferred. AWS Certifications (AWS Developer, Machine Learning - Specialty) are a plus. Performance parameters and measures: 1. Process: number of cases resolved per day, compliance with process and quality standards, meeting process-level SLAs, Pulse score, customer feedback. 2. Self-Management: productivity, efficiency, absenteeism, training hours, number of technical trainings completed.
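Defining scoped IAM policies is central to the role above. As a hedged sketch, a least-privilege read-only S3 policy might look like the document below, built here in Python; the bucket name "example-bucket" is illustrative.

```python
import json

# Sketch of a least-privilege IAM policy: read-only access to a single
# S3 bucket. ListBucket attaches to the bucket ARN, GetObject to the
# object ARNs -- each action is only valid against its resource type.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:ListBucket"],
            "Resource": "arn:aws:s3:::example-bucket",
        },
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::example-bucket/*",
        },
    ],
}
policy_json = json.dumps(policy, indent=2)
```

The same document can of course be emitted from Terraform (`aws_iam_policy`) rather than hand-built; the structure is what matters.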

Posted 2 months ago

Apply

3.0 - 7.0 years

5 - 9 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Locations: Pune/Bangalore/Hyderabad/Ahmedabad/Indore Job Responsibilities 1. Design and deploy scalable, highly available, secure, and fault-tolerant systems on AWS for the development and test lifecycle of our cloud security product. 2. Focus on building Dockerized application components by integrating with AWS EKS. 3. Modify existing applications in AWS to improve performance. 4. Passion for solving challenging issues. 5. Promote cooperation and commitment within a team to achieve common goals. 6. Examine data to grasp issues, draw conclusions, and solve problems. Must Have Skills 1. Demonstrated competency with the following AWS services: EKS, AppStream, Cognito, CloudWatch, Fargate Cluster, EC2, EBS, S3, Glacier, RDS, VPC, Route53, ELB, IAM, CloudFront, CloudFormation, SQS, SES, Lambda, API Gateway. 2. Knowledge of containerization and hosting technologies like Docker and Kubernetes is highly desirable. 3. Experienced with ECS and EKS managed node clusters, and Fargate. 4. Proficient knowledge of scripting (Linux/Unix shell scripts, Python, Ruby, etc.). 5. Hands-on experience with configuration management and deployment tools (CloudFormation, Terraform, etc.). 6. Mastery of CI/CD tools (Jenkins, etc.). 7. Building CI/CD pipelines and competency in Git; good working exposure to Jenkins and Git hosting platforms (GitHub, GitLab, Bitbucket). 8. Experience with DevOps services of cloud vendors (AWS/Azure/GCP, etc.) is necessary. 9. Must be from a development background. 10. Exposure to application and infrastructure monitoring tools (Grafana, Prometheus, Nagios, etc.) is an outstanding skill to have. 11. Excellent soft skills for IT professionals. 12. Familiar with AWS IAM policies and basic guidelines. 13. Sufficient understanding of AWS network components. Good to Have 1. Experience in integrating SCM, code quality, code coverage, and testing tools for CI/CD pipelines. 2. Developer background; has worked with several code analysis tools and integrations like SonarQube, Fortify, etc. 3. Understands static code analysis well.

Posted 2 months ago

Apply

7.0 - 9.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Role & responsibilities Implement and manage AWS Cloud Security Services such as WAF (Web Application Firewall), API Gateway, AWS Control Tower, Security Hub, and AWS Trusted Advisor to ensure security compliance and protection against threats. Design, configure, and monitor cloud networking components such as VPN, VPC, subnets, route tables, and other related networking services to maintain secure, efficient, and scalable network architectures. Enforce security best practices and standards within AWS environments, ensuring all deployments follow the industry's leading security practices for cloud infrastructure. Implement and manage security controls around cloud services such as Container Scanning and Software Composition Analysis (SCA) to ensure vulnerability management and risk mitigation. Continuously monitor and assess AWS environments for security risks, and implement strategies to proactively address vulnerabilities, misconfigurations, and potential threats. Lead and support the implementation of Identity and Access Management (IAM) policies, roles, and permissions to protect cloud resources and data. Integrate security automation tools into CI/CD pipelines to enforce security controls during development and deployment. Work closely with internal teams to advise on security aspects related to cloud networking, cloud-native services, and security configuration. Preferred candidate profile Minimum of 5+ years of experience in AWS Cloud Security, with a focus on securing cloud infrastructure and services. Deep expertise in AWS Security Services, including WAF (Web Application Firewall), API Gateway, AWS Control Tower, AWS Security Hub, and AWS Trusted Advisor. Strong understanding and hands-on experience with cloud networking components, such as VPN, VPC, subnets, route tables, and network security within AWS.

Posted 2 months ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Indore, Pune, Ahmedabad

Work from Office

Work location: Pune, Ahmedabad, Indore. Responsibilities Execution and refinement of cloud infrastructure automation strategy, architecture, and standards. This includes driving best practices and continuous improvement activities for the Intelligent Automation (IA) program with the following goals: Enable hyperautomation architecture, integrating multiple Intelligent Automation capabilities such as RPA, NLP, OCR, Process Mining, Task Mining, and Conversational AI. Reduce time for cloud infrastructure service provisioning. Raise user satisfaction with self-service functions. Improve service uptime through automation of manual, error-prone changes. Minimize handoffs between different teams which would otherwise be manual. Maximize developer and app-owner flexibility in consuming cloud services. Maximize re-use and maintainability of automation assets. Evaluate Intelligent Automation adjacent technologies, providing input to the Intelligent Automation program roadmap. Coordinate the identification of automation opportunities, define approaches and solution architecture. Lead efforts to codify, deploy, and operate assets for prioritized automation projects. 
Primary Automation Focus Areas: Cloud Management, IT Process Automation, Server Life Cycle Automation, Continuous Configuration Automation, Application Release Automation, Continuous Integration and Deployment, Storage Automation, Network Automation, Hardware Provisioning Automation. Experience and Skills: Demonstrated experience in leading technology projects preferred. Demonstrated ability for rapid understanding of business strategies and their implications for IT strategy. Excellent oral and written communication skills. Ability to handle a large volume of multi-faceted activities in a quality manner. Consistent track record of building and maintaining relationships with various leadership levels both internal and external to the IT organization. Proven track record of working in a strategic advisory role to senior IT and business leaders. Experience in defining IT strategy, standards, and architecture, particularly with automation, public cloud services, and IT infrastructure. Background in architecting and designing complex customer solutions in a rapidly evolving technology domain. 4+ years of experience as a solutions architect, enterprise architect, or in an architect consulting delivery role with a solution sales mentality. 2+ years focused on cloud environments and their supporting infrastructures. Demonstrated ability to work with distributed cross-functional teams to achieve success on behalf of customers. Automation and integration technologies: tooling such as Jenkins, Terraform, Ansible/Tower, Puppet/Chef, vRA/vRO, other orchestrators, configuration management frameworks, etc. Development languages: JavaScript, Python, Bash, Java, etc. API development, JSON, microservice architecture, API Gateways, SOA frameworks, ESBs, ETL pipelines, etc. IT infrastructure: AWS, Azure, Kubernetes, OpenStack, OpenShift, VMware vRealize Suite. Software-defined networking, storage systems architecture, virtualization, containers, serverless technologies, big-data and ML platforms. Familiarity with UiPath and MSFT Power Platform. Familiarity with security fundamentals at the infrastructure level, including cloud and hypervisor, and at the operating system level.

Posted 2 months ago

Apply

6.0 - 10.0 years

0 - 1 Lacs

Chennai

Work from Office

Microservices Developer (Java, Spring Boot, AWS ROSA) Design, develop, and deploy Java Spring Boot-based microservices on AWS ROSA platform Leverage Kong API Gateway and KPI Management Platform for API governance, observability, and traffic control

Posted 2 months ago

Apply

4.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

Experience in modernizing applications to container-based platforms using EKS, ECS, Fargate. Proven experience using DevOps tools during modernization. Solid experience with NoSQL databases. Should have used an orchestration engine like Kubernetes or Mesos. Java 8, Spring Boot, SQL, Postgres DB, and AWS. Secondary Skills: React, Redux, JavaScript. Experience-level knowledge of AWS deployment services (AWS Beanstalk, AWS tools & SDK, AWS Cloud9, AWS CodeStar, AWS Command Line Interface, etc.) and hands-on experience with AWS ECS, AWS ECR, AWS EKS, AWS Fargate, AWS Lambda functions, ElastiCache, S3 objects, API Gateway, AWS CloudWatch, and AWS SNS. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Creative problem-solving skills and superb communication skills. Should have worked on at least 3 engagements modernizing client applications to container-based solutions. Should be expert in any of the programming languages like Java, .NET, Node.js, Python, Ruby, Angular.js Preferred technical and professional experience Experience in distributed/scalable systems Knowledge of standard tools for optimizing and testing code Knowledge/Experience of Development/Build/Deploy/Test life cycle

Posted 2 months ago

Apply


4.0 - 8.0 years

9 - 13 Lacs

Bengaluru

Work from Office

As a Software Developer you'll participate in many aspects of the software development lifecycle, such as design, code implementation, testing, and support. You will create software that enables your clients' hybrid-cloud and AI journeys. Your primary responsibilities include: Comprehensive Feature Development and Issue Resolution: Working on end-to-end feature development and solving challenges faced in the implementation. Stakeholder Collaboration and Issue Resolution: Collaborate with key stakeholders, internal and external, to understand the problems and issues with the product and features, and solve the issues as per defined SLAs. Continuous Learning and Technology Integration: Being eager to learn new technologies and implementing the same in feature development. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Creative problem-solving skills and superb communication skills. Container-based solutions. Strong experience with Node.js and the AWS stack - AWS Lambda, AWS API Gateway, AWS CDK, AWS DynamoDB, AWS SQS. Experience with infrastructure as code using AWS CDK. Expertise in encryption and decryption techniques for securing APIs, and API authentication and authorization. Experience with Lambda and API Gateway is the primary requirement. Candidates holding the AWS Certified Cloud Practitioner / AWS Certified Developer Associate certifications will be preferred. Preferred technical and professional experience Experience in distributed/scalable systems Knowledge of standard tools for optimizing and testing code Knowledge/Experience of Development/Build/Deploy/Test life cycle

Posted 2 months ago

Apply

12.0 - 17.0 years

12 - 16 Lacs

Mumbai

Work from Office

Experience of 12+ years in software engineering and at least 3+ years in API architecture. Career growth from developer to designing and architecting API platforms. Exposure to application integrations, API architecture, and private cloud (VMware), with knowledge of public cloud (Azure/AWS). Experience with designing and implementing API-based platforms and architecture for organizations of similar scale and domain. Knowledge of integration products like IBM IIB, IBM Gateway, API Gateways. Exposure to application integrations and API microservices architecture. BFSI domain experience preferred.

Posted 2 months ago

Apply

6.0 - 11.0 years

5 - 9 Lacs

Chennai

Work from Office

Full Stack Engineer: React.js, Node.js, Next.js, Nest.js, Express.js. At least 6 years of relevant experience and 8-10 years overall for a Full Stack Engineer (React/Node.js). Technical Skills Required: Cloud Platform: Amazon Web Services (AWS) - S3, IAM roles and policies, Lambda, API Gateway, Cognito user pool, CloudWatch. Programming Languages: React.js, Node.js. Databases: PostgreSQL, MongoDB, AWS DynamoDB. Scripting Languages: JavaScript, TypeScript, HTML, XML. Application Servers: Tomcat 6.0/7.0, Nginx 1.23.2. Frameworks: Next.js, Nest.js, Express.js. Version Control Systems: GitLab.

Posted 2 months ago

Apply

3.0 - 5.0 years

6 - 10 Lacs

Pune

Work from Office

Responsibilities: - Participate in team prioritization discussions with Product/Business stakeholders. - Estimate and own delivery tasks (design, dev, test, deployment, configuration, documentation) to meet the business requirements. - Automate build, operate, and run aspects of software. - Drive code/design/process trade-off discussions within their team when required. - Report status and manage risks within their primary application/service. - Drive integration of services focusing on customer journey and experience. - Perform demos/acceptance discussions when interacting with Product owners. - Understands operational and engineering experience, actively works to improve experience and metrics in ownership area. - Develop complete understanding of end-to-end technical architecture and dependency systems. Requirements: - Expert with previous experience in the .NET tech stack: API development, SQL Server DB, Windows services, command-line execution of a .NET program. - Familiar with secure coding standards (e.g., OWASP, CWE, SEI CERT) and vulnerability management. - Understands the basic engineering principles used in building and running mission critical software capabilities (security, customer experience, testing, operability, simplification, service-oriented architecture). - Able to perform debugging and troubleshooting to analyze core, heap, thread dumps and remove coding errors. - Understands and implements standard branching (e.g., Gitflow) and peer review practices. - Has skills in test driven and behavior driven development (TDD and BDD) to build just enough code and collaborate on the desired functionality. - Understands internals of operating systems (Windows, Linux) to write interoperable and performant code. - Understands use cases for advanced design patterns (e.g., service-to-worker, MVC, API gateway, intercepting filter, dependency injection, lazy loading, all from the gang of four) to implement efficient code. 
- Understands and implements Application Programming Interface (API) standards and cataloging to drive API/service adoption and commercialization. - Has skills to author test code with lots of smaller tests followed by few contract tests at service level and fewer journey tests at the integration level (Test Pyramid concept). - Apply tools (e.g., Sonar) and techniques to scan and measure code quality and anti-patterns as part of development activity. - Has skills to collaborate with team and business stakeholders to estimate requirements (e.g., story pointing) and prioritize based on business value. - Has skills to elaborate and estimate non-functional requirements, including security (e.g., data protection, authentication, authorization), regulatory, and performance (SLAs, throughput, transactions per second). - Has skills to orchestrate release workflows and pipelines, and apply standardized pipelines via APIs to achieve CI and CD using industry standard tools (e.g., Jenkins, AWS/Azure pipelines, XL Release, others). - Understands how to build robust tests to minimize defect leakage by performing regression, performance, deployment verification, and release testing. - Has skills to conduct product demos and coordinate with product owners to drive product acceptance sign-offs.
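Of the design patterns listed above, dependency injection is the easiest to show compactly. This sketch uses Python for brevity rather than the posting's .NET stack; the class and method names are invented for illustration.

```python
class SmtpMailer:
    """Concrete dependency (stubbed: no real SMTP traffic here)."""
    def send(self, to, body):
        return f"sent to {to}: {body}"

class WelcomeService:
    """Constructor injection: the collaborator is passed in rather
    than constructed internally, so tests can substitute a fake."""
    def __init__(self, mailer):
        self.mailer = mailer

    def welcome(self, user_email):
        return self.mailer.send(user_email, "Welcome aboard!")
```

In the posting's .NET world the same shape is typically wired through a container such as `IServiceCollection` in Microsoft.Extensions.DependencyInjection, but the principle is identical: depend on an abstraction the caller supplies.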

Posted 2 months ago

Apply

2.0 - 4.0 years

4 - 8 Lacs

Bengaluru

Work from Office

We are seeking a highly skilled and motivated Kubernetes Engineer with 2-4 years of experience to join our dynamic team. The ideal candidate will have a strong background in Kubernetes, Ansible, Docker, API Gateways, Ingress Controllers, Load Balancers, and Service Mesh. This role involves troubleshooting issues, managing end-to-end deployments, and ensuring seamless operation both on-premises and in the cloud. Key Responsibilities : Kubernetes Management : Deploy, manage, and troubleshoot Kubernetes clusters. Implement and manage Kubernetes Ingress Controllers for routing external traffic to services within the cluster. Configuration Management : Utilize Ansible for configuration management, automation, and orchestration tasks. Develop and maintain Ansible playbooks to automate deployment processes and system configurations. Containerization : Build, deploy, and manage Docker containers. Optimize containerized applications for performance and scalability. API Gateways : Configure and manage API Gateways to facilitate communication between microservices. Ensure secure and efficient API management and traffic control. Cloud and On-Premises Deployment : Execute end-to-end deployment processes for applications and services both on-premises and in cloud environments (e.g., AWS, Azure, GCP). Ensure seamless integration and operation across different deployment platforms. Load Balancers : Implement and manage load balancing solutions to ensure high availability and scalability of services. Configure and maintain load balancers to distribute traffic efficiently across servers. Service Mesh : Deploy and manage service mesh solutions (e.g., Istio, Linkerd) to enhance service-to-service communication, security, and observability. Integrate service mesh with existing infrastructure and applications. Troubleshooting and Support : Identify, analyze, and resolve complex issues in Kubernetes clusters, containerized applications, and network configurations. 
Provide support and troubleshooting assistance to development and operations teams. Continuous Improvement : Stay updated with the latest trends and technologies in Kubernetes, containerization, and cloud computing. Implement best practices and continuous improvement strategies for deployment processes and system management. Qualifications : 2-4 years of hands-on experience with Kubernetes, Ansible, and Docker. Proficient in managing API Gateways, Ingress Controllers, Load Balancers, and Service Mesh. Experience with cloud platforms such as AWS, Azure, or GCP. Strong troubleshooting skills and the ability to handle complex technical issues. Excellent communication and teamwork skills. Relevant certifications (e.g., Certified Kubernetes Administrator, AWS Certified Solutions Architect) are a plus.
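The Ingress Controller responsibilities above come down to mapping external request paths onto in-cluster services. As a toy sketch of that longest-prefix matching (the rules and service names are invented, not from any real manifest):

```python
# Toy illustration of prefix-based routing, the core job an Ingress
# controller performs when sending external paths to cluster services.
ROUTES = {
    "/api": "api-service:8080",
    "/app": "frontend-service:3000",
    "/": "default-backend:80",
}

def route(path):
    """Return the backend for `path`, longest matching prefix wins."""
    for prefix in sorted(ROUTES, key=len, reverse=True):
        if path.startswith(prefix):
            return ROUTES[prefix]
    return None
```

Real controllers (e.g., NGINX Ingress, Traefik) derive equivalent rules from Ingress resources and handle TLS, rewrites, and health checks on top; this only illustrates the matching logic.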

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies