
14832 Containerization Jobs - Page 5

Set up a job alert
JobPe aggregates listings for convenient browsing; applications are submitted directly on the original job portal.

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Position Summary: We are seeking a highly skilled Full Stack Developer to join our dynamic development team. In this role, you will be responsible for rapidly developing web-based MVP applications that showcase cutting-edge AI models through interactive web applications containing 3D models and graph visualizations. The ideal candidate will have expertise in React, Node.js, Three.js, and Azure managed services, and a passion for developing innovative web applications. Your work will be crucial in demonstrating the potential of our technology to key customers. A Snapshot of Your Day / How You'll Make an Impact (Responsibilities of the Role): Build and rapidly iterate compelling, high-quality, responsive web applications from concept to customer demo. Experience in full-stack web development using TypeScript/JavaScript, React, HTML5 and CSS3. Develop robust and scalable back-end services using Node.js or serverless functions to serve data and integrate with AI/ML models. Develop interactive frontend visualizations in 2D/3D using modern web technologies. Collaborate closely with AI scientists and domain experts to understand requirements and translate complex data into intuitive user interfaces. Write clean, maintainable, and efficient code following industry best practices. Participate in code reviews and provide constructive feedback to team members. What You Bring: Bachelor's degree in computer science or a related field, or equivalent practical experience. Proven experience building rich, interactive frontends with React and TypeScript. Hands-on experience with a 3D web graphics library (Three.js). Experience building full-stack applications with Next.js, including creating backend logic using API Routes. Experience with cloud-centric development and deployment (Azure), including serverless backends (Azure Functions), Azure Storage and other common services. Familiarity with DevOps tools and CI/CD pipelines. Demonstrates enthusiasm, creativity in problem-solving, critical thinking, and effective communication in a distributed team environment. Strong communication skills in English. Preferred Qualifications: Experience with 3D modelling (Blender or equivalent). Experience with Nvidia Omniverse, its Python SDK, and the Universal Scene Description (USD) format. Experience interacting with AI/ML models or AI inference endpoints. Familiarity with containerization using Docker. Knowledge of authentication and authorization frameworks such as OAuth, JWT, and OpenID Connect.

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Note: Minimum 6 years of experience is required. Core Characteristics and Soft Skills: Beyond technical proficiency, the right mindset and interpersonal skills are crucial for success on our team. We prioritize candidates who demonstrate: Problem-Solving Acumen: The ability to analyze complex problems, break them down, evaluate different approaches, and implement robust, efficient solutions. This includes troubleshooting existing systems and designing new ones. Independence and Initiative: We value engineers who can take ownership of tasks, research potential solutions independently, make informed decisions, and drive work forward with minimal supervision once objectives are clear. Dependability and Accountability: Team members must be reliable, meet commitments, deliver high-quality work, and take responsibility for their contributions. Strong Communication Skills: Clear, concise communication (both written and verbal) is essential. This includes explaining technical concepts to varied audiences, actively listening, providing constructive feedback, and documenting work effectively. Collaboration and Teamwork: Ability to work effectively within a team structure, share knowledge, participate in code reviews, and contribute to a positive team dynamic. Adaptability and Eagerness to Learn: The technology landscape and business needs evolve. We seek individuals who are curious, adaptable, and willing to learn new technologies and methodologies. Core Technical Skillset: Our current technology stack forms the foundation of our work. Proficiency or strong experience in the following areas is highly desirable: Backend Development: Java: Deep understanding of Java (latest LTS versions preferred). Spring Boot: Extensive experience building applications and microservices using the Spring Boot framework and its ecosystem (e.g., Spring Data, Spring Security, Spring Cloud). Messaging Systems: Apache Kafka: Solid understanding of Kafka concepts (topics, producers, consumers, partitioning, brokers) and experience building event-driven systems. Containerization & Orchestration: Kubernetes: Practical experience deploying, managing, and troubleshooting applications on Kubernetes. OCP (OpenShift Container Platform): Experience specifically with OpenShift is a significant advantage. AKS (Azure Kubernetes Service): Experience with AKS is also highly relevant. (General Docker knowledge is expected.) CI/CD & DevOps: GitHub Actions: Proven experience in creating, managing, and optimizing CI/CD pipelines using GitHub Actions for build, test, and deployment automation. Understanding of Git branching strategies and DevOps principles. Frontend Development: JavaScript: Strong proficiency in modern JavaScript (ES6+). React: Experience building user interfaces with the React library and its common patterns/ecosystem (e.g., state management, hooks). Database & Data Warehousing: Oracle: Experience with Oracle databases, including writing efficient SQL queries, understanding data modeling, and potentially PL/SQL. Snowflake: Experience with the Snowflake cloud data warehouse, including data loading, querying (SQL), and understanding its architecture. Scripting: Python: Proficiency in Python for scripting, automation, data manipulation, or potentially backend API development (e.g., using Flask/Django, though Java/Spring is primary).
Domain Understanding (Transportation & Logistics): While not strictly mandatory, candidates with experience or a demonstrated understanding of the transportation and logistics industry (e.g., supply chain management, freight operations, warehousing, fleet management, routing optimization, TMS systems) will be able to contribute more quickly and effectively. They can better grasp the business context and user needs. Additional Valuable Skills: We are also interested in candidates who may possess skills in related areas that complement our core activities. Data Science & Analytics: Experience with data analysis techniques. Knowledge of Machine Learning (ML) concepts and algorithms (particularly relevant for optimization, forecasting, and anomaly detection in logistics). Proficiency with Python data science libraries (Pandas, NumPy, Scikit-learn). Experience with data visualization tools and techniques. Understanding of optimization algorithms (linear programming, vehicle routing problem algorithms, etc.). Cloud Platforms: Broader experience with cloud services (particularly Azure, but also AWS or GCP) beyond Kubernetes (e.g., managed databases, serverless functions, monitoring services). Testing: Strong experience with automated testing practices and tools (e.g., JUnit, Mockito, Cypress, Selenium, Postman/Newman). API Design & Management: Deep understanding of RESTful API design principles, API security (OAuth, JWT), and potentially experience with API gateways. Monitoring & Observability: Experience with tools like Prometheus, Grafana, ELK Stack (Elasticsearch, Logstash, Kibana), Datadog, Dynatrace, etc., for monitoring application health and performance. Security: Awareness and application of secure coding practices (e.g., OWASP Top 10).

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Join us as a Software Engineer. This is an opportunity for a driven Software Engineer to take on an exciting new career challenge. Day-to-day, you'll be engineering and maintaining innovative, customer-centric, high-performance, secure and robust solutions. It's a chance to hone your existing technical skills and advance your career while building a wide network of stakeholders. We're offering this role at associate level. What you'll do: In your new role, you'll be working within a feature team to engineer software, scripts and tools, as well as liaising with other engineers, architects and business analysts across the platform. You'll also be: Producing complex and critical software rapidly and of high quality which adds value to the business. Working in permanent teams who are responsible for the full life cycle, from initial development, through enhancement and maintenance to replacement or decommissioning. Collaborating to optimise our software engineering capability. Designing, producing, testing and implementing our working software solutions. Working across the life cycle, from requirements analysis and design, through coding to testing, deployment and operations. The skills you'll need: To take on this role, you'll need a background in software engineering, software design, and architecture, and an understanding of how your area of expertise supports our customers. You'll also need: At least 5 years of experience working in the Identity and Access Management domain (OAuth, Ping Access, Ping Federate). Proven experience with AWS services such as EC2, S3, IAM, Lambda, RDS, CloudFormation, and VPC. Expertise in infrastructure as code (IaC) using Terraform or AWS CloudFormation. Experience with CI/CD tools and practices, including GitLab CI, Jenkins, or AWS-native solutions. Familiarity with containerization and orchestration tools like Docker, Kubernetes, ECS, or EKS. Experience with monitoring, logging, and alerting tools (e.g., CloudWatch, ELK Stack, Prometheus, Grafana). Excellent communication and collaboration skills, with the ability to work effectively with teams across geographies.
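To gauge the hands-on AWS work this listing describes, here is a minimal Python sketch using boto3 (an illustration only, assuming configured AWS credentials; the role name is a placeholder, not part of the posting):

```python
import boto3

# List the account's S3 buckets
s3 = boto3.client("s3")
for bucket in s3.list_buckets()["Buckets"]:
    print(bucket["Name"])

# Inspect an IAM role used by a deployment pipeline (placeholder name)
iam = boto3.client("iam")
role = iam.get_role(RoleName="deploy-role")
print(role["Role"]["Arn"])
```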

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Summary: We are looking for a skilled Java Software Engineer to join our development team. The ideal candidate will have experience designing, developing, and maintaining scalable Java applications. You should be comfortable with both back-end and full-stack development, working in a collaborative environment to build high-performance, secure, and user-friendly solutions. Key Responsibilities: Design, develop, and maintain robust Java-based applications. Write clean, maintainable, and efficient code following best practices. Collaborate with cross-functional teams including product managers, QA, and DevOps. Troubleshoot and resolve technical issues in development, testing, and production environments. Participate in code reviews and contribute to team knowledge sharing. Develop and integrate APIs and external services. Optimize application performance and scalability. Ensure security and data protection compliance. Maintain documentation for software development and deployment processes. Requirements: Bachelor’s degree in Computer Science, Engineering, or related field. Proven experience as a Java Developer or Java Software Engineer. Strong knowledge of Java (preferably Java 8 or above), OOP, and design patterns. Hands-on experience with frameworks like Spring Boot, Hibernate, or JPA. Familiarity with RESTful APIs and microservices architecture. Experience with relational databases (e.g., MySQL, PostgreSQL) and/or NoSQL (e.g., MongoDB). Proficient in version control tools like Git. Basic knowledge of front-end technologies (HTML, CSS, JavaScript) is a plus. Good understanding of Agile methodologies and SDLC. Preferred Qualifications: Experience with cloud platforms like AWS, Azure, or GCP. Knowledge of containerization tools like Docker and Kubernetes. Familiarity with CI/CD pipelines (e.g., Jenkins, GitHub Actions). Experience with unit testing and test-driven development (TDD). Soft Skills: Excellent problem-solving skills. Strong communication and teamwork abilities. Eagerness to learn and adapt to new technologies. Attention to detail and a passion for quality.

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will: Implement and maintain Continuous Integration/Continuous Deployment (CI/CD) pipelines using tools like Jenkins. Support and maintain various DevOps tools, including Jenkins, Terraform and GCP products such as GCE, GKE, BigQuery, Pub/Sub, alerting and monitoring. Create and manage infrastructure as code using Terraform for efficient and scalable deployments. Collaborate with development and operations teams to ensure smooth integration and deployment processes. Build and manage images using tools like Packer and Docker, and implement image rotation strategies (see the sketch after this listing). Monitor and respond to alerts and incidents, troubleshooting production issues promptly. Ensure end-to-end infrastructure management, including configuration management, monitoring, and security. Implement and enforce security standards and best practices for the DevOps environment. Provide technical support and guidance to development teams on DevOps processes and tools. Stay up to date with the latest DevOps trends and technologies, recommending improvements as needed. Document processes, procedures, and configurations for future reference. Implement and support GitOps practices, and configure repositories to follow best practices such as code owners and webhooks. Requirements To be successful in this role, you should meet the following requirements: Proficient in scripting and automation using languages like Bash, Python, or Groovy. Proven experience as a DevOps Engineer or similar role, with a focus on CI/CD, Jenkins, Terraform, GCP, and relevant DevOps tools. Familiarity with containerization technologies like Docker and orchestration tools like Kubernetes (GKE or an equivalent) and Helm. Fair knowledge of disaster recovery and backups, with the ability to troubleshoot production issues. Familiarity with ICE compliance and standards, with the ability to guide development teams. Strong understanding of end-to-end infrastructure management and security standards. Experience with image creation and rotation using tools like Packer and Docker. Knowledge of banking industry processes and regulations is a plus. Excellent problem-solving and communication skills. Ability to work independently and in a team, with a flexible and adaptable mindset. Strong attention to detail and ability to prioritize tasks effectively. Experience managing on-prem IKP or VM Unix-based workloads and their maintenance would be a plus. Familiarity with connectivity patterns using Jenkins to run various pipelines interacting with Google-managed or unmanaged resources. Knowledge of HSBC internal controls related to CI/CD, including tool integration and adoption for PDP. You’ll achieve more when you join HSBC.
www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India
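As a rough illustration of the image build and rotation work mentioned in this listing, a Python sketch using the Docker SDK (the docker package); the image tag, the 30-day threshold, and the presence of a local Dockerfile are assumptions, not details from the posting:

```python
import docker
from datetime import datetime, timedelta, timezone

client = docker.from_env()

# Build and tag an application image from a local Dockerfile (placeholder tag)
image, _ = client.images.build(path=".", tag="myapp:build-123")

# Naive rotation policy: drop locally cached "myapp" images older than 30 days
cutoff = datetime.now(timezone.utc) - timedelta(days=30)
for img in client.images.list(name="myapp"):
    created = datetime.fromisoformat(img.attrs["Created"][:19]).replace(tzinfo=timezone.utc)
    if created < cutoff:
        client.images.remove(img.id, force=True)
```

A production rotation strategy would normally act on a registry rather than a single host and keep a minimum number of recent tags.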

Posted 1 day ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position Summary: We are seeking a highly skilled Full Stack Developer to join our dynamic development team. In this role, you will be responsible for rapidly developing web-based MVP applications that showcase cutting-edge AI models through interactive web applications containing 3D models and graph visualizations. The ideal candidate will have expertise in React, Node.js, Three.js, and Azure managed services, and a passion for developing innovative web applications. Your work will be crucial in demonstrating the potential of our technology to key customers. A Snapshot of Your Day / How You'll Make an Impact (Responsibilities of the Role): Build and rapidly iterate compelling, high-quality, responsive web applications from concept to customer demo. Experience in full-stack web development using TypeScript/JavaScript, React, HTML5 and CSS3. Develop robust and scalable back-end services using Node.js or serverless functions to serve data and integrate with AI/ML models. Develop interactive frontend visualizations in 2D/3D using modern web technologies. Collaborate closely with AI scientists and domain experts to understand requirements and translate complex data into intuitive user interfaces. Write clean, maintainable, and efficient code following industry best practices. Participate in code reviews and provide constructive feedback to team members. What You Bring: Bachelor's degree in computer science or a related field, or equivalent practical experience. Proven experience building rich, interactive frontends with React and TypeScript. Hands-on experience with a 3D web graphics library (Three.js). Experience building full-stack applications with Next.js, including creating backend logic using API Routes. Experience with cloud-centric development and deployment (Azure), including serverless backends (Azure Functions), Azure Storage and other common services. Familiarity with DevOps tools and CI/CD pipelines. Demonstrates enthusiasm, creativity in problem-solving, critical thinking, and effective communication in a distributed team environment. Strong communication skills in English. Preferred Qualifications: Experience with 3D modelling (Blender or equivalent). Experience with Nvidia Omniverse, its Python SDK, and the Universal Scene Description (USD) format. Experience interacting with AI/ML models or AI inference endpoints. Familiarity with containerization using Docker. Knowledge of authentication and authorization frameworks such as OAuth, JWT, and OpenID Connect.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : Spring Boot Good to have skills : AWS Architecture, AWS Administration, AWS Application Integration Minimum 5 Year(s) Of Experience Is Required Educational Qualification : 15 years full time education Summary: As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve collaborating with various teams to ensure project milestones are met, facilitating discussions to address challenges, and guiding your team through the development process. You will also engage in strategic planning sessions to align project goals with organizational objectives, ensuring that the applications developed meet both user needs and technical requirements. Your role will be pivotal in fostering a collaborative environment that encourages innovation and problem-solving among team members. Roles & Responsibilities: - Expected to be an SME. - Collaborate and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute on key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior team members to enhance their skills and knowledge. - Facilitate regular team meetings to track progress and address any roadblocks. Professional & Technical Skills: - Must To Have Skills: Proficiency in Spring Boot. - Good To Have Skills: Experience with AWS Architecture, AWS Administration, AWS Application Integration. - Strong understanding of microservices architecture and design patterns. - Experience with RESTful API development and integration. - Familiarity with containerization technologies such as Docker and Kubernetes. Additional Information: - The candidate should have minimum 5 years of experience in Spring Boot. - This position is based in Mumbai. - A 15 years full time education is required.

Posted 1 day ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Position Overview: We are expanding our Technology team and looking for an AI Engineer who can develop and implement AI-powered applications at scale. We seek a pragmatic problem solver with strong backend engineering skills, excellent technical abilities, and experience applying AI concepts and frameworks to build robust applications. You will design, develop, and optimize AI solutions that address complex business challenges for our clients. ShyftLabs is a leading data and AI company, helping enterprises unlock value through AI-driven products and solutions. We specialize in data platforms and AI-powered automation, offering consulting, prototyping, and solution delivery to transform data into actionable insights. Key Responsibilities: Research, design, and develop AI-powered applications and solutions that address complex business challenges Implement and integrate AI models into scalable production environments Collaborate with cross-functional teams to deliver end-to-end AI solutions Optimize AI implementations for performance, reliability, and cost efficiency Apply engineering best practices to ensure AI systems meet quality standards and can scale Stay current with the latest AI tools and frameworks to propose innovative solutions Document technical processes and implementations to improve knowledge sharing Basic & Preferred Qualifications: Bachelor's or Master's degree in Computer Science, Machine Learning, AI, or related field Strong software engineering background with 3+ years of experience building applications at scale Proven experience implementing and deploying AI solutions in production environments Proficient in programming languages such as Python, JavaScript, or Java Experience with modern AI frameworks and tools for building AI applications Solid understanding of software design patterns, API development, and system architecture Knowledge of containerization technologies and cloud platforms for AI development Strong problem-solving skills and ability to translate business requirements into technical solutions We are proud to offer a competitive salary alongside a comprehensive benefits package. We pride ourselves on the growth of our employees, offering extensive learning and development resources to help you advance your career.
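One minimal way to expose a model behind an API of the kind this listing describes is sketched below in Python with Flask; the predict function is a stand-in for a real model or inference endpoint, and the route name is illustrative only:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict(text: str) -> dict:
    # Placeholder for a real model call or an external AI inference endpoint
    label = "positive" if "good" in text.lower() else "negative"
    return {"label": label, "score": 0.5}

@app.route("/predict", methods=["POST"])
def predict_route():
    payload = request.get_json(force=True)
    return jsonify(predict(payload.get("text", "")))

if __name__ == "__main__":
    # Containerizing this service behind a WSGI server would be the usual next step
    app.run(host="0.0.0.0", port=8080)
```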

Posted 1 day ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level. Job Description Job Title: Intermediate Application Developer Experience Range: 3-5 Years Location: Chennai, Hybrid Employment Type: Full-Time About UPS UPS is a global leader in logistics, offering a broad range of solutions that include transportation, distribution, supply chain management, and e-commerce. Founded in 1907, UPS operates in over 220 countries and territories, delivering packages and providing specialized services worldwide. Our mission is to enable commerce by connecting people, places, and businesses, with a strong focus on sustainability and innovation. About Global Integration Center (GIC) GIC is a cluster of middleware platforms that supports different integration patterns for B2B, A2A and customer implementations. About The Role We are seeking an experienced Middleware Application Support engineer. The person should be familiar with different middleware patterns and should have developed and supported transformation and communication protocols. Key Responsibilities Perform troubleshooting of complex technical issues across the middleware application. Work with cross-functional teams and stakeholders in responding to critical system and application issues. Work on production service incidents and close them out to eventual resolution. Perform and support deployments of mappings and configurations. Ensure best practices, timely resolution of production issues, and the reliability and security of systems. Debug, update and test simple to medium mapping changes to resolve issues impacting business continuity. Primary Skills IBM Design Studio, Launcher RDBMS concepts, PL/SQL Linux/Unix scripting and OS knowledge Messaging Protocols: IBM MQ, JMS, AS2, FTP Messaging Formats: ANSI X12, EDIFACT, XML, JSON, Flat Files Secondary Skills Programming languages like Java, Python, Perl Editors like XML Spy, TextPad, UltraEdit Qualifications Bachelor’s degree in computer science, Information Technology, or related field. Proven experience of building, deploying and supporting IBM Design Studio maps. Excellent problem-solving skills and the ability to lead technical discussions. Nice To Have Experience or knowledge of Oracle WebLogic or IBM webMethods. Exposure to containerization technologies (Docker, Kubernetes). Soft Skills Strong problem-solving abilities and attention to detail. Excellent communication skills, both verbal and written. Effective time management and organizational capabilities. Ability to work independently and within a collaborative team environment. Strong interpersonal skills to engage with cross-functional teams. About The Team You will be part of a dynamic and collaborative team of developers. Our team values innovation, continuous learning, and agile best practices. Employee Type Permanent UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.

Posted 1 day ago

Apply

14.0 - 20.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Title: Principal Architect Location: Hybrid - Hyderabad/Mumbai/Pune/Bengaluru/Chennai About the Job: We are seeking a seasoned Java Full Stack Enterprise Architect with 14 to 20 years of experience to lead and drive enterprise-level projects. The ideal candidate will have strong expertise in Java, AWS (Amazon Web Services), Kafka, Docker, Kubernetes, and other cutting-edge technologies. This role requires experience in application transformation, modernization, and containerization initiatives. What you will do: Architectural Leadership: Design and implement scalable, resilient, and secure full-stack solutions using Java and modern frameworks. Provide end-to-end architecture guidance for enterprise transformation and modernization projects. Define best practices for application design, development, deployment, and maintenance in a cloud-native environment. Cloud and AWS Solutioning: Architect solutions leveraging AWS services (e.g., EC2, S3, Lambda, RDS, DynamoDB). Develop and maintain cloud migration strategies, ensuring high availability and cost optimization. Create detailed documentation, including solution designs and architectural diagrams. Containerization & Orchestration: Lead the adoption of Docker and Kubernetes to containerize applications. Oversee the orchestration of microservices in distributed systems to ensure scalability and reliability. Define CI/CD pipelines to automate deployment processes. Data Streaming & Integration: Design and implement event-driven architectures using Kafka (see the sketch after this listing). Ensure seamless integration across enterprise systems and data pipelines. Transformation & Modernization: Drive legacy application modernization to microservices and cloud-native architecture. Assess the current technology stack and recommend improvements to align with business goals. Team Collaboration: Mentor engineering teams, fostering a culture of innovation and continuous improvement. Collaborate with cross-functional teams, including product managers, developers, and business stakeholders. Who you are: Education & Experience: Bachelor’s degree in Computer Science, Engineering, or a related field (Master’s degree preferred). 14-20 years of experience. Technical Skills: Core Expertise: Java, Spring Boot, RESTful APIs, and front-end technologies (e.g., Angular, React, or Vue.js). Cloud Technologies: Strong experience with AWS services, cloud-native application development, and deployment strategies. Containerization & Orchestration: Proficiency in Docker, Kubernetes, and Helm. Data Streaming: Advanced knowledge of Kafka, including architecture, implementation, and troubleshooting. Modernization: Hands-on experience with application transformation and legacy system modernization projects. Leadership: Proven ability to lead large teams, drive complex projects, and align technical deliverables with business objectives. Preferred Skills: Certifications: AWS Certified Solutions Architect or equivalent certifications. Strong understanding of DevOps practices and tools (e.g., Jenkins, GitHub Actions). Soft Skills: Attention to detail. Dedicated self-starter with excellent people skills. Quick learner and a go-getter. Effective time and project management. Analytical thinker and a great team player. Strong leadership, interpersonal and problem-solving skills. Ability to work in a fast-paced, dynamic environment. English language proficiency is required to effectively communicate in a professional environment. Excellent communication skills are a must.
Strong problem-solving skills and a creative mindset to bring fresh ideas to the table. Should demonstrate confidence and self-assurance in their skills and expertise, enabling them to contribute to team success and engage with colleagues and clients in a positive, assured manner. Should be accountable and responsible for deliverables and outcomes. Should demonstrate ownership of tasks, meet deadlines, and ensure high-quality results. Demonstrates strong collaboration skills by working effectively with cross-functional teams, sharing insights, and contributing to shared goals and solutions. Continuously explores emerging trends, technologies, and industry best practices to drive innovation and maintain a competitive edge.
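The event-driven Kafka integration referenced in this listing could look roughly like the following Python sketch using the kafka-python library; the broker address, topic name, consumer group, and event fields are illustrative assumptions:

```python
import json
from kafka import KafkaConsumer, KafkaProducer

# Producer: publish a shipment event to a topic (names are placeholders)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("shipment-events", {"order_id": "A-1001", "status": "DISPATCHED"})
producer.flush()

# Consumer: a downstream microservice reacting to the same event stream
consumer = KafkaConsumer(
    "shipment-events",
    bootstrap_servers="localhost:9092",
    group_id="tracking-service",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
for message in consumer:
    print(message.value)
```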

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

About Toyota Connected: If you want to change the way the world works, transform the automotive industry and positively impact others on a global scale, then Toyota Connected is the right place for you! Within our collaborative, fast-paced environment we focus on continual improvement and work in a highly iterative way to deliver exceptional value in the form of connected products and services that wow and delight our customers and the world around us. Come help us re-imagine what mobility can be today and for years to come! About the Team: Toyota Connected India is looking for Lead Engineer, DevOps. This team is focused on creating infotainment solutions on embedded and cloud platforms. The team members are required to be creative in solving problems, excited to work in new technology areas and be ready to wear multiple hats to get things done. This is a highly energized, fast-paced, innovative and collaborative startup environment; therefore, it is essential that not only the skillset, but also the personality matches such an environment. Responsibilities: Hands-on experience with cloud platforms such as AWS, or Google Cloud Platform. Strong expertise in containerization (e.g., Docker) and Kubernetes for container orchestration. Experience with infrastructure automation and configuration management tools like Terraform, CloudFormation, Ansible, or similar. Proficient in scripting languages such as Python, Bash, or Go. Experience with monitoring and logging solutions such as Prometheus, Grafana, ELK Stack, or Datadog. Knowledge of networking concepts, security best practices, and infrastructure monitoring. Strong experience with CI/CD tools such as Jenkins, GitLab CI, CircleCI, Travis CI, or similar. What’s in it for you? Top-of-the-line compensation! You'll be treated like the professional we know you are and left to manage your own time and workload. Yearly gym membership reimbursement & Free catered lunches. No dress codes! We trust you are responsible enough to choose what’s appropriate to wear for the day. Opportunity to build products that improve the safety and convenience of millions of customers. New cool office space and other awesome benefits! Our Core Values: EPIC Empathetic : We begin making decisions by looking at the world from the perspective of our customers, teammates, and partners. Passionate: We are here to build something great, not just for the money. We are always looking to improve the experience of our millions of customers. Innovative : We experiment with ideas to get to the best solution. Any constraint is a challenge, and we love looking for creative ways to solve them. Collaborative: When it comes to people, we think the whole is greater than its parts and that everyone has a role to play in the success! To know more about us, check out our glass door page - https://www.glassdoor.co.in/Reviews/TOYOTA-Connect
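As a small illustration of the Prometheus-based monitoring this listing mentions, a Python sketch using the prometheus_client library; metric names, labels, and the scrape port are placeholders rather than details from the posting:

```python
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

REQUESTS = Counter("app_requests_total", "Total requests handled", ["endpoint"])
LATENCY = Histogram("app_request_latency_seconds", "Request latency in seconds")

def handle_request():
    with LATENCY.time():               # record how long the simulated work takes
        time.sleep(random.uniform(0.01, 0.1))
    REQUESTS.labels(endpoint="/status").inc()

if __name__ == "__main__":
    start_http_server(9100)            # Prometheus scrapes /metrics on this port
    while True:
        handle_request()
```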

Posted 1 day ago

Apply

7.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Job Summary: Synechron is seeking a highly skilled Full Stack Developer to join our dynamic team. In this role, you will contribute to designing, developing, and optimizing data-driven applications that leverage modern technologies such as Elasticsearch, Angular, Java, and AI/ML APIs. You will play a key part in building scalable, efficient solutions that support our clients’ strategic objectives, ensuring seamless integration and performance. This position offers an opportunity to work across the full technology stack within a collaborative environment committed to continuous innovation. Software Requirements Required Skills: Angular (v10+): Experience in building responsive and user-centric front-end applications. Java (Java 8 or higher): Proficiency in developing scalable backend services and APIs. Elasticsearch (v6+): Strong experience with search implementation and data indexing functionalities. Alteryx: Proven ability to automate workflows and prepare data efficiently in a professional setting. Bedrock & OpenAI APIs: Familiarity with integrating cloud-based AI/ML models for intelligent features. RESTful API Design and Integration: Knowledge of designing and consuming APIs for seamless data flow. Preferred Skills: Cloud Platforms (AWS preferred): Deployment and management of applications in cloud environments. DevOps Tools: CI/CD pipelines, containerization (Docker, Kubernetes). Additional frameworks/libraries such as Redux, RxJS for Angular; Spring Boot for Java are advantageous. Overall Responsibilities Develop and maintain front-end applications with Angular, ensuring high usability and performance. Build and optimize backend services in Java to support scalable data processing and retrieval. Integrate Elasticsearch to facilitate advanced search functionalities and efficient data indexing. Automate complex data workflows using Alteryx to streamline data pipelines. Connect to and leverage Bedrock and OpenAI APIs for AI/ML-enabled features and enhancements. Collaborate with cross-functional teams to deliver high-quality, integrated solutions aligned with strategic goals. Conduct code reviews, testing, and documentation to ensure code quality and maintainability. Stay current with emerging technologies to continuously improve development practices. Technical Skills (By Category) Programming Languages: Required: Java (Java 8+), TypeScript/JavaScript Preferred: Knowledge of Python or other scripting languages for AI integrations Databases/Data Management: Elasticsearch (search/indexing) Familiarity with SQL and NoSQL databases Cloud Technologies: AWS (preferred) for deployment, storage, and cloud services Frameworks and Libraries: Angular (React is a plus) Java-based frameworks such as Spring Boot (preferred) AI/ML APIs: Bedrock, OpenAI Development Tools & Methodologies: Version Control: Git, Bitbucket CI/CD Tools: Jenkins, GitLab CI (preferred) Containerization: Docker, Kubernetes (preferred) Agile/Scrum development practices Security Protocols: Understanding of secure coding and data protection best practices Experience Requirements 7+ years of professional experience in full-stack development roles. Demonstrable experience with Elasticsearch, Angular, Java, and AI/ML API integrations. Proven track record of developing scalable, data-oriented applications. Industry experience in financial services, technology consulting, or enterprise solutions is advantageous. 
Candidates with a breadth of experience across multiple domains and alternative pathways demonstrating relevant skills are encouraged to apply. Day-to-Day Activities Collaborate with product owners, UX/UI designers, and backend teams to develop new features and enhancements. Write clean, efficient, and well-documented code, performing unit and integration testing. Conduct regular code reviews and participate in team planning sessions. Troubleshoot, debug, and optimize existing applications for performance and reliability. Implement AI/ML features by integrating Bedrock and OpenAI APIs, ensuring ethical and compliant use. Participate in stand-ups, sprint planning, and retrospectives within an Agile environment. Contribute to continuous improvement initiatives, including automation and architectural refinement. Qualifications Bachelor's or Master’s degree in Computer Science, Engineering, or a related field; equivalent professional experience will also be considered. Certifications in cloud platforms (AWS Certified Solutions Architect, etc.) or specific technologies are a plus. Ongoing professional development in emerging technologies such as AI, cloud computing, or DevOps. Professional Competencies Strong problem-solving and analytical capabilities with an emphasis on data-driven decision making. Effective communication skills for engaging with diverse stakeholder groups. Ability to work collaboratively within a team and independently manage tasks. Adaptability to evolving project requirements and emerging technologies. Innovative mindset, with a focus on delivering practical, scalable solutions. Excellent time management skills, prioritizing tasks effectively to meet deadlines. SYNECHRON’S DIVERSITY & INCLUSION STATEMENT Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and is an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative ‘Same Difference’ is committed to fostering an inclusive culture – promoting equality, diversity and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, race, ethnicities, religion, age, marital status, gender, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements and individual qualifications, without regard to the applicant’s gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
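The Elasticsearch indexing and search integration called out in this listing might look like the minimal Python sketch below, assuming the v8 elasticsearch client; the connection URL, index, and field names are illustrative only:

```python
from elasticsearch import Elasticsearch

# Connection details and index/field names are placeholders
es = Elasticsearch("http://localhost:9200")

# Index a document
es.index(index="trades", id="1", document={
    "instrument": "EUR/USD",
    "desk": "fx",
    "notional": 1_000_000,
})

# Run a simple full-text match query and print the hits
resp = es.search(index="trades", query={"match": {"desk": "fx"}})
for hit in resp["hits"]["hits"]:
    print(hit["_source"])
```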

Posted 1 day ago

Apply

2.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Opentext - The Information Company OpenText is a global leader in information management, where innovation, creativity, and collaboration are the key components of our corporate culture. As a member of our team, you will have the opportunity to partner with the most highly regarded companies in the world, tackle complex issues, and contribute to projects that shape the future of digital transformation. AI-First. Future-Driven. Human-Centered. At OpenText, AI is at the heart of everything we do—powering innovation, transforming work, and empowering digital knowledge workers. We're hiring talent that AI can't replace to help us shape the future of information management. Join us. The Impact OpenText is a global leader in data protection and security solutions, serving SMBs and consumers. Our mission is to simplify and strengthen cyber resilience across a connected world. We are seeking a skilled Software Quality Engineer to support the development and maintenance of our Server Backup product line. What The Role Offers Design and execute comprehensive quality control tests to ensure software meets defined standards and user requirements. Collaborate with stakeholders to gather requirements and develop test strategies and scenarios. Create test plans and schedules, and estimate testing efforts. Develop and maintain automated test scripts and environments. Design diagnostic programs, test fixtures, and procedures; document test results. Perform systematic debugging and apply quality standards throughout the product lifecycle. Leverage company provided AI tools to improve productivity of day-to-day task Come up with innovative ideas and discuss with PM to incorporate them into the product What You Need To Succeed Bachelor’s or Master’s degree in Computer Science or related field. Minimum 2 years of experience in enterprise software QA. Proficient in Windows environments and web application testing across browsers. Experienced with REST API testing, manual and automated testing. Skilled in Java or C# automation using Selenium, NUnit/JUnit, SpecFlow, PowerShell. Strong analytical and reporting skills. Solid understanding of QA methodologies and software development lifecycle. Familiarity with tools like Jira and GitLab. Strong problem-solving abilities and attention to detail. One Last Thing Knowledge of HTML, CSS, JavaScript, JSON, and HTTP. Experience with Docker and containerization technologies. Exposure to AWS Cloud Platform. In-depth understanding of Windows OS. OpenText's efforts to build an inclusive work environment go beyond simply complying with applicable laws. Our Employment Equity and Diversity Policy provides direction on maintaining a working environment that is inclusive of everyone, regardless of culture, national origin, race, color, gender, gender identification, sexual orientation, family status, age, veteran status, disability, religion, or other basis protected by applicable laws. If you need assistance and/or a reasonable accommodation due to a disability during the application or recruiting process, please contact us at hr@opentext.com. Our proactive approach fosters collaboration, innovation, and personal growth, enriching OpenText's vibrant workplace.

Posted 1 day ago

Apply

0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Summary Position Summary Job title: CI/CD Pipeline – Senior Consultant About At Deloitte, we do not offer you just a job, but a career in the highly sought-after risk Management field. We are one of the business leaders in the risk market. We work with a vision to make the world more prosperous, trustworthy, and safe. Deloitte’s clients, primarily based outside of India, are large, complex organizations that constantly evolve and innovate to build better products and services. In the process, they encounter various risks and the work we do to help them address these risks is increasingly important to their success—and to the strength of the economy and public security. By joining us, you will get to work with diverse teams of professionals who design, manage, and implement risk-centric solutions across a variety of domains. In the process, you will gain exposure to the risk-centric challenges faced in today’s world by organizations across a range of industry sectors and become subject matter experts in those areas. Our Risk and Financial Advisory services professionals help organizations effectively navigate business risks and opportunities—from strategic, reputation, and financial risks to operational, cyber, and regulatory risks—to gain competitive advantage. We apply our experience in ongoing business operations and corporate lifecycle events to help clients become stronger and more resilient. Our market-leading teams help clients embrace complexity to accelerate performance, disrupt through innovation, and lead in their industries. We use cutting-edge technology like AI/ML techniques, analytics, and RPA to solve Deloitte’s clients ‘most complex issues. Working in Risk and Financial Advisory at Deloitte US-India offices has the power to redefine your ambitions. The Team Cyber & Strategic Risk Deloitte’s CI/CD pipeline services are designed to help organizations accelerate software delivery, improve quality, and enhance agility by automating and optimizing the software development lifecycle. Work you’ll do Roles & Responsibilities: Proven hands-on experience designing, building, and maintaining CI/CD pipelines using tools such as Jenkins, GitLab CI/CD, Azure DevOps, or similar platforms. Strong knowledge of version control systems, especially Git, including branching, merging, and pull request workflows. Experience with scripting languages (e.g., Python, Bash, PowerShell) for automation tasks. Familiarity with configuration management tools (e.g., Ansible, Chef, Puppet). Experience with containerization (Docker) and orchestration (Kubernetes) technologies. Working knowledge of cloud platforms (AWS, Azure, or Google Cloud) and their CI/CD toolsets. Experience integrating automated testing and security scanning into pipelines. Familiarity with monitoring and logging tools (e.g., ELK Stack, Prometheus, Grafana). Required Skills Deep experience with multiple cloud providers (AWS, Azure, GCP), especially leveraging native CI/CD and DevOps services. Familiarity with hybrid or multi-cloud deployment strategies. Familiarity with deploying, managing, and scaling applications on Red Hat OpenShift, including working with OpenShift’s integrated CI/CD capabilities, container orchestration, and security features. Hands-on experience configuring, maintaining, and optimizing Jenkins pipelines for automated build, test, and deployment processes. Ability to integrate Jenkins with source control, artifact repositories, and other DevOps tools. 
Hands-on experience with Terraform, AWS CloudFormation, or Azure Resource Manager for automating infrastructure provisioning and management. Knowledge of DevSecOps principles, including integrating security tools (e.g., Snyk, Aqua, Checkmarx) into the pipeline. Skills in scripting languages (e.g., Groovy for Jenkins, Bash, Python) to customize pipeline steps and automate repetitive tasks. Experience with secrets management tools (e.g., HashiCorp Vault, AWS Secrets Manager). Experience with automated performance, integration, and security testing frameworks. Familiarity with test-driven development (TDD) and behavior-driven development (BDD) practices. Qualification Bachelor’s degree in Computer Science, Information Technology, Engineering, or a related field (or equivalent work experience). Relevant certifications are a plus (e.g., AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, Certified Kubernetes Administrator). Ability to follow coding standards and best practices as documented in team repositories. Willingness to perform code reviews and contribute to continuous improvement of pipeline processes. Experience onboarding and mentoring other team members on CI/CD best practices Good to have: Deep experience with multiple cloud providers (AWS, Azure, GCP), especially leveraging native CI/CD and DevOps services. Familiarity with hybrid or multi-cloud deployment strategies. Familiarity with deploying, managing, and scaling applications on Red Hat OpenShift, including working with OpenShift’s integrated CI/CD capabilities, container orchestration, and security features. Hands-on experience configuring, maintaining, and optimizing Jenkins pipelines for automated build, test, and deployment processes. Ability to integrate Jenkins with source control, artifact repositories, and other DevOps tools. Hands-on experience with Terraform, AWS CloudFormation, or Azure Resource Manager for automating infrastructure provisioning and management. Knowledge of DevSecOps principles, including integrating security tools (e.g., Snyk, Aqua, Checkmarx) into the pipeline. Skills in scripting languages (e.g., Groovy for Jenkins, Bash, Python) to customize pipeline steps and automate repetitive tasks. Experience with secrets management tools (e.g., HashiCorp Vault, AWS Secrets Manager). Experience with automated performance, integration, and security testing frameworks. Familiarity with test-driven development (TDD) and behavior-driven development (BDD) practices. How You’ll Grow At Deloitte, we’ve invested a great deal to create a rich environment in which our professionals can grow. We want all our people to develop in their own way, playing to their own strengths as they hone their leadership skills. And, as a part of our efforts, we provide our professionals with a variety of learning and networking opportunities—including exposure to leaders, sponsors, coaches, and challenging assignments—to help accelerate their careers along the way. No two people learn in the same way. So, we provide a range of resources including live classrooms, team-based learning, and eLearning. DU: The Leadership Center in India, our state-of-the-art, world-class learning Center in the Hyderabad offices is an extension of the Deloitte University (DU) in Westlake, Texas, and represents a tangible symbol of our commitment to our people’s growth and development. Explore DU: The Leadership Center in India . Deloitte’s culture Our positive and supportive culture encourages our people to do their best work every day. 
We celebrate individuals by recognizing their uniqueness and offering them the flexibility to make daily choices that can help them to be healthy, centered, confident, and aware. Deloitte is committed to achieving diversity within its workforce, and encourages all qualified applicants to apply, irrespective of gender, age, sexual orientation, disability, culture, religious and ethnic background. We offer well-being programs and are continuously looking for new ways to maintain a culture that is inclusive, invites authenticity, leverages our diversity, and where our people excel and lead healthy, happy lives. Learn more about Life at Deloitte. Corporate citizenship Deloitte is led by a purpose: to make an impact that matters. This purpose defines who we are and extends to relationships with Deloitte’s clients, our people and our communities. We believe that business has the power to inspire and transform. We focus on education, giving, skill-based volunteerism, and leadership to help drive positive social impact in our communities. Learn more about Deloitte’s impact on the world. Recruiting tips Finding the right job and preparing for the recruitment process can be tricky. Check out tips from our Deloitte recruiting professionals to set yourself up for success. Check out recruiting tips from Deloitte recruiters . Benefits We believe that to be an undisputed leader in professional services, we should equip you with the resources that can make a positive impact on your well-being journey. Our vision is to create a leadership culture focused on the development and well-being of our people. Here are some of our benefits and programs to support you and your family’s well-being needs. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you . Our people and culture Our people and our culture make Deloitte a place where leaders thrive. Get an inside look at the rich diversity of background, education, and experiences of our people. What impact will you make? Check out our professionals’ career journeys and be inspired by their stories. Professional development You want to make an impact. And we want you to make it. We can help you do that by providing you the culture, training, resources, and opportunities to help you grow and succeed as a professional. Learn more about our commitment to developing our people . © 2025. See Terms of Use for more information. Deloitte refers to one or more of Deloitte Touche Tohmatsu Limited, a UK private company limited by guarantee ("DTTL"), its network of member firms, and their related entities. DTTL and each of its member firms are legally separate and independent entities. DTTL (also referred to as "Deloitte Global") does not provide services to clients. In the United States, Deloitte refers to one or more of the US member firms of DTTL, their related entities that operate using the "Deloitte" name in the United States and their respective affiliates. Certain services may not be available to attest clients under the rules and regulations of public accounting. Please see www.deloitte.com/about to learn more about our global network of member firms. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. 
Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Our people and culture Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work. Professional development At Deloitte, professionals have the opportunity to work with some of the best and discover what works best for them. Here, we prioritize professional growth, offering diverse learning and networking opportunities to help accelerate careers and enhance leadership skills. Our state-of-the-art DU: The Leadership Center in India, located in Hyderabad, represents a tangible symbol of our commitment to the holistic growth and development of our people. Explore DU: The Leadership Center in India . Benefits To Help You Thrive At Deloitte, we know that great people make a great organization. Our comprehensive rewards program helps us deliver a distinctly Deloitte experience that helps that empowers our professionals to thrive mentally, physically, and financially—and live their purpose. To support our professionals and their loved ones, we offer a broad range of benefits. Eligibility requirements may be based on role, tenure, type of employment and/ or other criteria. Learn more about what working at Deloitte can mean for you. Recruiting tips From developing a stand out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters. Requisition code: 308709

Posted 1 day ago

Apply

7.0 - 9.0 years

0 Lacs

Mohali district, India

On-site

We are hiring for one of our big MNC clients. Job Title: Python Developer Location: Mohali (Work from Office) Experience: 3-6 years Employment Type: Full-Time Working Model: On-site About the Role: We are looking for a passionate and experienced Python Developer with 7-9 years of hands-on development experience to join our engineering team. You will play a key role in designing, developing, and maintaining scalable backend systems and APIs that power our products and services. Key Responsibilities: Design, develop, and maintain efficient, reusable, and reliable Python code. Build RESTful APIs and microservices using frameworks like FastAPI, Flask, or Django. Integrate with relational and non-relational databases (e.g., PostgreSQL, MySQL, MongoDB). Write unit and integration tests to ensure software quality. Collaborate with DevOps teams for CI/CD and containerization (e.g., Docker, Kubernetes). Participate in code reviews, design discussions, and Agile ceremonies. Optimize applications for performance and scalability. Required Skills & Qualifications: BTech / BE / MCA or equivalent in Computer Science or a related field. 4–5 years of professional Python development experience. Strong grasp of OOP, design patterns, and Pythonic code practices. Experience with at least one Python web framework (FastAPI preferred). Hands-on experience with SQL and/or NoSQL databases. Familiarity with Git and GitHub/GitLab workflows. Experience with message queues like RabbitMQ, Kafka, or Redis Pub/Sub is a plus. Understanding of the software development lifecycle, Agile methodologies, and CI/CD pipelines. Preferred Qualifications (Good to Have): Exposure to cloud platforms (AWS, GCP, or Azure). Experience with Node.js or similar technologies. Working knowledge of containerization and orchestration tools (Docker, Kubernetes). Experience integrating third-party APIs and services. Prior experience in QSR, e-commerce, or real-time applications is a plus. Soft Skills: Strong communication and collaboration skills. Self-motivated, proactive, and capable of working in a fast-paced environment. Ability to work independently and in a team. Why Join Us? Opportunity to work on cutting-edge tech and challenging problems. Flexible work environment and supportive team. Competitive salary and performance-based bonuses.
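A minimal example of the kind of REST API work this role describes, sketched in Python with FastAPI and Pydantic; the resource model and routes are illustrative only, not part of the listing:

```python
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()

class Order(BaseModel):
    item: str
    quantity: int

# In-memory store used only for illustration; a real service would use a database
ORDERS: dict[int, Order] = {}

@app.post("/orders/{order_id}")
def create_order(order_id: int, order: Order):
    ORDERS[order_id] = order
    return {"order_id": order_id, "order": order}

@app.get("/orders/{order_id}")
def get_order(order_id: int):
    if order_id not in ORDERS:
        raise HTTPException(status_code=404, detail="order not found")
    return ORDERS[order_id]

# Run locally with: uvicorn main:app --reload
```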

Posted 1 day ago

Apply

0 years

0 Lacs

Mohali district, India

On-site

Skill Sets: Expertise in ML/DL, model lifecycle management, and MLOps (MLflow, Kubeflow) Proficiency in Python, TensorFlow, PyTorch, Scikit-learn, and Hugging Face models Strong experience in NLP, fine-tuning transformer models, and dataset preparation Hands-on with cloud platforms (AWS, GCP, Azure) and scalable ML deployment (SageMaker, Vertex AI) Experience in containerization (Docker, Kubernetes) and CI/CD pipelines Knowledge of distributed computing (Spark, Ray), vector databases (FAISS, Milvus), and model optimization (quantization, pruning) Familiarity with model evaluation, hyperparameter tuning, and model monitoring for drift detection
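
As a hedged illustration of the MLOps tooling this listing names (MLflow with scikit-learn), here is a minimal experiment-tracking sketch; the run name, parameters, and dataset are placeholders, not a prescribed workflow.

```python
# Minimal MLflow tracking sketch for the model-lifecycle work mentioned above.
# Assumes mlflow and scikit-learn are installed; all names are illustrative.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="demo-rf"):
    params = {"n_estimators": 100, "max_depth": 5}
    model = RandomForestClassifier(**params).fit(X_train, y_train)
    acc = accuracy_score(y_test, model.predict(X_test))

    mlflow.log_params(params)                  # hyperparameters, for later comparison
    mlflow.log_metric("accuracy", acc)         # evaluation metric, useful for drift checks
    mlflow.sklearn.log_model(model, "model")   # versioned artifact for deployment
```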

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Mandatory Skills (5+ years): Java, Spring Boot, ReactJS, JavaScript, any cloud (AWS preferred). Preferred: Immediate joiners. We are looking for a talented and experienced Full Stack Developer with 5+ years of experience and strong expertise in Java, RESTful APIs, ReactJS, and JavaScript. The ideal candidate will be responsible for designing and developing scalable web applications, ensuring seamless integration between front-end and back-end components. Key Responsibilities: Backend Development (Java): • Design, develop, and maintain scalable RESTful APIs using Java and Spring Boot. • Implement business logic, data access layers, and integration with external systems. • Ensure application performance, security, and scalability. • Write unit and integration tests to ensure code quality. Frontend Development (React + JavaScript): • Build responsive and interactive user interfaces using ReactJS and modern JavaScript (ES6+). • Work with state management libraries like Redux or Context API. • Ensure cross-browser compatibility and mobile responsiveness. • Integrate front-end components with RESTful APIs. General Responsibilities: • Collaborate with cross-functional teams including UI/UX designers, QA, and DevOps. • Participate in Agile ceremonies such as sprint planning, daily stand-ups, and retrospectives. • Conduct code reviews and provide constructive feedback. • Troubleshoot and resolve technical issues across the stack. • Stay updated with the latest trends and best practices in full stack development. Required Skills: • Strong proficiency in Java, Spring Boot, and RESTful API development. • Solid experience with ReactJS, JavaScript (ES6+), HTML5, and CSS3. • Familiarity with version control systems like Git. • Good understanding of software development lifecycle and Agile methodologies. Preferred Skills (Nice to Have): • Experience with MySQL or other relational databases. • Exposure to AWS or other cloud platforms. • Familiarity with CI/CD pipelines and containerization tools like Docker.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description You will work with diverse datasets, from structured logs to unstructured events, to build intelligent systems for event correlation, root cause analysis, predictive maintenance, and autonomous remediation, ultimately driving significant operational efficiencies and improving service availability. This position requires a blend of deep technical expertise in machine learning, a strong understanding of IT operations, and a commitment to operationalizing AI solutions at scale. Responsibilities As a Senior Data Scientist, your responsibilities will include, but are not limited to: Machine Learning Solution Development: Design, develop, and implement advanced machine learning models (supervised and unsupervised) to solve complex IT Operations problems, including Event Correlation, Anomaly Detection, Root Cause Analysis, Predictive Analytics, and Auto-Remediation. Leverage structured and unstructured datasets, performing extensive feature engineering and data preprocessing to optimize model performance. Apply strong statistical modeling, hypothesis testing, and experimental design principles to ensure rigorous model validation and reliable insights. AI/ML Product & Platform Development: Lead the end-to-end development of Data Science products, from conceptualization and prototyping to deployment and maintenance. Develop and deploy AI Agents for automating workflows in IT operations, particularly within Networks and CyberSecurity domains. Implement RAG (Retrieval Augmented Generation) based retrieval frameworks for state-of-the-art models to enhance contextual understanding and response generation. Adopt AI to detect and redact sensitive data in logs, and implement central data tagging for all logs to improve AI Model performance and governance. MLOps & Deployment: Drive the operationalization of machine learning models through robust MLOps/LLMOps practices, ensuring scalability, reliability, and maintainability. Implement models as a service via APIs, utilizing containerization technologies (Docker, Kubernetes) for efficient deployment and management. Design, build, and automate resilient Data Pipelines in cloud environments (GCP/Azure) using AI Agents and relevant cloud services. Cloud & DevOps Integration: Integrate data science solutions with existing IT infrastructure and AIOps platforms (e.g., IBM Cloud Paks, Moogsoft, BigPanda, Dynatrace). Enable and optimize AIOps features within Data Analytics tools, Monitoring tools, or dedicated AIOps platforms. Champion DevOps practices, including CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions), infrastructure-as-code (Terraform, Ansible, CloudFormation), and automation to streamline development and deployment workflows. Performance & Reliability: Monitor and optimize platform performance, ensuring systems are running efficiently and meeting defined Service Level Agreements (SLAs). Lead incident management efforts related to data science systems and implement continuous improvements to enhance reliability and resilience. Leadership & Collaboration: Translate complex business problems into data science solutions, understanding their strategic implications and potential business value. Collaborate effectively with cross-functional teams including engineering, product management, and operations to define project scope, requirements, and success metrics. Mentor junior data scientists and engineers, fostering a culture of technical excellence, continuous learning, and innovation. 
Clearly articulate complex technical concepts, findings, and recommendations to both technical and non-technical audiences, influencing decision-making and driving actionable outcomes. Best Practices: Uphold best engineering practices, including rigorous code reviews, comprehensive testing, and thorough documentation. Maintain a strong focus on building maintainable, scalable, and secure systems. Qualifications Education: Bachelor's or Master's in Computer Science, Data Science, Artificial Intelligence, Machine Learning, Statistics, or a related quantitative field. Experience: 8+ years of IT experience and 5+ years of progressive experience as a Data Scientist, with a significant focus on applying ML/AI in IT Operations, AIOps, or a related domain. Proven track record of building and deploying machine learning models into production environments. Demonstrated experience with MLOps/LLMOps principles and tools. Experience with designing and implementing microservices and serverless architectures. Hands-on experience with containerization technologies (Docker, Kubernetes). Technical Skills: Programming: Proficiency in at least one major programming language, preferably Python, sufficient to effectively communicate with and guide engineering teams. (Java is also a plus). Machine Learning: Strong theoretical and practical understanding of various ML algorithms (e.g., classification, regression, clustering, time-series analysis, deep learning) and their application to IT operational data. Cloud Platforms: Expertise with Google Cloud Platform (GCP) services is highly preferred, including Dataflow, Pub/Sub, Cloud Logging, Compute Engine, Kubernetes Engine, Cloud Functions, BigQuery, Cloud Storage, and Vertex AI. Experience with other major cloud providers (AWS, Azure) is also valuable. DevOps & Tools: Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Familiarity with infrastructure-as-code tools (e.g., Terraform, Ansible, CloudFormation). AIOps/Observability: Knowledge of AIOps platforms such as IBM Cloud Paks, Moogsoft, BigPanda, Dynatrace, etc. Experience with log analytics platforms and data tagging strategies. Soft Skills: Exceptional analytical and problem-solving skills, with a track record of tackling ambiguous and complex challenges independently. Strong communication and presentation skills, with the ability to articulate complex technical concepts and findings to diverse audiences and influence stakeholders. Ability to take end-to-end ownership of data science projects. Commitment to best engineering practices, including code reviews, testing, and documentation. A strong desire to stay current with the latest advancements in AI, ML, and cloud technologies.
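
As a hedged sketch of the anomaly-detection work this listing describes, and not the hiring team's actual models, the snippet below runs an unsupervised IsolationForest over synthetic operational metrics; the column names, distributions, and contamination setting are assumptions.

```python
# Minimal sketch: unsupervised anomaly detection on synthetic IT-operations metrics.
# Real AIOps pipelines would use production telemetry, feature engineering, and tuning.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Synthetic per-minute metrics: latency (ms) and error rate, with a few injected spikes.
metrics = pd.DataFrame({
    "latency_ms": np.concatenate([rng.normal(120, 10, 500), rng.normal(400, 30, 5)]),
    "error_rate": np.concatenate([rng.normal(0.01, 0.005, 500), rng.normal(0.2, 0.02, 5)]),
})

model = IsolationForest(contamination=0.01, random_state=0)
metrics["anomaly"] = model.fit_predict(metrics[["latency_ms", "error_rate"]]) == -1

# Flagged rows become candidate incidents for downstream root-cause analysis.
print(metrics[metrics["anomaly"]].head())
```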

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description You will work with diverse datasets, from structured logs to unstructured events, to build intelligent systems for event correlation, root cause analysis, predictive maintenance, and autonomous remediation, ultimately driving significant operational efficiencies and improving service availability. This position requires a blend of deep technical expertise in machine learning, a strong understanding of IT operations, and a commitment to operationalizing AI solutions at scale. Responsibilities As a Senior Data Engineer, your responsibilities will include, but are not limited to: Machine Learning Solution Development: Design, develop, and implement advanced machine learning models (supervised and unsupervised) to solve complex IT Operations problems, including Event Correlation, Anomaly Detection, Root Cause Analysis, Predictive Analytics, and Auto-Remediation. Leverage structured and unstructured datasets, performing extensive feature engineering and data preprocessing to optimize model performance. Apply strong statistical modeling, hypothesis testing, and experimental design principles to ensure rigorous model validation and reliable insights. AI/ML Product & Platform Development: Lead the end-to-end development of Data Science products, from conceptualization and prototyping to deployment and maintenance. Develop and deploy AI Agents for automating workflows in IT operations, particularly within Networks and CyberSecurity domains. Implement RAG (Retrieval Augmented Generation) based retrieval frameworks for state-of-the-art models to enhance contextual understanding and response generation. Adopt AI to detect and redact sensitive data in logs, and implement central data tagging for all logs to improve AI Model performance and governance. MLOps & Deployment: Drive the operationalization of machine learning models through robust MLOps/LLMOps practices, ensuring scalability, reliability, and maintainability. Implement models as a service via APIs, utilizing containerization technologies (Docker, Kubernetes) for efficient deployment and management. Design, build, and automate resilient Data Pipelines in cloud environments (GCP/Azure) using AI Agents and relevant cloud services. Cloud & DevOps Integration: Integrate data science solutions with existing IT infrastructure and AIOps platforms (e.g., IBM Cloud Paks, Dynatrace). Enable and optimize AIOps features within Data Analytics tools, Monitoring tools, or dedicated AIOps platforms. Champion DevOps practices, including CI/CD pipelines (Jenkins, GitLab CI, GitHub Actions), infrastructure-as-code (Terraform, Ansible, CloudFormation), and automation to streamline development and deployment workflows. Performance & Reliability: Monitor and optimize platform performance, ensuring systems are running efficiently and meeting defined Service Level Agreements (SLAs). Lead incident management efforts related to data science systems and implement continuous improvements to enhance reliability and resilience. Leadership & Collaboration: Translate complex business problems into data science solutions, understanding their strategic implications and potential business value. Collaborate effectively with cross-functional teams including engineering, product management, and operations to define project scope, requirements, and success metrics. Mentor junior data scientists and engineers, fostering a culture of technical excellence, continuous learning, and innovation.
Clearly articulate complex technical concepts, findings, and recommendations to both technical and non-technical audiences, influencing decision-making and driving actionable outcomes. Best Practices: Uphold best engineering practices, including rigorous code reviews, comprehensive testing, and thorough documentation. Maintain a strong focus on building maintainable, scalable, and secure systems. Qualifications Education: Master's or Ph.D. in Computer Science, Data Science, Artificial Intelligence, Machine Learning, Statistics, or a related quantitative field. Experience: 8+ years of progressive experience as a Data Scientist, with a significant focus on applying ML/AI in IT Operations, AIOps, or a related domain. Proven track record of building and deploying machine learning models into production environments. Demonstrated experience with MLOps/LLMOps principles and tools. Experience with designing and implementing microservices and serverless architectures. Hands-on experience with containerization technologies (Docker, Kubernetes). Technical Skills: Programming: Proficiency in at least one major programming language, preferably Python, sufficient to effectively communicate with and guide engineering teams. (Java is also a plus). Machine Learning: Strong theoretical and practical understanding of various ML algorithms (e.g., classification, regression, clustering, time-series analysis, deep learning) and their application to IT operational data. Cloud Platforms: Expertise with Google Cloud Platform (GCP) services is highly preferred, including Dataflow, Pub/Sub, Cloud Logging, Compute Engine, Kubernetes Engine, Cloud Functions, BigQuery, Cloud Storage, and Vertex AI. Experience with other major cloud providers (AWS, Azure) is also valuable. DevOps & Tools: Experience with CI/CD pipelines (e.g., Jenkins, GitLab CI, GitHub Actions). Familiarity with infrastructure-as-code tools (e.g., Terraform, Ansible, CloudFormation). AIOps/Observability: Knowledge of AIOps platforms such as IBM Cloud Paks, Moogsoft, BigPanda, Dynatrace, etc. Experience with log analytics platforms and data tagging strategies. Soft Skills: Exceptional analytical and problem-solving skills, with a track record of tackling ambiguous and complex challenges independently. Strong communication and presentation skills, with the ability to articulate complex technical concepts and findings to diverse audiences and influence stakeholders. Ability to take end-to-end ownership of data science projects. Commitment to best engineering practices, including code reviews, testing, and documentation. A strong desire to stay current with the latest advancements in AI, ML, and cloud technologies.

Posted 1 day ago

Apply

8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Role Overview: We are looking for a Cloud & DevOps Engineer to manage cloud deployments, automate CI/CD pipelines, and ensure smooth development-to-production workflows. Experience: 8+ years of relevant experience in DevOps / Cloud engineering Key Responsibilities: Manage and deploy cloud infrastructure (AWS preferred) Build and maintain CI/CD pipelines using Jenkins and GitHub Implement automation using Ansible and manage code quality with SonarQube Collaborate with development teams to ensure smooth releases Monitor, troubleshoot, and optimize cloud and on-prem deployments Provide innovative solutions to complex business/technical problems Identify and resolve technical, integration and development issues Implement best practices, standards and processes to ensure quality of the final product Participate in status meetings and provide regular updates Required Skills: Technical: Strong expertise in AWS cloud services Hands-on experience with CI/CD tools: Git, GitHub, and Jenkins Proficiency in automation and configuration management using Ansible Experience in implementing code quality and security checks using SonarQube Desired Skills: Good knowledge of Linux/Unix for deployment and troubleshooting Experience working in Agile environments – Scrum and/or SAFe frameworks Exposure to DevOps culture and practices Understanding of software engineering lifecycle – Development, Testing, and Production Support Experience in production issue handling and transformation support management Good to Have Skills: Exposure to containerization (Docker/Kubernetes)

Posted 1 day ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

P2-C3-TSTS AWS CDK with TypeScript, CloudFormation templates. AWS services such as Redshift (executing grants, stored procedures, queries, and Redshift Spectrum to query S3), Glue (execution roles, job debugging), IAM role creation with fine-grained access, integration and deployment, KMS keys (CMK and DEK), Secrets Manager, Airflow DAG creation and execution, SFTP, AWS Lambda serverless execution and debugging, S3 object storage (lifecycle configuration, resource-based policies, encryption), and event triggers using Lambda and EventBridge rules. Knowledge of working with AWS Redshift SQL Workbench to execute grants. Strong understanding of networking concepts, security, and cloud architecture. Experience with monitoring tools such as CloudWatch, Prometheus, or similar. Familiarity with containerization (Docker, Kubernetes) is a plus. Excellent problem-solving skills and ability to work in a fast-paced environment. Redshift SQL
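
For illustration of the serverless pattern named above (Lambda triggered via EventBridge rules, reading from S3), a minimal Python handler sketch; the event shape follows the S3 "Object Created" EventBridge event, and the bucket/key fallbacks are hypothetical.

```python
# Minimal sketch of a Python Lambda handler invoked by an EventBridge rule.
# Illustrative only; bucket and key names are placeholders.
import json
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    # EventBridge delivers the matched event as a JSON document in `event`.
    detail = event.get("detail", {})
    bucket = detail.get("bucket", {}).get("name", "example-bucket")
    key = detail.get("object", {}).get("key", "example/key.json")

    # Read the referenced S3 object and log a summary (stand-in for real processing).
    body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()
    print(json.dumps({"bucket": bucket, "key": key, "bytes": len(body)}))
    return {"status": "ok"}
```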

Posted 1 day ago

Apply

5.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description We are looking for a Performance Engineer with 5 to 7 years of strong experience in the skills below: 5 to 7 years of experience with JMeter performing end-to-end performance testing of software applications Strong experience in developing JMeter scripts for testing web-based applications built using Angular or React with Java, Python, or .NET backends Strong hands-on skills in Java and/or Python programming languages Fair understanding of AI, especially prompt engineering using LLMs or APIs, including a good working knowledge of integrating AI features with day-to-day automation tasks DevOps experience, especially integrating load-test tools with pipelines (using CI/CD tools such as Tekton, Cloud Build, or GitHub Actions) and containerizing testing tools using platforms like Docker and Kubernetes Expertise in production log analysis for workload modelling, including the ability to analyze client- and server-side metrics to validate application performance Deep understanding of Dynatrace, New Relic, AppDynamics, etc. to identify performance bottlenecks and provide performance engineering recommendations Ability to contribute to performance engineering of applications, such as SW/HW sizing and network, server, and code optimization Exposure to GCP or a similar cloud platform (Azure or AWS) and ability to interpret metrics using cloud monitoring tools (OpenShift CaaS, Cloud Run metrics dashboards, etc.) Strong problem-solving and analytical skills, ability to work independently, and self-motivated Excellent written and verbal communication skills in English Responsibilities Deep understanding of Dynatrace, New Relic, AppDynamics, etc. to identify performance bottlenecks and provide performance engineering recommendations Ability to contribute to performance engineering of applications, such as SW/HW sizing and network, server, and code optimization Exposure to GCP or a similar cloud platform (Azure or AWS) and ability to interpret metrics using cloud monitoring tools (OpenShift CaaS, Cloud Run metrics dashboards, etc.) Strong problem-solving and analytical skills, ability to work independently, and self-motivated Excellent written and verbal communication skills in English Qualifications 5 to 7 years of experience with JMeter performing end-to-end performance testing of software applications
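
As a hedged sketch of the results-analysis side of this role rather than a prescribed toolchain, the snippet below summarizes a JMeter CSV results file (JTL) per transaction label; the column names follow JMeter's default CSV output, and the file path is an assumption.

```python
# Minimal sketch: per-label latency and error summary from a JMeter CSV results file.
# Columns "label", "elapsed", "success" follow JMeter's default CSV output; path is hypothetical.
import pandas as pd

results = pd.read_csv("results.jtl")  # JMeter run configured for CSV output
results["ok"] = results["success"].astype(str).str.lower() == "true"

latency = results.groupby("label")["elapsed"].agg(
    samples="count",
    avg_ms="mean",
    p90_ms=lambda s: s.quantile(0.90),
    p95_ms=lambda s: s.quantile(0.95),
)
errors = 100 * (1 - results.groupby("label")["ok"].mean())

summary = latency.round(1).assign(error_pct=errors.round(2))
print(summary)  # feeds SLA checks and workload-model validation
```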

Posted 1 day ago

Apply

5.0 years

5 - 8 Lacs

Hyderābād

On-site

Job Description for Senior DevOps Engineer Our Company At Teradata, we believe that people thrive when empowered with better information. That’s why we built the most complete cloud analytics and data platform for AI. By delivering harmonized data, trusted AI, and faster innovation, we uplift and empower our customers—and our customers’ customers—to make better, more confident decisions. The world’s top companies across every major industry trust Teradata to improve business performance, enrich customer experiences, and fully integrate data across the enterprise. Location: Hybrid – Hyderabad, India Job Summary We are looking for a senior DevOps engineer to help build, automate, and validate end-to-end machine learning pipelines at enterprise scale. This role combines responsibilities across DevOps engineering and Quality Engineering (QE), enabling smooth CI/CD workflows while ensuring robust test automation, release quality, and operational excellence. You’ll work closely with data scientists, ML engineers, platform teams, and product managers to operationalize AI/ML workflows, enabling smooth transitions from experimentation to production. You’ll also define test strategies, automate validation pipelines, and champion the overall quality of Teradata’s AI/ML platform and analytic products. What You’ll Do Collaborate with AI/ML teams to build and maintain automated test frameworks for Teradata features, SQL-based components, and analytics functions. Define and implement end-to-end test strategies for analytic products. Own quality gates in CI pipelines to block releases with critical bugs. Collaborate with the Agentic AI team to validate models used by intelligent agents (e.g., LLM-based systems). Who You'll Work With You’ll collaborate with: AI/ML engineers and data scientists building enterprise-grade models and intelligent agents. Product managers and quality leaders defining success criteria and customer expectations for model-based features. Release management teams ensuring delivery standards and model lifecycle hygiene. What We’re Looking For Minimum Requirements 5+ years of industry experience in a QA, DevOps, or software engineering role. Solid coding skills in Python, including test frameworks (e.g., PyTest, unittest) and data libraries (pandas, NumPy). Strong experience in SQL and databases. Experience in building and running test automation in a CI/CD pipeline. Experience in AWS, Azure, or Google Cloud. Familiarity with containerization (Docker) and cloud platforms (AWS, GCP, or Azure). Working knowledge of Linux-based systems and networking fundamentals. Preferred Qualifications Bachelor’s or Master’s in Computer Science, Artificial Intelligence, or a related field (or equivalent experience). Familiarity with Teradata Vantage, model scoring, or cloud-native deployments (AWS/GCP/Azure). Prior experience testing or validating ML or analytics-based applications. Experience in testing data pipelines, ETL flows, or large-scale data processing systems. Strong communication and documentation skills; ability to write test plans and share findings with both technical and non-technical audiences. #LI-NT1
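
A hedged illustration of the CI quality gates this role describes: the PyTest cases below validate a tabular scoring result, with the DataFrame, table, and column names assumed for the example rather than taken from Teradata's suite.

```python
# Minimal PyTest sketch: quality gates over a tabular model-scoring result.
# The fixture stands in for data fetched via SQL from the warehouse; names are illustrative.
import pandas as pd
import pytest

@pytest.fixture
def scores() -> pd.DataFrame:
    # In a real pipeline this would come from a SQL query against the target database.
    return pd.DataFrame({
        "customer_id": [1, 2, 3],
        "churn_score": [0.12, 0.87, 0.45],
    })

def test_scores_are_probabilities(scores):
    assert scores["churn_score"].between(0.0, 1.0).all()

def test_no_duplicate_customers(scores):
    assert not scores["customer_id"].duplicated().any()

def test_no_missing_values(scores):
    assert not scores.isna().any().any()
```

Running pytest -q in CI and failing the pipeline on any assertion error is one common way to implement the release-blocking gate the posting mentions.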

Posted 1 day ago

Apply

10.0 years

7 - 8 Lacs

Hyderābād

On-site

About the Role We are seeking a highly skilled Technical Architect – Java to lead the design, architecture, and implementation of enterprise-grade applications. The ideal candidate will have deep expertise in Java technologies, modern cloud-native architectures, and be adept at translating business requirements into scalable and robust technical solutions. Key Responsibilities Architect and design highly scalable, performant, and secure enterprise applications. Define application architecture , integration patterns, and best practices for development. Lead technical discussions , solutioning, and design reviews with cross-functional teams. Collaborate with stakeholders to understand functional and non-functional requirements . Provide technical leadership and mentorship to engineering teams. Drive cloud adoption strategies and deployment on AWS using Docker and Kubernetes. Ensure solutions are aligned with industry best practices in scalability, reliability, and maintainability. Oversee the end-to-end lifecycle of application delivery from architecture to production. Required Skills & Qualifications Core Expertise: Java 8 & above Strong understanding of OOPS principles and design patterns Spring Boot and Microservices architecture SQL Server / PostgreSQL ELK Stack (Elasticsearch, Logstash, Kibana) for logging and monitoring Kafka / RabbitMQ for messaging and event-driven systems AWS cloud services and architecture best practices Docker & Kubernetes for containerization and orchestration React.js for front-end development integration Experience: Minimum 10 years in software development with at least 3 years in a technical architect role. Proven track record of designing and delivering large-scale, distributed systems. Soft Skills: Strong communication and interpersonal skills. Excellent problem-solving and analytical abilities. Ability to work in a fast-paced, collaborative environment. Preferred Qualifications Certification in AWS Solutions Architect or Kubernetes . Experience with DevOps CI/CD pipelines and automation tools. Exposure to security frameworks and compliance standards.

Posted 1 day ago

Apply

0 years

0 Lacs

Hyderābād

On-site

Req ID: 328089 NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Technical Access Services professional to join our team in Hyderabad, Telangana (IN-TG), India (IN). Purpose Drives technical implementation of processes or capabilities to address compliance, security and general IT risks to ensure that IT supports the business objectives of the group. Oversees implementation of technologies and programs with focus on local and regional deployments. Job Accountabilities Key Accountabilities Responsible for effective delivery of projects and services on mitigation of risks and control implementation leading to effective risk management. Assists projects to identify and align business with technical capabilities, design security controls and test their effectiveness ensuring the product implemented addresses both business and security needs. Applies and supports IT security, risk and compliance technologies leading to higher adoption rates. Provides consultancy on security architecture and product standards ensuring business meets the minimum security standard. Participates in supplier assurance management and identifies gaps in services provided. Analyzes security response processes leading to effective risk reduction. Provides support and consultancy for gathering of security metrics information leading to identification of areas for improvement. Contributes to the execution of security initiatives by deploying, configuring and supporting security technologies especially those utilizing physical and virtual architectures. Contributes to the development of comprehensive project plans and drives the execution of these plans to ensure project success. Act as a project assurance representative on many projects simultaneously. Ensure that project objectives are delivered on time and meet stakeholder expectations for quality. Manage project task execution independently. Provide security subject matter expertise, evaluating proposals and recommending available solutions. Contributes to projects and ensures direction is consistent with the business goals and objectives identified for a given solution; provides guidance to engaged, 3rd party contractors and consultants. Updates and delivers status of initiatives; manages plan to remediate compliance gaps and supports audit initiatives. Leads implementation and support of processes, security supplier interactions, application security, and incident response. Responsible for implementing and enforcing IT policies, stakeholder engagement, security technology implementation, security architecture, and security supplier management and engagement. Contribute to the delivery of assigned IT projects in own area of expertise for specific lines of business, collaborating with IT colleagues from across the wider function to agree an approach for project/program management. Support maintenance of IT security capabilities in alignment with defined service level agreements.
Extended Hours during Peak Periods/Shift Work/Holiday Work, as required Regular Predictable Attendance Skills Functional/Technical Skills Application Design, Architecture - Proficiency Level Intermediate Change Control - Proficiency Level Intermediate Information Security Architecture - Proficiency Level Intermediate Information Security Technologies - Proficiency Level Intermediate IT Service Management (ITSM) - Proficiency Level Basic Network and Internet Security - Proficiency Level Intermediate System and Technology Integration - Proficiency Level Intermediate System Development Life Cycle - Proficiency Level Intermediate Technical Troubleshooting - Proficiency Level Intermediate Risk Management - Proficiency Level Intermediate Specific Technical Skills Intermediate proficiency in scripting and general automation (Python or PowerShell) Intermediate proficiency in Cloud Automation (AWS, Azure and GCP) Basic proficiency in containerization and orchestration (Docker, Kubernetes) Basic proficiency in CI/CD Pipelines (Jenkins, GitHub Actions) Basic proficiency in IaC (Terraform) Required Bachelor's degree and 5 or more years of experience in the information technology area IT Governance experience Project management experience Experience with software development lifecycle process Experience in Information Security and User Experience Design MS Office experience Experience with O365 SharePoint / Teams Technical Writing skills Knowledge of private/public cloud services, concepts of cloud security and Zero Trust About NTT DATA NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com. NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.


Featured Companies