
76 Tekton Jobs

Set up a job alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

10 - 15 Lacs

Gurugram

Work from Office

5-8 years of experience deploying, managing, and operating Red Hat OpenShift clusters in on-prem, hybrid, or public cloud environments.
- Administer Persistent Volumes (PV) and Persistent Volume Claims (PVC) using CephFS, RBD, NFS, and cloud-native storage solutions.
- Build and maintain CI/CD pipelines using OpenShift Pipelines (Tekton), Jenkins, or GitOps tools like Argo CD.
- Design and troubleshoot persistent storage provisioning using StorageClasses and AccessModes (RWO, RWX).
- Manage Ingress Controllers, Routes, Services, and NetworkPolicies for secure traffic flow.
- Implement and troubleshoot Service Mesh (Istio/OSM) and API Gateways (e.g., 3scale) where applicable.
- Implement observability using Prometheus, Grafana, and Alertmanager.
- Set up centralized logging via Loki.
- Strong understanding of containerization technologies.
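For illustration only, the PVC provisioning with StorageClasses and AccessModes described in this posting can be sketched as a manifest built in Python; the claim name, storage class, and size below are hypothetical, not taken from the posting:

```python
import json

def make_pvc(name: str, storage_class: str, access_mode: str, size: str) -> dict:
    """Build a Kubernetes PersistentVolumeClaim manifest as a plain dict."""
    assert access_mode in {"ReadWriteOnce", "ReadWriteMany", "ReadOnlyMany"}
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolumeClaim",
        "metadata": {"name": name},
        "spec": {
            "storageClassName": storage_class,   # e.g. a CephFS-backed class
            "accessModes": [access_mode],        # RWX = ReadWriteMany, RWO = ReadWriteOnce
            "resources": {"requests": {"storage": size}},
        },
    }

# An RWX claim of the kind used for shared CephFS storage (hypothetical class name)
pvc = make_pvc("shared-data", "ocs-storagecluster-cephfs", "ReadWriteMany", "10Gi")
print(json.dumps(pvc, indent=2))
```

RWX (ReadWriteMany) is what allows multiple pods to mount the same volume, which is why CephFS or NFS backends appear alongside it in the requirements above.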

Posted 4 days ago

Apply

12.0 - 16.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly skilled and experienced Java Senior Developer to join our team in the role of Vice President. Your responsibilities will include leading the design, development, and implementation of critical enterprise-level applications, ensuring performance, scalability, and security within a fast-paced and distributed environment. Deep technical expertise in Java and related technologies, strong leadership capabilities, and a commitment to delivering high-quality software solutions are essential for success in this role. Experience within the financial services industry is considered a strong advantage.

As a Java Senior Developer, your responsibilities will involve Technical Leadership & System Design. You will lead and manage technical components to ensure alignment with business objectives. Your role will also include defining architecture and leading the implementation of robust, scalable, and secure cloud-based applications, often leveraging microservices architecture and RESTful APIs.

Additionally, you will be responsible for Software Development & Engineering Excellence. This will involve designing, developing, and maintaining high-quality Java applications using modern Java versions and frameworks like Spring Boot. You will be expected to write clean, efficient, well-documented, and testable code, adhering to SOLID principles and software design best practices.

You will play a key role in ensuring Code Quality & Best Practices. This will include actively contributing to hands-on coding, conducting thorough code reviews, and providing constructive feedback to team members to ensure adherence to coding standards, best practices, and architectural guidelines. You will be instrumental in driving the adoption of modern engineering practices including Agile, DevOps, CI/CD pipelines, and automated testing.

Collaboration & Communication will be another crucial aspect of your role. 
You will collaborate closely with product managers, business analysts, QA engineers, architects, and other stakeholders to understand requirements, define innovative solutions, and deliver results that meet business needs. Effective communication with various teams will be essential for the success of projects.

Mentorship & Guidance is also a significant part of this role. You will serve as an advisor, mentor, and coach to junior and mid-level developers, fostering a culture of technical excellence and continuous improvement. Providing technical guidance and ensuring alignment with architectural blueprints and best practices will be key responsibilities.

Continuous Learning & Strategy will be expected from you as well. Staying updated on emerging technologies and industry trends, evaluating their potential application to systems, and contributing to the overall technical strategy and roadmap are important aspects of this position.

Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 12+ years of hands-on experience in Java software development, with multiple years in a senior or lead role. Experience in financial services is often preferred.
- Strong proficiency in Java (Java 17 is often preferred), Spring Boot, and the Spring Framework.
- Extensive experience in designing and implementing microservices architecture and RESTful APIs.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and containerization technologies (e.g., Docker, Kubernetes, OpenShift).
- Familiarity with various database technologies (SQL and NoSQL, e.g., Oracle, MongoDB) and experience with message brokers (e.g., Kafka, RabbitMQ, Solace).
- Strong understanding and hands-on experience with CI/CD pipelines and DevOps practices.
- Proficiency in testing frameworks (JUnit, Mockito, Cucumber) and an automation-first mindset with TDD/BDD.
- Excellent problem-solving, analytical, and critical thinking skills.
- Strong communication, collaboration, and interpersonal skills.
- Ability to work independently, manage multiple priorities, and adapt to fast-paced agile environments.
- Leadership skills, including the ability to mentor and influence.

Posted 6 days ago

Apply

10.0 - 12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Job Description
Deep technical skills: hands-on coding and debugging knowledge in Java, J2EE, Spring Boot microservices, Spring Batch, Postgres, Redis, and GraphQL, with knowledge of cloud platforms, preferably GCP.
- GCP: Cloud Build and Cloud Run, Secret Manager, Pub/Sub, schedulers
- Code quality tools: FOSSA, SonarQube, Checkmarx, Cycode, 42Crunch
- Strong team leadership: mentorship, code reviews, support
- Proactive risk management: identifying and mitigating technical risks
- Delivery focus: meeting sprint goals, high-quality code
- Positive team attitude: collaboration, knowledge sharing, effective communication, and the ability to work in a large, diverse team
- Experience with software engineering craftsmanship techniques and best practices
- Practical understanding/usage of version control systems (Git/GitHub) and CI/CD tools (Cloud Build, Tekton)
- Experience with API automation tools Newman and JMeter

Responsibilities
- Experience piloting new technologies and designing implementation strategies
- Experience designing and implementing enterprise best practices for existing or new technology/tooling
- Experience with senior responsibilities, including dev code reviews, change management, and building technical roadmaps/backlogs
- Exposure to or experience in the following skills and techniques: Agile/PDO ceremonies, people and skills coaching, coordination and logistical planning, and business-focused cascades of technical strategies and/or roadmaps
- Experience using Test-Driven Development (TDD) and Behaviour-Driven Development (BDD)

Qualifications
B.E./B.Tech in Computer Science with 10+ years of experience in software development with Java/J2EE and GraphQL, with knowledge of cloud platforms, preferably GCP.

Posted 6 days ago

Apply

10.0 - 15.0 years

25 - 35 Lacs

Chennai

Work from Office

We are looking for an experienced Senior Software Engineer with strong expertise in full-stack development and cloud technologies. The ideal candidate will be responsible for designing, developing, testing, and maintaining scalable software applications and products. You will be deeply involved in the entire software development lifecycle, right from architecture design to deployment, while collaborating with cross-functional teams and driving user-centric solutions.

Key Responsibilities
- Engage with customers to understand use cases, pain points, and requirements, and advocate for user-focused solutions.
- Design, develop, and deliver software applications using various tools, frameworks, and methodologies (Agile).
- Assess business and technical requirements to determine the best technology stack, integration methods, and deployment strategy.
- Create high-level software architecture designs, including structure, components, and interfaces.
- Collaborate closely with product owners, designers, and architects to align on solutions.
- Define and implement software testing strategies, policies, and best practices.
- Continuously optimize application performance and adopt new technologies to enhance efficiency.
- Apply programming practices such as Test-Driven Development (TDD), Continuous Integration (CI), and Continuous Delivery (CD).
- Implement secure coding practices, including encryption and anonymization of user data.
- Develop user-friendly, interactive front-end interfaces and robust back-end services (APIs, microservices).
- Leverage cloud platforms and emerging technologies to build future-ready solutions.

Skills Required
- Programming & data engineering: Python, PySpark, API development, SQL/Postgres
- Cloud platforms & tools: Google Cloud Platform (BigQuery, Cloud Run, Dataflow, Dataproc, Data Fusion, Cloud SQL), IBM WebSphere Application Server
- Infrastructure & DevOps: Terraform, Tekton, Airflow
- Other expertise: MDM (Master Data Management), application optimization, microservices

Experience Required
- 10+ years of experience in IT, with 8+ years in software development.
- Strong practical experience in at least two programming languages, or advanced expertise in one.
- Experience mentoring and guiding engineering teams.

Posted 1 week ago

Apply

5.0 - 10.0 years

17 - 20 Lacs

Chennai

Work from Office

We are hiring for the role of Backend Developer.

Key Responsibilities
- Design, develop, and maintain scalable backend services using Java and Spring Boot.
- Build and optimize RESTful APIs, ensuring high performance and security.
- Work with Postgres databases for data modeling, queries, and optimization.
- Deploy and manage applications on Google Cloud Platform (GCP).
- Implement and manage CI/CD pipelines (preferably Tekton).
- Collaborate with cross-functional teams (Product, QA, DevOps) to deliver high-quality solutions.
- Participate in code reviews, troubleshoot production issues, and improve system reliability.

Skills Required
- Strong proficiency in Java and Spring Boot.
- Expertise in REST API development.
- Hands-on experience with Postgres DB.
- GCP experience is a must (deployment, services, integration).
- Experience with CI/CD pipelines (Tekton preferred).
- Strong problem-solving and debugging skills.

Experience Required
- 5+ years of professional experience in backend development.
- Proven track record in building and deploying scalable backend systems.

Posted 1 week ago

Apply

7.0 - 11.0 years

0 Lacs

Pune, Maharashtra

On-site

As a Senior Web Developer in Wholesale Lending Credit Risk (WLCR) technology team at Citi, you will play a crucial role in serving the Institutional Credit Management (ICM) by partnering with onshore and offshore teams to design and deliver innovative technology solutions for the front office Credit Risk Business. This entails being a core member of the technology team and implementing projects based on Java, SpringBoot, and Kafka using the latest technologies. This role presents an excellent opportunity to immerse yourself in the Wholesale Lending Division and gain exposure to business and technology initiatives aimed at maintaining a lead position among competitors. Your key responsibilities will include effectively interacting and collaborating with the development team, communicating development progress to the Project Lead, working closely with developers across different teams to implement business solutions, investigating possible bug scenarios and production support issues, and monitoring and controlling all phases of the development process. You will also be expected to utilize your in-depth specialty knowledge of applications development to analyze complex problems, provide evaluations of business processes and industry standards, consult with users/clients and technology groups on issues, recommend advanced programming solutions, and ensure essential procedures are followed while defining operating standards and processes. Furthermore, you will serve as an advisor or coach to new or lower-level analysts, operate with a limited level of direct supervision, exercise independence of judgment and autonomy, and act as a Subject Matter Expert (SME) to senior stakeholders and/or other team members. It is crucial that you appropriately assess risks when making business decisions, showcasing particular consideration for the firm's reputation, and safeguarding Citigroup, its clients, and assets by driving compliance with applicable laws, rules, and regulations. 
To be successful in this role, you should have at least 7 years of relevant experience, proficiency in systems analysis and programming of software applications (Java, SpringBoot, Oracle), experience working with SOA and microservices utilizing REST, and experience in developing cloud-ready applications and deployment pipelines on large-scale container platform clusters. Additionally, familiarity with Continuous Integration and Continuous Delivery environments, industry-standard best practices, debugging, tuning, and optimizing components, and SDLC lifecycle methodologies, along with excellent written and oral communication skills, are essential. Experience in developing applications in the Financial Services industry is preferred, and the ability to work under pressure and manage deadlines or unexpected changes in expectations or requirements is required.

Education-wise, a Bachelor's degree or University degree in Computer Science or equivalent is necessary for this role.

Citi is an equal opportunity and affirmative action employer, offering career opportunities to all qualified interested applicants. If you are a person with a disability and require reasonable accommodations to utilize search tools or apply for a career opportunity, please review the Accessibility at Citi information.

Posted 1 week ago

Apply

10.0 - 14.0 years

4 - 9 Lacs

Chennai

Work from Office

Java 17, Spring Boot, JPA, Spring Security, Flyway, Angular, Apigee, GCP, SonarQube, 42Crunch, FOSSA, Cycode, Git, Gradle, Tekton, CI/CD, SQL Server, PostgreSQL, Agile, TDD, BDD, Full Stack Developer, Senior Java Developer, Java Backend, Cloud Developer

Posted 1 week ago

Apply

5.0 - 10.0 years

1 - 1 Lacs

Chennai

Hybrid

Overview: TekWissen is a global workforce management provider operating throughout India and many other countries in the world. The client below is a global company with shared ideals and a deep sense of family. From our earliest days as a pioneer of modern transportation, we have sought to make the world a better place, one that benefits lives, communities, and the planet.

Job Title: Data Engineering Engineer II
Location: Chennai
Work Type: Hybrid

Position Description: Employees in this job function are responsible for designing, building, and maintaining data solutions, including data infrastructure, pipelines, etc., for collecting, storing, processing, and analyzing large volumes of data efficiently and accurately.

Key Responsibilities:
- Collaborate with business and technology stakeholders to understand current and future data requirements
- Design, build, and maintain reliable, efficient, and scalable data infrastructure for data collection, storage, transformation, and analysis
- Plan, design, build, and maintain scalable data solutions, including data pipelines, data models, and applications, for efficient and reliable data workflow
- Design, implement, and maintain existing and future data platforms like data warehouses, data lakes, data lakehouses, etc. for structured and unstructured data
- Design and develop analytical tools, algorithms, and programs to support data engineering activities, like writing scripts and automating tasks
- Ensure optimum performance and identify improvement opportunities

Skills Required: Google Cloud Platform (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud SQL), Terraform, Tekton, Airflow, Postgres, PySpark, Python, API
Skills Preferred: GenAI

Experience Required: Engineer II: 4+ years of data engineering work experience
Experience Preferred:
- Strong proficiency and hands-on experience in Python (must-have) and Java (nice-to-have).
- Experience building and maintaining data pipelines (batch or streaming), preferably on cloud platforms (especially GCP).
- Experience with at least one major distributed data processing framework (e.g., DBT, Dataform, Apache Spark, Apache Flink, or similar).
- Experience with workflow orchestration tools (e.g., Apache Airflow, Qlik Replicate, etc.).
- Experience working with relational databases (SQL) and understanding of data modeling principles.
- Experience with cloud platforms (preferably GCP; AWS or Azure will also do) and relevant data services (e.g., BigQuery, GCS, Data Factory, Dataproc, Dataflow, S3, EMR, Glue, etc.).
- Experience with data warehousing concepts and platforms (BigQuery, Snowflake, Redshift, etc.).
- Understanding of concepts related to integrating or deploying machine learning models into production systems.
- Experience working in an Agile development environment and hands-on experience with an Agile work management tool (Rally, JIRA, etc.).
- Experience with version control systems, particularly Git.
- Solid problem-solving, debugging, and analytical skills.
- Excellent communication and collaboration skills.
- Experience working in a production support team (L2/L3) for operational support.

Preferred Skills and Qualifications (Nice to Have):
- Familiarity with data quality and data governance concepts.
- Experience building and consuming APIs (REST, gRPC) related to data or model serving.
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related field.

Education Required: Bachelor's Degree
Education Preferred: Bachelor's Degree

TekWissen Group is an equal opportunity employer supporting workforce diversity.
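As a rough illustration of the batch-pipeline responsibilities above, a minimal extract-transform-load step might look like this in plain Python; the record schema, the mileage field, and the validity filter are invented for the example and merely stand in for reads and writes against services like GCS or BigQuery:

```python
from typing import Iterable

def extract() -> Iterable[dict]:
    # Stand-in for reading from a source system (e.g. a GCS export)
    return [
        {"vehicle_id": "A1", "miles": 120.0},
        {"vehicle_id": "A2", "miles": -5.0},   # invalid record
        {"vehicle_id": "A3", "miles": 310.5},
    ]

def transform(rows: Iterable[dict]) -> list[dict]:
    # Drop invalid rows and add a derived column
    return [
        {**r, "km": round(r["miles"] * 1.60934, 2)}
        for r in rows if r["miles"] >= 0
    ]

def load(rows: list[dict]) -> int:
    # Stand-in for a warehouse write (e.g. a BigQuery load job); returns row count
    return len(rows)

loaded = load(transform(extract()))
print(loaded)  # 2
```

In a real pipeline each stage would be an Airflow task or a Dataflow transform, but the shape stays the same: validate, derive, then write.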

Posted 1 week ago

Apply

6.0 - 11.0 years

13 - 23 Lacs

Chennai

Work from Office

Job Description: This consultant-level Infrastructure Engineer is responsible for automating, building, and operating IT hardware and software resources in on-prem, cloud, or hybrid scenarios in support of internal customer needs. They are responsible for a wide variety of activities that orchestrate operations of Linux-based IT infrastructure, such as automation of administrative tasks across large fleets of servers, capacity planning for workloads, configuring and securing the base operating systems, creating, improving, and providing metrics and observability analytics, and installing and maintaining security and manageability agents. They would also use cloud and other off-prem offerings to automate integration with cost-efficient solutions such as storage-as-a-service, containers, and Kubernetes that operate the required workloads.

Education: Bachelor of Science in Computer Science, Information Technology, or another related discipline.

Experience: Minimum of 5+ years of progressively growing responsibilities in Linux OS engineering and automation, with a firm understanding of two or more of the mainstream Linux distributions.

Skills Required: Linux, Jenkins, CI/CD, Tekton, automation, Red Hat, Ubuntu, cloud infrastructure, Terraform, Ansible, PowerShell, deployment, GitHub, ITIL service management, performance tuning, technical assistance, technical analysis, technical support, technical documentation, systems engineering

Skills Preferred: Ability to communicate and work with cross-functional teams and all levels of management; Ansible; application support; automated scripting; CI/CD; Python; Java; Kubernetes; OpenShift; testing; VMware; computer hardware experience

Profile Description: Responsible for planning, configuring, and maintaining servers and the base operating systems, creating and optimizing virtualized environments and default agents, and providing observability and reliability metrics.

Work Requirements: Requires knowledge of the following technologies:
1. Working knowledge of Ansible and Ansible Automation Platform
2. Bourne shell scripting
3. One or more higher-level languages, such as Python, Ruby, Perl, Java, or Groovy, to understand maintainability and other software engineering principles
4. Knowledge of at least two different Linux distributions
5. Working knowledge of Kubernetes, with OpenShift experience preferred
6. Working knowledge of at least one pipeline tool, such as Tekton, Jenkins, or CircleCI
7. Understanding of code testing and test-driven development
8. Understanding of virtualization (such as VMware, OpenShift, etc.) and server hardware (such as HPE, Dell, SuperMicro, etc.) manageability technologies

Job Responsibilities:
1. Automate the creation and testing of Linux server templates for virtual machines and physical hardware using Jenkins and Tekton CI/CD pipelines
2. Automate the installation, upgrade, and configuration of Linux operating systems, including SUSE, Red Hat, and Ubuntu, on on-prem and cloud infrastructure using a multitude of tools such as Terraform, Ansible, Bourne shell, and PowerShell
3. Automate manual processes associated with infrastructure deployment via GitOps and infrastructure-as-code
4. Manage security requirements, performance optimizations, and technical direction for the Linux server operating systems
5. Provide L3 engineering support to operations teams for all aspects of the Linux server operating systems
6. Provide cross-team support for deploying infrastructure to multiple target environments, including on-prem, plant, and cloud

Education Required: Bachelor's Degree
Education Preferred: Master's Degree

Additional Information: Technical qualifications:
- Overall 10+ years in IT development/operations/support/migrations
- Minimum of 5+ years of progressively growing responsibilities in Linux OS engineering and automation, with a firm understanding of two or more of the mainstream Linux distributions
- 3+ years of experience with Tekton pipelines for automation
- 5+ years of experience with Git repo management and usage
- 3+ years of creating/managing Ansible playbooks
- 3+ years of PowerShell, Ansible, and scripting
- Infrastructure experience with servers, network, storage, database, and backups
- Exceptional technical writing skills (runbooks, DR plans, platform architecture, etc.)
- Ability to work with a variety of cross-functional teams to support deployment to remote locations across the globe
- Must be a self-starter, capable of working independently and within a team with minimum supervision
- Must have good communication skills
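The "administrative tasks across large fleets of servers" mentioned in this posting usually reduce to fanning one task out over an inventory and collecting per-host results. A minimal stdlib sketch, with the host names and the no-op task invented for illustration (a real version would shell out via SSH or invoke Ansible):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical fleet; in practice this would come from an inventory (e.g. Ansible)
HOSTS = ["web01", "web02", "db01"]

def patch_host(host: str) -> tuple[str, bool]:
    """Stand-in for an administrative task (e.g. applying an OS update)."""
    # A real implementation would run the task remotely; here it always succeeds.
    return (host, True)

def run_fleet_task(hosts: list[str]) -> dict[str, bool]:
    # Fan the task out across the fleet, collecting per-host results
    with ThreadPoolExecutor(max_workers=8) as pool:
        return dict(pool.map(patch_host, hosts))

results = run_fleet_task(HOSTS)
print(results)  # every host reports success in this stub
```

The thread pool bounds concurrency so a large fleet does not open thousands of connections at once; the per-host result map is what feeds the reporting and observability metrics the role describes.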

Posted 1 week ago

Apply

4.0 - 6.0 years

20 - 25 Lacs

Chennai

Work from Office

Position Description: Employees in this job function are responsible for designing, building, and maintaining data solutions, including data infrastructure, pipelines, etc., for collecting, storing, processing, and analyzing large volumes of data efficiently and accurately.

Key Responsibilities:
1) Collaborate with business and technology stakeholders to understand current and future data requirements
2) Design, build, and maintain reliable, efficient, and scalable data infrastructure for data collection, storage, transformation, and analysis
3) Plan, design, build, and maintain scalable data solutions, including data pipelines, data models, and applications, for efficient and reliable data workflow
4) Design, implement, and maintain existing and future data platforms like data warehouses, data lakes, data lakehouses, etc. for structured and unstructured data
5) Design and develop analytical tools, algorithms, and programs to support data engineering activities like writing scripts and automating tasks
6) Ensure optimum performance and identify improvement opportunities

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Full Stack Data Engineer at our company, you will collaborate with Data Scientists and Product Development business partners to create cutting-edge data products that align with our company objectives. Your responsibilities will include landing data, developing new data products, enhancing existing ones, and collaborating with Analytics & Business partners to validate solutions for production release.

You should have a Bachelor's degree and at least 2+ years of experience with GCP services such as BigQuery, Dataproc, Dataplex, Data Fusion, Terraform, Tekton, Airflow, Cloud Storage, and Pub/Sub. Proficiency in Git or another version control tool is required. While API knowledge is considered beneficial, it is not mandatory. Familiarity with the Agile framework is a plus. A coding background with at least 2 years of experience in Python and SQL is preferred. Knowledge of Astro or Airflow is advantageous but not mandatory. A fundamental understanding of the automotive industry and/or product development will be beneficial for this role.

We are looking for candidates with a flexible mindset, the ability to adapt to change, the ability to collaborate effectively with business and analytics partners, and the capacity to perform well under pressure. You must be capable of developing robust code within tight timelines to align with business requirements and global metrics.

Key skills required for this role include Python, Dataproc, Airflow, PySpark, Cloud Storage, DBT, Dataform, NAS, Pub/Sub, Terraform, API, BigQuery, Data Fusion, Google Cloud Platform (GCP), and Tekton.
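The Python-plus-SQL proficiency this posting asks for can be illustrated with a tiny, self-contained example using the standard library's sqlite3 module; the table name and data are invented, standing in for the kind of aggregation that would run as BigQuery SQL in the stack described above:

```python
import sqlite3

# In-memory database standing in for a warehouse table
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trips (vehicle_id TEXT, miles REAL)")
conn.executemany(
    "INSERT INTO trips VALUES (?, ?)",
    [("A1", 100.0), ("A1", 50.0), ("B2", 30.0)],
)

# Aggregate total miles per vehicle
rows = conn.execute(
    "SELECT vehicle_id, SUM(miles) FROM trips GROUP BY vehicle_id ORDER BY vehicle_id"
).fetchall()
print(rows)  # [('A1', 150.0), ('B2', 30.0)]
conn.close()
```

Parameterized `executemany` inserts and a grouped aggregate cover the two halves of the skill: moving data with Python, shaping it with SQL.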

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Summary: We are seeking a highly experienced and strategic Senior Manager, Cloud Engineering, to lead our Noida SRE and Cloud Engineering teams and drive the evolution of our infrastructure, CI/CD pipelines, and cloud operations. This role is ideal for a hands-on leader who thrives in a fast-paced environment, is passionate about automation, scalability, and reliability, and can collaborate and communicate effectively.

Key Responsibilities:

Leadership & Strategy
- Lead and mentor DevOps teams, fostering a culture of collaboration, innovation, and continuous improvement.
- Define and implement DevOps strategies aligned with business goals and engineering best practices.
- Collaborate with software engineering, QA, and product teams to ensure seamless integration and deployment.

Infrastructure & Automation
- Oversee the design, implementation, and maintenance of scalable cloud infrastructure (AWS).
- Drive automation of infrastructure provisioning, configuration management, and deployment processes.
- Ensure high availability, performance, and security of production systems.

CI/CD & Monitoring
- Architect and maintain robust CI/CD pipelines to support rapid development and deployment cycles.
- Implement monitoring, logging, and alerting systems to ensure system health and performance.
- Manage incident response and root cause analysis for production issues.

Governance & Compliance
- Ensure compliance with security policies, data protection regulations, and industry standards.
- Develop and enforce operational best practices, including disaster recovery and business continuity planning.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- 8+ years of experience in DevOps, Site Reliability Engineering, or Infrastructure Engineering, with an understanding of best practices.
- 5+ years in a leadership or managerial role.
- Expertise in AWS and infrastructure-as-code tools (Terraform, CloudFormation).
- Strong experience with CI/CD tools (Jenkins, GitHub Actions, Tekton) and container orchestration (Docker, Kubernetes).
- Proficiency in scripting languages (Python, Bash, Go, PowerShell).
- Excellent communication, problem-solving, and project management skills.
- Problem-solving mindset with a focus on continuous improvement.

Preferred Qualifications:
- Certifications in cloud technologies (AWS Certified DevOps Engineer, etc.).
- Experience with security and compliance frameworks (SOC 2, ISO 27001).
- Experience with Agile methodologies and familiarity with DevSecOps practices.
- Experience managing .NET environments and Kubernetes clusters.
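Much of the automation and incident-response work this role describes rests on small reliability building blocks like retry-with-backoff. A minimal sketch in Python; the delays, the broad exception handling, and the flaky stub function are illustrative, not from the posting:

```python
import time

def retry(func, attempts: int = 3, base_delay: float = 0.01):
    """Call func, retrying with exponential backoff on failure."""
    for i in range(attempts):
        try:
            return func()
        except Exception:
            if i == attempts - 1:
                raise  # exhausted retries: surface the error
            time.sleep(base_delay * (2 ** i))  # 0.01s, 0.02s, ...

calls = {"n": 0}

def flaky():
    # Fails twice, then succeeds -- a stand-in for a transient API error
    calls["n"] += 1
    if calls["n"] < 3:
        raise RuntimeError("transient failure")
    return "ok"

result = retry(flaky)
print(result)  # "ok" after two retries
```

In production this pattern usually adds jitter and retries only on known-transient error classes, so that genuine failures still fail fast.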

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The ideal candidate for this position should have 3+ years of experience in full-stack software development, along with expertise in cloud technologies and services, preferably GCP. In addition, the candidate should have at least 3 years of experience practicing statistical methods such as ANOVA and principal component analysis. Proficiency in Python, SQL, and BigQuery is a must. The candidate should also have experience with tools such as SonarQube, CI/CD, Tekton, Terraform, GCS, GCP Looker, Google Cloud Build, Cloud Run, Vertex AI, Airflow, TensorFlow, etc.

Experience in training, building, and deploying ML and DL models is an essential requirement for this role. Familiarity with HuggingFace, Chainlit, Streamlit, and React would be an added advantage. The position is based in Chennai and Bangalore. A minimum of 3 to 5 years of relevant experience is required for this role.
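As a quick refresher on the ANOVA skill mentioned above, the one-way F-statistic can be computed by hand in plain Python; the sample groups below are invented for the example:

```python
def one_way_anova_f(groups: list[list[float]]) -> float:
    """One-way ANOVA F-statistic: between-group mean square over within-group mean square."""
    k = len(groups)                          # number of groups
    n = sum(len(g) for g in groups)          # total observations
    grand_mean = sum(sum(g) for g in groups) / n
    # Between-group sum of squares (df = k - 1)
    ssb = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares (df = n - k)
    ssw = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ssb / (k - 1)) / (ssw / (n - k))

# Identical group means -> no between-group variance -> F == 0
print(one_way_anova_f([[1.0, 2.0, 3.0], [1.0, 2.0, 3.0], [1.0, 2.0, 3.0]]))  # 0.0
```

A large F indicates the group means differ more than within-group noise would explain; in practice one would compare it against the F-distribution with (k-1, n-k) degrees of freedom via scipy or statsmodels.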

Posted 2 weeks ago

Apply

8.0 - 10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

JOB DESCRIPTION

GDIA Mission and Scope: The Global Data Insights and Analytics (GDI&A) department at Ford Motor Company is looking for qualified people who can develop scalable solutions to complex real-world problems using Machine Learning, Big Data, Statistics, Econometrics, and Optimization. The goal of GDI&A is to drive evidence-based decision making by providing insights from data. Applications for GDI&A include, but are not limited to, Connected Vehicle, Smart Mobility, Advanced Operations, Manufacturing, Supply Chain, Logistics, and Warranty Analytics.

We are seeking a highly technical and experienced individual to fill the role of Tech Anchor/Solution Architect within our Industrial System Analytics (ISA) team. As a Tech Anchor, you will provide technical leadership and guidance to the development team, driving the design and implementation of cloud analytic solutions using GCP tools and techniques.

RESPONSIBILITIES
- Provide technical guidance, mentorship, and code-level support to the development team
- Work with the team to develop and implement solutions using GCP tools (BigQuery, GCS, Dataflow, Dataproc, etc.) and APIs/microservices
- Ensure adherence to security, legal, and Ford standard/policy compliance
- Drive effective and efficient delivery from the team, focusing on speed
- Identify risks and implement mitigation/contingency plans
- Assess the overall health of the product and raise key decisions
- Manage onboarding of new resources
- Lead the design and architecture of complex systems, ensuring scalability, reliability, and performance
- Participate in code reviews and contribute to improving code quality
- Champion Agile software processes, culture, best practices, and techniques
- Ensure effective usage of Rally and derive meaningful insights
- Ensure implementation of DevSecOps and software craftsmanship practices (CI/CD, TDD, pair programming)

QUALIFICATIONS
- Bachelor's/Master's/PhD in Engineering, Computer Science, or a related field
- Senior-level experience (8+ years) as a software engineer
- Deep and broad knowledge of:
  - Programming languages: Java, JavaScript, Python, SQL
  - Front-end technologies: React, Angular, HTML, CSS
  - Back-end technologies: Node.js, Python frameworks (Django, Flask), Java frameworks (Spring)
  - Cloud technologies: GCP (BigQuery, GCS, Dataflow, Dataproc, etc.)
  - Deployment practices: Docker, Kubernetes, CI/CD pipelines
- Experience with Agile software development methodologies
- Understanding of or exposure to CI, CD, and Test-Driven Development (GitHub, Terraform/Tekton, 42Crunch, SonarQube, FOSSA, Checkmarx, etc.)

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

chennai, tamil nadu, india

On-site

JOB DESCRIPTION The Strategy & Enterprise Analytics team, part of the Global Data Insight & Analytics (GDI&A) organization, is looking for an experienced Software Engineer to develop and deliver innovative AI Assistants. As a key member of our team, you will collaborate with business partners in the Legal Ops Analytics and AI areas to identify and implement new AI solutions that drive business results. We are looking for a software engineer with 5+ years of experience building high-impact software products, preferably in the domain of analytics and AI. You should be a humble and collaborative individual who thrives in a fast-paced environment and is passionate about developing and delivering AI Assistants that drive business impact.
RESPONSIBILITIES
Lead the design, development, and implementation of innovative, scalable, and high-quality software solutions.
Drive technical strategy, architectural patterns, and best practices, ensuring alignment with company goals and long-term vision.
Make high-level technical decisions, including technology selection, and influence organizational technical strategy.
Create novel solutions and implement advanced architectural patterns, focusing on domain-driven design, clean architecture, event-driven patterns, caching, partitioning, latency, scalability, and availability.
Provide strategic insights and recommendations to leadership, proactively identifying gaps and proposing solutions.
Design, implement, and optimize systems for performance, security, privacy, and compliance, anticipating future requirements and building extensible solutions.
Deliver business outcomes by building systems that meet Service Level Objectives (SLOs), implementing sophisticated testing strategies, and driving quality tool adoption.
Lead and ensure high-quality code reviews, manage branching strategies, and promote clean coding practices.
Improve developer productivity by automating manual steps in CI/CD and reducing feedback loops.
Balance technical debt with business needs and collaborate effectively across teams and with leadership.
QUALIFICATIONS
Master's degree in Computer Science, Information Technology, Information Systems, Data Analytics, or a related field (or equivalent combination of education and experience).
5-7 years of experience in Data Engineering or Software Engineering, with at least 2 years of hands-on experience building and deploying cloud-based data platforms (GCP preferred).
Strong proficiency in SQL, Java, and Python, with practical experience designing and deploying cloud-based data pipelines using GCP services like BigQuery, Dataflow, and DataProc.
Expertise in one or more widely used programming languages and technologies, including Python, Java, JavaScript/TypeScript, HTML/CSS, or Angular/React.
Solid understanding of Service-Oriented Architecture (SOA) and microservices, and their application within a cloud data platform.
Experience with relational databases (e.g., PostgreSQL, MySQL), NoSQL databases, and columnar databases (e.g., BigQuery).
Knowledge of data governance frameworks, data encryption, and data masking techniques in cloud environments.
Familiarity with CI/CD pipelines, Infrastructure as Code (IaC) tools like Terraform and Tekton, and other automation frameworks.
Excellent analytical and problem-solving skills, with the ability to troubleshoot complex data platform and microservices issues.
Experience monitoring and optimizing cost and compute resources for processes in GCP technologies (e.g., BigQuery, Dataflow, Cloud Run, DataProc).
A passion for data, innovation, and continuous learning.
Proven ability to design resilient strategies, implement sophisticated release management, lead critical incident resolution, and conduct Root Cause Analyses (RCAs).
Excellent oral, written, and interpersonal communication skills.
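Of the architectural concerns listed above, caching is the most compact to demonstrate. A minimal sketch using Python's standard-library memoization, where `expensive_lookup` is a hypothetical stand-in for a slow backend call:

```python
from functools import lru_cache

CALLS = {"count": 0}  # counts how often the "backend" is actually hit

@lru_cache(maxsize=256)
def expensive_lookup(key: str) -> str:
    # Stand-in for a slow call (database query, REST API, model inference).
    CALLS["count"] += 1
    return key.upper()

expensive_lookup("contract-42")   # miss: hits the backend
expensive_lookup("contract-42")   # hit: served from cache, no backend call
assert CALLS["count"] == 1
```

Production systems would typically swap the in-process cache for a shared one (e.g. Redis or Memcached), but the access pattern is the same.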

Posted 2 weeks ago

Apply

2.0 - 4.0 years

0 Lacs

bengaluru, karnataka, india

On-site

In the Age of AI, Cprime reshapes operating models and rewires workflows to deliver enterprise transformation. We are your Intelligent Orchestration Partner, combining strategic consulting with industry-leading platforms to drive innovation, enhance efficiency, and shift your enterprise toward AI-native thinking. For over 20 years, we've changed the way companies operate by transforming their people, processes, and technology, including partnering with 300 of the Fortune 500 companies. In this new era, Cprime helps companies unlock unprecedented speed and efficiency by embedding AI at the core of their business and infusing it into every function, process, and team.
Must-have skills: Infra Vulnerability Management and Kubernetes/Containers.
What You Will Do
Conduct vulnerability scans, analyze reports, and validate potential findings; contribute to process improvements; and document.
Configure and manage vulnerability scanners for both VM and Container (Kubernetes) environments, including their integration into the client's software development lifecycle.
Track and guide vulnerability remediation efforts across the organization. Escalate issues and problems when needed.
Coordinate PCI-DSS vulnerability scans, and support other compliance and risk management activities in the area of Vulnerability Management.
Interface and coordinate work efficiently and effectively with business colleagues and vendors in global locations and time zones.
Qualifications And Skills
2-3 years of demonstrated ability within information security vulnerability management, including the remediation process to address Operating System (Linux/Unix) vulnerabilities and misconfigurations.
Experience with Kubernetes environments, including building, deploying, and supporting containerized images in Cloud environments.
Experience with continuous delivery and integration (CI/CD) in Cloud and infrastructure engineering and related tools (Jenkins/Tekton, GitHub, etc.), and experience with programming or scripting languages such as Python/Go or Bash/PowerShell.
Self-starter with a bias towards action who can thrive in a fast-paced and ambiguous environment.
Desired Qualifications
Experience with security vulnerability management tools is a plus (e.g. Tenable, Anchore).
Knowledge of industry-standard risk scoring methodologies (CVSS, EPSS, etc.).
Experience with data analytics (querying, analysis, and visualization) solutions (Splunk, Hadoop, etc.) is a plus.
Experience using ServiceNow, including features (related to Vulnerability Response and Orchestration) within ServiceNow, is highly preferred.
What We Believe In
At Cprime we believe in facilitating social justice action internally, in industry, and within our communities. We believe part of our mission is to expand the minds, hearts, and opportunities of our Cprime teammates and within the broader community to include those who have been historically marginalized.
Equal Employment Opportunity Statement
Cprime is an equal-opportunity employer that is committed to diversity and inclusion in the workplace. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, disability status, protected veteran status, or any other characteristic protected by federal, state, or local laws.
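Much of the vulnerability-management scripting described above reduces to parsing scanner output and ranking findings. A hedged Python sketch — the finding schema here is invented; real scanners such as Tenable or Anchore each emit their own JSON formats:

```python
import json

# Invented sample of scanner output; real tools define their own schema.
RAW = json.dumps([
    {"cve": "CVE-2024-0001", "cvss": 9.8, "host": "app-01"},
    {"cve": "CVE-2024-0002", "cvss": 4.3, "host": "app-01"},
    {"cve": "CVE-2024-0003", "cvss": 7.5, "host": "db-01"},
])

def triage(raw: str, threshold: float = 7.0) -> list:
    """Keep findings at/above a CVSS threshold, worst first."""
    findings = json.loads(raw)
    urgent = [f for f in findings if f["cvss"] >= threshold]
    return sorted(urgent, key=lambda f: f["cvss"], reverse=True)

for f in triage(RAW):
    print(f'{f["cve"]} {f["cvss"]} {f["host"]}')
```

The same filter-and-rank loop is where an EPSS score or asset criticality would be folded in to prioritize remediation.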

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

As an Architect Consultant at TekWissen, a global workforce management provider, you will play a crucial role in providing technical leadership and architectural strategy for enterprise-scale data, analytics, and cloud initiatives. Your responsibilities will include partnering with business and product teams to design scalable, secure, and high-performing solutions that align with enterprise architecture standards and business goals. Additionally, you will assist GDIA teams in architecting new and existing applications using Cloud architecture patterns and processes. In this role based in Chennai, you will collaborate with product teams to define, assemble, and integrate components according to client standards and business requirements. You will support the product team in developing technical designs and documentation, participate in proof of concepts, and contribute to the product solution evaluation processes. Your expertise will be crucial in providing architecture guidance, technical design leadership, and demonstrating the ability to work on multiple projects simultaneously. The required skills for this position include proficiency in GCP, Cloud Architecture, API, Enterprise Architecture, Solution Architecture, CI/CD, and Data/Analytics. Preferred skills include experience with BigQuery, Java, React, Python, LLM, Angular, GCS, GCP Cloud Run, Vertex, Tekton, Terraform, and strong problem-solving abilities. To excel in this role, you should have direct hands-on experience in Google Cloud Platform Architecture and a strong grasp of enterprise integration patterns, security architecture, and DevOps practices. Your demonstrated ability to lead complex technical initiatives and influence stakeholders across business and IT will be critical to your success. The ideal candidate for this position should possess a Bachelor's Degree and demonstrate a commitment to workforce diversity.
Join TekWissen Group as an Architect Consultant and contribute to making the world a better place through innovative technological solutions.

Posted 2 weeks ago

Apply

8.0 - 13.0 years

0 Lacs

pune, maharashtra

On-site

We are looking for a motivated and experienced Automation Tester to join our dynamic Commodities Technology team. In this role, you will contribute to the design, development, and execution of UI automation test suites. You will work collaboratively with developers, business analysts, and other stakeholders to ensure the delivery of high-quality software.
Responsibilities:
Develop and maintain automation scripts within established frameworks for UI testing.
Contribute to test plans and create test cases for functional and end-to-end testing.
Execute automated tests and perform preliminary analysis of results.
Report defects and track them through the resolution process.
Assist with manual testing as needed.
Collaborate with cross-functional and global teams (QA, Dev, and Product teams).
Skills:
8-13 years of hands-on experience in QA automation.
Programming skills in JavaScript/TypeScript.
Experience in any of the UI automation frameworks such as Selenium, Cypress, or Playwright.
Familiarity with BDD concepts and Cucumber.
Understanding of the Page Object Model (POM) design pattern.
Experience using Git for version control.
Analytical and troubleshooting skills.
Ability to work effectively in a team environment.
Good verbal and written communication skills.
Preferred Skills:
Familiarity with any of the CI/CD tools (Jenkins, Tekton, Harness, or Kubernetes).
Financial markets domain knowledge.
Experience with JIRA, Zephyr, and MongoDB.
If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
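The Page Object Model noted above keeps selectors and page interactions behind one class so tests stay readable when the UI changes. A minimal sketch with a stubbed driver — `FakeDriver` and the selectors are invented; a real suite would drive Selenium, Cypress, or Playwright:

```python
class FakeDriver:
    """Stub standing in for a Selenium/Playwright driver."""
    def __init__(self):
        self.fields = {}
        self.clicked = None
    def type(self, selector, value):
        self.fields[selector] = value
    def click(self, selector):
        self.clicked = selector

class LoginPage:
    # Selectors live in one place; tests never touch them directly.
    USER, PASSWORD, SUBMIT = "#user", "#password", "#submit"

    def __init__(self, driver):
        self.driver = driver

    def login(self, user, password):
        self.driver.type(self.USER, user)
        self.driver.type(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return self  # allows fluent chaining in test code

driver = FakeDriver()
LoginPage(driver).login("trader1", "s3cret")
assert driver.fields["#user"] == "trader1" and driver.clicked == "#submit"
```

If a selector changes, only `LoginPage` is edited; every test that calls `login()` keeps working unchanged, which is the point of the pattern.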

Posted 2 weeks ago

Apply

0.0 years

0 Lacs

pune, maharashtra, india

On-site

Job description
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Kubernetes Platform Engineer/Consultant Specialist.
In this role, you will:
Build and manage the HSBC GKE Kubernetes Platform to easily let application teams deploy to Kubernetes. Mentor and guide support engineers, and represent the platform technically through talks, blog posts and discussions.
Engineer solutions on the HSBC GKE Kubernetes Platform using coding, automation and Infrastructure as Code methods (e.g. Python, Tekton, Flux, Helm, Terraform). Manage a fleet of GKE clusters from a centrally provided solution.
Ensure compliance with centrally defined security controls and with operational risk standards (e.g. Network, Firewall, OS, Logging, Monitoring, Availability, Resiliency and Containers). Ensure good change management practice is implemented as specified by central standards. Provide impact assessments where requested for changes proposed on the HSBC GCP core platform.
Build and support continuous integration (CI), continuous delivery (CD) and continuous testing activities.
Carry out engineering activities to implement patches for VMs and containers provided centrally.
Support non-functional testing.
Update support and operational documentation as required.
Fault find and support application teams.
On a rotational on-call basis, provide out-of-business-hours support as part of our 24x7 coverage.
Requirements
To be successful in this role, you should meet the following requirements:
Demonstrable Kubernetes and Cloud Native experience - building, configuring and extending Kubernetes platforms.
Automation scripting (using languages and tools such as Terraform, Python, etc.).
Experience of working with Continuous Integration (CI), Continuous Delivery (CD) and continuous testing tools.
Experience of working with Kubernetes resource configuration tooling (Helm, Kustomize, kpt).
Experience working within an Agile environment.
Programming experience in one or more of the following languages: Python or Go.
Ability to quickly acquire new skills and tools.
You'll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by - HSBC Software Development India
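Managing a fleet of GKE clusters against centrally defined security controls often starts as a simple diff between required and reported controls. A toy Python sketch — the control names and cluster records are invented; a real check would query the GKE API or a central config service:

```python
# Hypothetical central policy: controls every cluster must have enabled.
REQUIRED_CONTROLS = {"logging_enabled", "network_policy", "shielded_nodes"}

# Invented fleet records; in practice these would come from the GKE API.
FLEET = [
    {"name": "gke-prod-eu", "controls": {"logging_enabled", "network_policy", "shielded_nodes"}},
    {"name": "gke-dev-apac", "controls": {"logging_enabled"}},
]

def compliance_report(fleet):
    """Map cluster name -> set of missing controls (empty set == compliant)."""
    return {c["name"]: REQUIRED_CONTROLS - c["controls"] for c in fleet}

report = compliance_report(FLEET)
assert report["gke-prod-eu"] == set()
assert report["gke-dev-apac"] == {"network_policy", "shielded_nodes"}
```

In a GitOps setup, the same comparison would run continuously (e.g. as a Flux or policy-controller check) rather than as an ad hoc script.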

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

chennai, tamil nadu

On-site

As a Global Workforce Management provider, TekWissen is dedicated to making the world a better place through modern transportation and community-focused initiatives. We are currently looking for a talented individual to join our team as an OpenShift Virtualization Engineer / Cloud Platform Engineer in Chennai. This position offers a hybrid work environment and the opportunity to work with a dynamic team on our on-premises cloud platform. In this role, you will be responsible for designing, implementing, administering, and managing our OpenShift Virtualization (KubeVirt) service. Your expertise in Kubernetes, OpenShift, and virtualization technologies will be essential in ensuring the stability, performance, and security of our virtualized workloads. Day-to-day responsibilities will include proactive monitoring, troubleshooting, and maintaining operational excellence. Key responsibilities of this role include: - Observability, monitoring, logging, and troubleshooting to ensure the stability of the OSV environment - Implementing end-to-end observability solutions and exploring Event Driven Architecture for real-time monitoring - Providing technical consulting and expertise to application teams requiring OSV solutions - Developing and maintaining internal documentation and customer-facing content on platform capabilities The ideal candidate will have experience in Kubernetes, OpenShift, deployment, services, storage capacity management, Linux, network protocols, Python scripting, Dynatrace, VMware, problem-solving, technical troubleshooting, communication, Red Hat, Terraform, Tekton, Ansible, GitHub, GCP, AWS, Azure, cloud infrastructure, CI/CD, and DevOps. A Bachelor's degree in Computer Science, Information Technology, or a related field, along with 5+ years of IT infrastructure experience and 2+ years focused on Kubernetes and/or OpenShift, is required. 
Preferred qualifications include certifications in OpenShift Virtualization, experience with IaC tools, familiarity with SDN and SDS solutions, knowledge of public cloud providers, and CI/CD pipelines. A Master's degree is preferred but not required. If you are a proactive problem solver with strong communication and collaboration skills, we encourage you to apply for this exciting opportunity to join our globally positioned team at TekWissen.
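The proactive monitoring and observability work described above ultimately comes down to evaluating metric streams against thresholds. A minimal Python sketch of a Prometheus-style "sustained breach" rule — the metric values and the rule itself are invented for illustration:

```python
def should_alert(samples, threshold, consecutive=3):
    """Fire only after `consecutive` samples in a row exceed the threshold,
    mimicking a Prometheus `for:` duration to suppress one-off spikes."""
    streak = 0
    for value in samples:
        streak = streak + 1 if value > threshold else 0
        if streak >= consecutive:
            return True
    return False

# One transient spike, then a sustained breach that should page someone.
cpu = [0.42, 0.95, 0.97, 0.40, 0.96, 0.97, 0.98]
assert should_alert(cpu, threshold=0.90) is True
assert should_alert([0.42, 0.95, 0.40], threshold=0.90) is False
```

Tools like Alertmanager or Dynatrace implement far richer versions of this, but the debounce-before-alerting idea is the same.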

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

bengaluru, karnataka, india

On-site

Senior Software Engineer (TypeScript Developer)
Position Overview
Job Title: Senior Software Engineer (TypeScript Developer)
Corporate Title: AVP
Location: Bangalore, India
Role Description
You will be joining the TDI Engineering Platforms and Practice group as a full stack developer working on our target-state secure pipelines and control automation stack. The pipeline is a key component in providing a frictionless software delivery experience for our customers and will be used by the entire organization. You will be responsible for designing, building and supporting a variety of automation, including GitHub Actions and Workflows and backend processes (Java/TypeScript), ensuring the highest standards of compliance without hindering the pace of delivery of our customer teams. This is a rare opportunity to help shape the future technology and culture of our firm.
What we'll offer you
As part of our flexible scheme, here are just some of the benefits that you'll enjoy:
Best in class leave policy
Gender neutral parental leaves
100% reimbursement under childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Employee Assistance Program for you and your family members
Comprehensive hospitalization insurance for you and your dependents
Accident and term life insurance
Complimentary health screening for 35 yrs. and above
Your key responsibilities
Building secure and reusable CI/CD components to provide provenance and governance around our SDLC practice, ensuring high-quality compliance
Integrating with existing developer tooling to gather information and automate
Ensuring the highest standards in security and supply chain integrity, in line with NIST, SLSA and other standards
Direct customer engagement to gather requirements and understand the disparate ways teams build software today
Developing supporting materials (software, training materials, workshops) to facilitate adoption
Continuously measuring the success of our solutions via a data-driven approach, feedback and continuous improvement
Your skills and experience
Experienced full stack developer (Java/JVM/TypeScript), likely 5+ years in industry
Extensive DevOps experience including CI/CD, SLI/SLOs, error budgets et al.
Extensive automation experience including GitHub Actions, TFE, and scripting such as Ansible or similar
Experience of varied orchestration technologies such as TeamCity, Jenkins and cloud-ready tools like ArgoCD and Tekton a plus
Cloud (K8s and/or GCP) expertise - training can be provided
Understanding of security concerns and frameworks such as SLSA and ensuring provenance of the SBOM a plus
Proven communication and influencing skills; experience coaching and mentoring a plus
How we'll support you
Training and development to help you excel in your career
Coaching and support from experts in your team
A culture of continuous learning to aid progression
A range of flexible benefits that you can tailor to suit your needs
About us and our teams
Please visit our company website for further information. We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group.
We welcome applications from all people and promote a positive, fair and inclusive work environment.
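The SLSA/SBOM provenance controls mentioned above often begin life as a simple pipeline gate. A hedged Python sketch that rejects an SBOM with unpinned components — the field names loosely follow a CycloneDX-style layout, and the document itself is invented:

```python
def sbom_gate(sbom: dict) -> list:
    """Return a list of policy violations; an empty list means the gate passes."""
    problems = []
    if not sbom.get("components"):
        problems.append("no components listed")
    for comp in sbom.get("components", []):
        if not comp.get("version"):
            problems.append(f"component {comp.get('name', '?')} has no pinned version")
    return problems

sbom = {
    "bomFormat": "CycloneDX",  # field names loosely follow CycloneDX
    "components": [
        {"name": "spring-core", "version": "6.1.3"},
        {"name": "left-pad"},  # unpinned: should fail the gate
    ],
}
violations = sbom_gate(sbom)
assert violations == ["component left-pad has no pinned version"]
```

Wired into a GitHub Actions workflow, a non-empty violation list would fail the job and block the release, which is the "quality gate" pattern the role describes.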

Posted 3 weeks ago

Apply


0.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description
Join our team focused on Google Cloud Data Messaging Services, leveraging technologies like Pub/Sub and Kafka to build scalable, decoupled, and resilient cloud-native applications. This position involves close collaboration with development teams, as well as product vendors, to implement and support the suite of Data Messaging Services offered within GCP and Confluent Kafka. GCP Data Messaging Services provide powerful capabilities for handling streaming data and asynchronous communication. Key benefits include:
Enabling real-time data processing and event-driven architectures
Decoupling applications for improved resilience and scalability
Leveraging managed services like Cloud Pub/Sub and integrating with Kafka environments (Apache Kafka, Confluent Cloud)
Providing highly scalable and available infrastructure for data streams
Enhancing automation for messaging setup and management
Supporting Infrastructure as Code practices for messaging components
The Data Messaging Services Specialist plays a crucial role as the corporation migrates and onboards applications that rely on robust data streaming and asynchronous communication onto GCP Pub/Sub and Confluent Kafka. This position requires staying abreast of the continual evolution of cloud data technologies and understanding how GCP messaging services like Pub/Sub, alongside Kafka, integrate with other native services like Cloud Run, Dataflow, etc., within the new Ford Standard app hosting environment to meet customer needs. This is an exciting opportunity to work on highly visible data streaming technologies that are becoming industry standards for real-time data processing.
Responsibilities
Develop a solid understanding of Google Cloud Pub/Sub and Kafka (Apache Kafka and/or Confluent Cloud).
Gain experience in using Git/GitHub and CI/CD pipelines for deploying messaging-related clusters and infrastructure.
Collaborate with Business IT and business owners to prioritize improvement efforts related to data messaging patterns and infrastructure.
Work with team members to establish best practices for designing, implementing, and operating scalable and reliable data messaging solutions.
Identify opportunities for adopting new data streaming technologies and patterns to solve existing needs and anticipate future challenges.
Create and maintain Terraform modules and documentation for provisioning and managing Pub/Sub topics/subscriptions, Kafka clusters, and related networking configurations, often with a paired partner.
Develop automated processes to simplify the experience for application teams adopting Pub/Sub and Kafka client libraries and deployment patterns.
Improve continuous integration tooling by automating manual processes within the delivery pipeline for messaging applications and enhancing quality gates based on past learnings.
Qualifications
Highly motivated individual with strong technical skills and an understanding of emerging data streaming technologies (including Google Pub/Sub, Kafka, Tekton, and Terraform).
Experience with Apache Kafka or Confluent Cloud Kafka, including concepts like brokers, topics, partitions, producers, consumers, and consumer groups.
Working experience in CI/CD pipelines, including building continuous integration and deployment pipelines using Tekton or similar technologies for applications interacting with Pub/Sub or Kafka.
Understanding of GitOps and other DevOps processes and principles as applied to managing messaging infrastructure and application deployments.
Understanding of Google Identity and Access Management (IAM) concepts and various authentication/authorization options for securing access to Pub/Sub and Kafka.
Knowledge of any programming language (e.g., Java, Python, Go) commonly used for developing messaging producers/consumers.
Experience with public cloud platforms (preferably GCP), with a focus on data messaging services.
Understanding of agile methodologies and concepts, or experience working in an agile environment.
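The decoupling benefit that Pub/Sub and Kafka provide can be illustrated in-process with Python's standard library, where a queue stands in for a topic. The event names are invented, and real code would use the google-cloud-pubsub or a Kafka client library instead:

```python
import queue
import threading

topic = queue.Queue()   # stands in for a Pub/Sub topic / Kafka partition
received = []

def consumer():
    # Subscriber: pulls messages until it sees a sentinel.
    while True:
        msg = topic.get()
        if msg is None:
            break
        received.append(msg.upper())

t = threading.Thread(target=consumer)
t.start()

# Publisher: fires events without knowing who (or how many) will consume them.
for event in ["vehicle.started", "vehicle.stopped"]:
    topic.put(event)
topic.put(None)          # sentinel so the sketch terminates
t.join()

assert received == ["VEHICLE.STARTED", "VEHICLE.STOPPED"]
```

The publisher never references the consumer, so either side can be scaled, replaced, or taken offline independently; that independence is exactly what the managed services provide at fleet scale.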

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

chennai, tamil nadu, india

On-site

Job Description
The minimum requirements we seek:
5+ years' experience in Software Engineering.
Bachelor's degree in computer science, computer engineering or a combination of education and equivalent experience.
Willingness to collaborate daily with team members.
A strong curiosity around how to best use technology to amaze and delight our customers.
2+ years' experience with developing for and deploying to GCP cloud platforms.
Experience in development in at least some from each of the following categories:
Languages: Java / Kotlin / JS / TS / Python / Other
Frontend frameworks: Angular / React / Vue / Other
Backend frameworks: Spring / Node / Other
Proven experience understanding, practicing, and advocating for software engineering disciplines from eXtreme Programming (XP), Clean Code, Software Craftsmanship, and Lean, including:
Paired / Extreme programming
Test-first/Test-Driven Development (TDD)
Evolutionary design
Minimum Viable Product
FOSSA, SonarQube, 42Crunch, etc.
Responsibilities
The Software Engineer will be responsible for the development and ongoing support/maintenance of the analytic solutions.
Product and requirements management: Participate in and/or lead the development of requirements, features, user stories, use cases, and test cases. Participate in stand-up operations meetings. Author process and design documents.
Design/develop/test/deploy: Work with the Business Customer, Product Owner, Architects, Product Designer, Software Engineers, and Security Controls Champion on solution design, development, and deployment.
Operations: Generate metrics, perform user access authorization, perform password maintenance, and build deployment pipelines.
Incident, problem, and change/service requests: Participate in and/or lead incident, problem, change and service request-related activities, including root cause analysis (RCA) and proactive problem management/defect prevention activities.
Qualifications
Our preferred qualifications:
Highly effective in working with other technical experts, Product Managers, UI/UX Designers and business stakeholders.
Delivered products that include web front-end development: JavaScript, CSS, frameworks like Angular, React, etc.
Comfortable with Continuous Integration/Continuous Delivery tools and pipelines, e.g. Tekton, Terraform, Jenkins, Cloud Build, etc.
Experience with machine learning, mathematical modeling, and data analysis is a plus.
Experience with CA Agile Central (Rally), backlogs, iterations, user stories, or similar Agile tools.
Experience in the development of microservices.
Understanding of fundamental data modeling.
Strong analytical and problem-solving skills.

Posted 3 weeks ago

Apply

8.0 - 12.0 years

11 - 15 Lacs

pune

Work from Office

Job Summary:
We are seeking a highly experienced Senior Azure DevOps Engineer with 8+ years of proven experience in designing, implementing, and managing Azure infrastructure and DevOps pipelines. The ideal candidate will have extensive hands-on expertise in Terraform, YAML deployments, Helm charts, and Azure services, and will play a key role in migrating workloads from AWS to Azure. Strong troubleshooting skills, deep knowledge of Azure infrastructure, and the ability to work collaboratively across teams are essential.
Key Responsibilities:
Design, develop, and manage Infrastructure as Code (IaC) using Terraform for provisioning Azure services.
Implement and maintain CI/CD pipelines using Azure DevOps, GitHub Actions, Argo CD, and Bamboo.
Deploy applications and infrastructure using YAML, Helm charts, and native Azure deployment tools.
Provide technical leadership in migrating workloads from AWS to Azure, ensuring optimal performance and security.
Manage and support containerized applications using Kubernetes and Helm in Azure environments.
Design robust, scalable, and secure Azure infrastructure solutions (compute, storage, network, database, and monitoring).
Troubleshoot deployment, integration, and infrastructure issues across cloud environments.
Collaborate with cross-functional teams to deliver infrastructure and DevOps solutions aligned with project goals.
Support monitoring and performance optimization using Azure Monitor and other tools.
Required Qualifications & Skills:
8+ years of hands-on experience in Azure infrastructure and DevOps engineering.
Deep expertise in Terraform, YAML, and Azure CLI/ARM templates.
Strong hands-on experience with core Azure services: compute, networking, storage, app services, etc.
Experience with CI/CD tools such as GitHub Actions, Azure DevOps, Bamboo, and Argo CD.
Proficient in managing and deploying applications using Helm charts and Kubernetes.
Proven experience in migrating cloud workloads from AWS to Azure.
Strong knowledge of Azure IaaS/PaaS, containerization, and DevOps best practices.
Excellent troubleshooting and debugging skills across build, deployment, and infrastructure pipelines.
Strong verbal and written communication skills for collaboration and documentation.
Preferred Certifications (Nice to Have):
AZ-400: Designing and Implementing Microsoft DevOps Solutions
HashiCorp Certified: Terraform Associate
Good to Have Skills:
Experience with Argo CD, Bamboo, Tekton, and other CI/CD tools.
Familiarity with AWS services to support migration projects.

Posted 3 weeks ago

Apply
Page 1 of 4

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies