3.0 - 7.0 years
10 - 15 Lacs
Mysuru
Work from Office
The Site Reliability Engineer is a critical role in cloud-based projects. An SRE works with the development squads to build platform and infrastructure management/provisioning automation and service monitoring, using the same methods used in software development to support application development. SREs create a bridge between development and operations by applying a software engineering mindset to system administration topics. They split their time between operations/on-call duties and developing systems and software that help increase site reliability and performance. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Overall 12+ years of experience required. Good exposure to operational aspects (monitoring, automation, remediation), including monitoring tools such as New Relic, Prometheus, ELK, distributed tracing, APM, AppDynamics, etc. Troubleshooting, documenting root cause analysis, and automating incident handling. Understands the architecture, the SRE mindset, and the data model. Platform architecture and engineering: ability to design and architect a cloud platform that can meet client SLAs/NFRs such as availability and system performance. The SRE will define the environment provisioning framework, identify potential performance bottlenecks, and design the cloud platform. Preferred technical and professional experience: Effectively communicate with business and technical team members. Creative problem-solving skills and superb communication skills. Telecom domain experience is an added plus.
Posted 1 week ago
0.0 years
5 - 7 Lacs
Noida, New Delhi, Gurugram
Work from Office
Role & responsibilities: * Design, implement, and maintain CI/CD pipelines utilizing Azure DevOps. * Develop infrastructure as code (IaC) using Terraform for deployment and configuration management in Azure. * Monitor and enhance application and infrastructure security within Azure environments. * Enable automated testing using Azure DevOps and SonarQube for code quality management. * Collaborate with development and operations teams to streamline and automate workflows. * Troubleshoot and resolve issues in development, test, and production environments. * Continuously evaluate and implement improvements to optimize performance, scalability, and efficiency. Qualifications: * Proven experience with Azure DevOps for CI/CD pipelines. * Strong proficiency in Terraform for infrastructure provisioning and management in Azure. * In-depth knowledge of Azure services (VMs, App Services, Storage, etc.). * Experience integrating and configuring SonarQube for code quality assessment. * Proficiency in scripting languages like PowerShell, YAML, Python, and shell scripting. * Solid understanding of DevOps best practices and methodologies. * Ability to troubleshoot complex issues and provide effective solutions. * Excellent communication and collaboration skills, with the ability to work effectively in a team environment. Preferred candidate profile: * Experience with containerization technologies (Docker, Kubernetes). * Familiarity with monitoring tools (e.g., Prometheus, Grafana). * Knowledge of agile development methodologies. * Certification in Azure (e.g., Azure Administrator Associate, Azure DevOps Engineer Expert) (optional).
Posted 1 week ago
6.0 - 11.0 years
8 - 12 Lacs
Mumbai, Pune, Chennai
Work from Office
Project description: As the Senior React Developer, you will lead the modernization and architecture of a web platform. You will be part of a team that designs, develops, and launches efficient, quality systems and solutions in support of core organizational functions. This individual will apply proven communication, analytical, and problem-solving skills to help identify, communicate, and resolve issues, opportunities, or problems in order to maximize the benefit of IT and business investments. The Developer is experienced and self-sufficient in performing his/her responsibilities, requiring little supervision but general guidance and direction. Responsibilities: Lead the design, development, and planning of the software architecture for the United for Business Web platform. Solve complex performance problems and architectural challenges. Perform code reviews and mentor your peers. Serve as an integral member of the development team to create practical solutions in an Agile/DevOps environment. Ensure consistency with an established software development architecture. Analyze and interpret requirements from the Business and UX Design teams. Introduce new technologies and best practices as needed to solve business problems. Help to troubleshoot, test, and maintain the quality and security of the platform. Ensure the technical feasibility of UI/UX designs with a focus on accessibility. Work in an Agile environment. Skills — Must have: BS/BA, preferably in a technical or scientific field, or equivalent experience, education, or training. 6+ years of experience in application design, development, installation, and modification of web applications. 3+ years of experience developing in JavaScript with React v16 & 17, Redux, Sagas, Webpack, and ES6, or equivalent experience. Familiarity with UI testing frameworks like Jest and Enzyme, and experience with TDD (test-driven development). Advanced knowledge of development methodologies, software design and design patterns, and integration standards, as well as their applicability to coding and testing cycles. Advanced knowledge of software engineering best practices such as versioning and version control, software packaging, and software release management using GitHub. Effective communication (verbal and written). Excels at triage and analysis of situations for production support. Proficient at on-time delivery with minimal supervision. Experience developing digital products that comply with accessibility standards (ADA/WCAG). Nice to have: OO experience in C++, C#, or Java. HTML, JavaScript, CSS. Git/GitHub code repositories. TeamCity or equivalent configuration tools. DevOps experience using AWS tools. Cloud technologies: AWS, including CDN. UI analytics (Google Analytics, Quantum Metric). App performance tools (Datadog, Grafana). Mobile web technologies. Exposure to Couchbase NoSQL DB and/or Dynamo. Location: Pune, Mumbai, Chennai, Bengaluru.
Posted 1 week ago
2.0 - 4.0 years
7 - 11 Lacs
Jaipur
Work from Office
Position Overview: We are seeking a skilled Data Engineer with 2-4 years of experience to design, build, and maintain scalable data pipelines and infrastructure. You will work with modern data technologies to enable data-driven decision making across the organisation. Key Responsibilities: Design and implement ETL/ELT pipelines using Apache Spark and orchestration tools (Airflow/Dagster). Build and optimize data models on Snowflake and cloud platforms. Collaborate with analytics teams to deliver reliable data for reporting and ML initiatives. Monitor pipeline performance, troubleshoot data quality issues, and implement testing frameworks. Contribute to data architecture decisions and work with cross-functional teams to deliver quality data solutions. Required Skills & Experience: 2-4 years of experience in data engineering or a related field. Strong proficiency with Snowflake, including data modeling, performance optimisation, and cost management. Hands-on experience building data pipelines with Apache Spark (PySpark). Experience with workflow orchestration tools (Airflow, Dagster, or similar). Proficiency with dbt for data transformation, modeling, and testing. Proficiency in Python and SQL for data processing and analysis. Experience with cloud platforms (AWS, Azure, or GCP) and their data services. Understanding of data warehouse concepts, dimensional modeling, and data lake architectures. Preferred Qualifications: Experience with infrastructure as code tools (Terraform, CloudFormation). Knowledge of streaming technologies (Kafka, Kinesis, Pub/Sub). Familiarity with containerisation (Docker, Kubernetes). Experience with data quality frameworks and monitoring tools. Understanding of CI/CD practices for data pipelines. Knowledge of data catalog and governance tools. Advanced dbt features, including macros, packages, and documentation. Experience with table format technologies (Apache Iceberg, Apache Hudi). Technical Environment: Data Warehouse: Snowflake. Processing: Apache Spark, Python, SQL. Orchestration: Airflow/Dagster. Transformation: dbt. Cloud: AWS/Azure/GCP. Version Control: Git. Monitoring: Datadog, Grafana, or similar.
Posted 1 week ago
5.0 - 8.0 years
14 - 18 Lacs
Hyderabad
Work from Office
The Role: We are looking for a skilled DevOps Engineer with a good background in Python, GitHub administration, and Artifactory management. The ideal candidate needs good knowledge of CI/CD pipeline best practices, artifact storage management, and GitHub administration, and proficient coding skills in Python. What you'll bring: Understand business and product needs and manage our global GitHub instance serving all our product engineering teams. Design, build, and execute an artifact storage strategy in a scalable and efficient manner. Communicate and collaborate with engineering and cross-functional teams to implement a feedback mechanism to optimize Artifactory usage. Design, build, and maintain complex Python applications. What you will need: Bachelor's Degree in Engineering or equivalent, with 5-8 years of experience in managing CI/CD pipelines, source code repositories, and artifact storage, and in software development. Enthusiastic learner, skilled in both the theory and practice of building and maintaining Python applications in the Django framework. Experience managing Artifactory and GitHub Enterprise Cloud across multiple large engineering teams. Works closely with data engineers, software developers, and other stakeholders to integrate solutions into existing systems, with systemic feedback and continuous training and optimization. Technical Skills: Mastery of the Python programming language. Proficient in Linux administration, especially on the command line. Proficient with containerization technologies.
Posted 1 week ago
6.0 - 8.0 years
12 - 16 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Design, implement, and manage scalable and highly available cloud infrastructure on AWS or GCP. Containerize applications using Docker and manage orchestration with Kubernetes. Collaborate with developers and QA teams to integrate CI/CD pipelines and automate deployment processes. Ensure system reliability, uptime, and performance by leveraging industry-leading monitoring tools such as Grafana, Dynatrace, etc. Troubleshoot system failures, conduct root cause analysis, and provide long-term solutions to prevent recurrence. Script and automate operational tasks using Python or Java to improve system efficiency. Maintain documentation of system architecture, procedures, and configurations. Participate in incident response and on-call support rotation if required. Required Skills & Qualifications: Minimum 5 years of hands-on experience in a DevOps/SRE role. Strong expertise in AWS or Google Cloud Platform (GCP). Deep understanding and practical experience with Docker and Kubernetes in production environments. Proficient in Java or Python for scripting, automation, and integrations. Experience with monitoring tools such as Grafana, Dynatrace, Prometheus, etc. Strong problem-solving skills and ability to work in a fast-paced environment. Excellent communication and documentation skills. Preferred Attributes: Prior experience in large-scale enterprise systems. Ability to work independently and take ownership of DevOps processes. Exposure to Agile/Scrum methodologies. Location: Hyderabad / Bangalore / Trivandrum / Pune
Posted 1 week ago
3.0 - 8.0 years
10 - 18 Lacs
Mumbai
Work from Office
We are looking for an experienced DevOps Engineer to join our infrastructure and platform team. You will play a key role in designing, implementing, and maintaining our CI/CD pipelines, automating infrastructure, ensuring system reliability, and improving overall developer productivity. The ideal candidate is well versed in on-prem and cloud platforms, infrastructure as code, and modern DevOps practices. Role & responsibilities: Design, build, and maintain CI/CD pipelines using tools like Jenkins and GitLab CI. Automate infrastructure provisioning and configuration using Terraform, Ansible, or CloudFormation. Manage and monitor production and staging environments across on-prem and cloud platforms (AWS). Implement containerization and orchestration using Docker and Kubernetes. Ensure system availability, scalability, and performance via monitoring, logging, and alerting tools (e.g., Prometheus, Grafana, ELK, Datadog). Maintain and improve infrastructure security, compliance, and cost optimization. Collaborate with development, QA, and security teams to streamline code deployment and feedback loops. Participate in on-call rotations and troubleshoot production incidents. Write clear and maintainable documentation for infrastructure, deployments, and processes. Preferred candidate profile: 3-15 years of experience in DevOps, SRE, or infrastructure engineering. Proficiency in scripting languages like Bash, Python, or Go. Strong hands-on experience with cloud platforms (preferably AWS). Deep understanding of the Docker and Kubernetes ecosystem. Experience with infrastructure automation tools such as Ansible, Terraform, or Chef. Familiarity with source control (Git), branching strategies, and code review practices. Solid experience with Linux administration, system performance tuning, and troubleshooting. Knowledge of networking concepts, load balancers, VPNs, DNS, and firewalls. Experience with monitoring/logging tools like Prometheus, Grafana, ELK, Splunk, Datadog, or Nagios, and log shippers like Filebeat, Fluentd, and Fluent Bit. Familiarity with security tools like Vault, AWS IAM, or cloud workload protection. Experience in high-availability, multi-region architecture design. Strong understanding of creating RPM packages and Yum repos. Strong understanding of JMeter scripting and test case writing. Strong understanding of artifact repository managers (JFrog, Nexus, Maven, NPM, NVM). Installation of open-source/enterprise tools from source files or RPM packages. Strong understanding of the tech stack (Redis, MySQL, Nginx, RabbitMQ, Tomcat, Apache, JBoss). Implement cloud-native solutions including load balancers, VPCs, IAM, Auto Scaling groups, CDNs, S3, Route 53, etc. SAST tools like SonarQube, Checkmarx, and JFrog Xray. Expertise in configuring and upgrading API gateways, preferably Google Apigee, Kong, etc.
Posted 1 week ago
5.0 - 10.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Description: GlobalLogic. Requirements: 1. 5-9 years of very good hands-on experience as a Python tools developer 2. Good hold on Python scripting 3. SDLC, SDET 4. Good communication skills 5. Education background: BE/BTech/ME/MTech/MCA 6. Readiness to work from the office. Job Responsibilities: 1. New tools development 2. Maintenance of already available tools 3. Collaboration with cross-functional teams 4. Regular interaction with the customer 5. Self-motivation and problem-solving skills. What We Offer: Exciting Projects: We focus on industries like high-tech, communication, media, healthcare, retail, and telecom. Our customer list is full of fantastic global brands and leaders who love what we build for them. Collaborative Environment: You can expand your skills by collaborating with a diverse team of highly talented people in an open, laid-back environment — or even abroad in one of our global centers or client facilities! Work-Life Balance: GlobalLogic prioritizes work-life balance, which is why we offer flexible work schedules, opportunities to work from home, and paid time off and holidays. Professional Development: Our dedicated Learning & Development team regularly organizes communication skills training (GL Vantage, Toastmasters), stress management programs, professional certifications, and technical and soft-skill trainings. Excellent Benefits: We provide our employees with competitive salaries, family medical insurance, Group Term Life Insurance, Group Personal Accident Insurance, NPS (National Pension Scheme), periodic health awareness programs, extended maternity leave, annual performance bonuses, and referral bonuses. Fun Perks: We want you to love where you work, which is why we host sports events and cultural activities, offer food at subsidised rates, and throw corporate parties. Our vibrant offices also include dedicated GL Zones, rooftop decks, and a GL Club where you can drink coffee or tea with your colleagues over a game of table tennis, and we offer discounts at popular stores and restaurants!
Posted 1 week ago
5.0 - 10.0 years
7 - 11 Lacs
Hyderabad
Work from Office
About The Role: As a Senior Backend Engineer, you will develop reliable, secure, and performant APIs that apply Kensho's AI capabilities to specific customer workflows. You will collaborate with colleagues from Product, Machine Learning, Infrastructure, and Design, as well as with other engineers within Applications. You have a demonstrated capacity for depth and are comfortable working with a broad range of technologies. Your verbal and written communication is proactive, efficient, and inclusive of your geographically distributed colleagues. You are a thoughtful, deliberate technologist and share your knowledge generously. Equivalent to Grade 11 Role (Internal). You will: Design, develop, test, document, deploy, maintain, and improve software. Manage individual project priorities, deadlines, and deliverables. Work with key stakeholders to develop system architectures, API specifications, implementation requirements, and complexity estimates. Test assumptions through instrumentation and prototyping. Promote ongoing technical development through code reviews, knowledge sharing, and mentorship. Optimize application scaling: efficiently scale ML applications to maximize compute resource utilization and meet high customer demand. Address technical debt: proactively identify and propose solutions to reduce technical debt within the tech stack. Enhance user experiences: collaborate with Product and Design teams to develop ML-based solutions that enhance user experiences and align with business goals. Ensure API security and data privacy by implementing best practices and compliance measures. Monitor and analyze API performance and reliability, making data-driven decisions to improve system health. Contribute to architectural discussions and decisions, ensuring scalability, maintainability, and performance of the backend systems.
Qualifications: At least 5 years of direct experience developing customer-facing APIs within a team. Thoughtful and efficient communication skills (both verbal and written). Experience developing RESTful APIs using a variety of tools. Experience turning abstract business requirements into concrete technical plans. Experience working across many stages of the software development lifecycle. Sound reasoning about the behavior and performance of loosely coupled systems. Proficiency with algorithms (including time and space complexity analysis), data structures, and software architecture. At least one domain of demonstrable technical depth. Familiarity with CI/CD practices and tools to streamline deployment processes. Experience with containerization technologies for application deployment and orchestration. Technologies We Love: Python, Django, FastAPI; mypy, OpenAPI; RabbitMQ, Celery, distributed messaging systems; OpenSearch, PostgreSQL, Redis; Git, Jsonnet, Jenkins, containerization technology, container orchestration platforms; Airflow, AWS, Terraform; Grafana, Prometheus; ML libraries: PyTorch, scikit-learn, pandas.
Posted 1 week ago
10.0 - 20.0 years
10 - 18 Lacs
Hyderabad, Chennai, Bengaluru
Hybrid
Google Cloud Certification: Associate Cloud Engineer or Professional Cloud Architect/Engineer. Hands-on experience with GCP services (Compute Engine, GKE, Cloud SQL, BigQuery, etc.). Strong command of Linux, shell scripting, and networking fundamentals. Proficiency in Terraform, Cloud Build, Cloud Functions, or other GCP-native tools. Experience with containers and orchestration: Docker, Kubernetes (GKE). Familiarity with monitoring/logging: Cloud Monitoring, Prometheus, Grafana. Understanding of IAM, VPCs, firewall rules, service accounts, and Cloud Identity. Excellent written and verbal communication skills.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer for Smart Operations (Global) at Linde, you will play a crucial role in leading the design, development, and maintenance of enterprise-level data architecture and engineering solutions. Your primary responsibility will be to create scalable, secure, and efficient data access across Linde's global operations. By constructing robust data platforms and pipelines, you will directly contribute to the development and deployment of AI products, fostering innovation and automation and driving business value across diverse functions and geographies. At Linde, we value our employees and offer a range of benefits to ensure a comfortable and enjoyable workplace environment. These benefits include loyalty offers, annual leave, an on-site eatery, employee resource groups, and supportive teams that foster a sense of community. We are committed to creating a positive work experience for all our employees. Every day at Linde presents an opportunity for learning, growth, and contributing to one of the world's leading industrial gas and engineering companies. Embrace this opportunity by taking your next step with us and joining our team. Linde values diversity and inclusion in the workplace, recognizing the importance of fostering a supportive work environment. We believe that our success is driven by the diverse perspectives of our employees, customers, and global markets. As an employer of choice, we strive to support employee growth, embrace new ideas, and respect individual differences. As a Data Engineer at Linde, your responsibilities will include designing and leading scalable data architectures, developing unified data platforms, building robust data pipelines, leveraging modern data engineering stacks, automating workflows, maintaining CI/CD pipelines, collaborating with stakeholders and IT teams, and continuously improving systems with the latest data technologies.
To excel in this role, you should possess a Bachelor's degree in Computer Science or related Engineering areas, along with 3+ years of experience in manufacturing settings developing data-engineering solutions. You should also have experience in evaluating and implementing data-engineering and software technologies; proficiency in programming languages and frameworks such as SQL, Python, Spark, and Databricks; and experience with data storage, developing data solutions, and using data visualization tools. Preferred qualifications include a Master's or PhD degree in Computer Science or related Engineering areas with five (5) years of experience in developing data-engineering solutions. Strong programming skills, knowledge of machine learning theory, and practical development experience are also desirable for this role. Join Linde, a leading global industrial gases and engineering company, and be part of a team that is dedicated to making the world more productive every day. Explore limitless opportunities for personal and professional growth while making a positive impact on the world. Be Linde. Be Limitless. If you are inspired by our mission and ready to contribute your skills and expertise, we look forward to receiving your complete application via our online job market. Let's talk about how you can be part of our dynamic team at Linde.
Posted 1 week ago
13.0 - 17.0 years
0 Lacs
karnataka
On-site
As the Head of Quality Assurance at Commcise, located in Bangalore, you will play a crucial role in managing testing activities to ensure the best user product experience. With 13-15 years of relevant experience, you will need an Engineering or IT degree. Your strong expertise in software testing concepts and methodologies, along with excellent communication skills and technical aptitude, especially in automation, will be essential for this role. Your responsibilities will require a deep understanding of capital markets, trading platforms, wealth management, and regulatory frameworks such as MiFID, SEC, SEBI, and FCA. Experience with financial instruments and post-trade processes will also be necessary. You will be required to define and implement comprehensive testing strategies covering functional and non-functional testing, as well as develop test governance models and enforce QA best practices. Your role will involve a strong grasp of programming concepts, coding standards, and test frameworks in languages like Java, Python, and JavaScript. Expertise in test automation frameworks such as Selenium and Appium, as well as API testing and knowledge of connectivity protocols, will be advantageous. Understanding AI and machine learning applications in test automation and driving AI-driven automation initiatives will be part of your responsibilities. Experience in continuous testing within CI/CD pipelines, knowledge of infrastructure as code and cloud platforms, and familiarity with observability tools for real-time monitoring will also be required. You should have expertise in performance testing tools and security testing methodologies, plus experience with resilience testing and chaos engineering. Strong leadership skills, team development abilities, and stakeholder management across various teams will be crucial in this role. An Agile mindset, experience leading Agile testing transformations, and implementing BDD/TDD practices will be part of your responsibilities. Strong strategic planning and execution skills, along with a willingness to be hands-on when required, will be essential for driving collaborative test strategies. This role offers an opportunity to work in a dynamic environment and contribute significantly to ensuring the quality and reliability of products in the financial technology industry.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
You should have a Bachelor's degree in Computer Science or a related field, or equivalent experience. With at least 3 years of experience in a similar role, you must be proficient in at least one backend programming language such as Java, Python, or Go. Additionally, you should have hands-on experience with cloud platforms like AWS, Azure, or GCP. A strong understanding of DevOps principles and practices is essential for this role, along with experience in containerization technologies like Docker and Kubernetes. You should also be familiar with configuration management tools such as Ansible, Puppet, or Chef, and have worked with CI/CD tools like Jenkins or GitLab CI. Excellent problem-solving and troubleshooting skills are a must, along with strong communication and collaboration abilities. Previous experience with databases like PostgreSQL or MySQL, as well as monitoring and logging tools such as Prometheus, Grafana, and the ELK stack, is required. Knowledge of security best practices and serverless technologies will be beneficial for this position. This job opportunity was posted by Ashok Kumar Samal from HDIP.
Posted 1 week ago
15.0 - 19.0 years
0 Lacs
karnataka
On-site
The responsibilities of the role include partnering with and acting as a trusted advisor to partners in both Consulting Sales and Delivery to assist in defining and delivering high-quality enterprise-capable solutions. You will work closely with team members to develop practical roadmaps for moving the enterprise towards the future state vision, considering business, technical, and delivery constraints. Analyzing partner requirements, current state architecture, and gaps to create a future state architecture vision for parts of the enterprise with a focus on reduced complexity, cost efficiencies, reuse, convergence, reduced risk, and/or improved business capabilities is a key aspect of the role. Additionally, you will participate in defining and operating the architecture governance process to ensure change initiatives align with the vision and roadmaps. Working closely with Domain Architects across key initiatives and projects to apply architecture principles and standards, and develop reference architectures and design patterns is also part of the responsibilities. Communication of principles, standards, vision, and roadmaps to partners and proactively addressing any questions or concerns identified is essential. Providing thought leadership on architectural topics, developing a forward-looking view of current and emerging technologies, and their impact on Enterprise Architecture are also important aspects of the role. Embedding Platform Thinking in all activities, owning and enhancing workflows and processes, promoting an environment of learning and development, and fostering the professional growth of team members are key responsibilities. The ideal candidate will possess a Bachelor's Degree in Engineering, Computer Science, or equivalent, with a Master's degree in Business or Technology being an advantage. A formal architecture certification such as TOGAF or equivalent is required. 
Candidates should have at least 15 years of experience in the IT industry, preferably in large, complex enterprises, with at least 7 years of experience in Enterprise Architecture in a large, multi-location, multi-national environment. Deep experience in delivering enterprise-scale IT solutions in a heterogeneous technology environment is necessary. Demonstrated expertise in application architecture, including EAI, microservices, and cloud-native technologies, as well as experience in domain-driven and event-driven architecture and technologies such as Kafka and Spark, is preferred. Experience with architecting, designing, and developing large-scale retail and business banking solutions using open systems, messaging, dedicated DB solutions, log analysis, log-based monitoring, and metrics-driven monitoring is desired. Familiarity with standard process methodologies, formal architecture frameworks/methodologies, architecture governance frameworks, and heterogeneous technology platforms is expected. A solid understanding of all domains of Enterprise Architecture and practical experience in data modeling, object modeling, design patterns, and Enterprise Architecture tools are required. The candidate should have experience leading teams in the successful deployment of applications built on cloud or on-prem enterprise environments for large Tier-1 banks and financial institutions. Experience with migrating from legacy applications to new solutions while ensuring minimal downtime, reduced risk, and an excellent customer experience is beneficial. IT strategy consulting experience is an advantage. Excellent verbal, written, and presentation skills are necessary for effectively communicating complex topics. The candidate should be able to think conceptually, identify patterns across different situations, drive consensus among partners with conflicting viewpoints, and manage people and teams effectively. Collaboration skills and the ability to motivate diverse teams are essential for success in this role.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
maharashtra
On-site
As a highly skilled Backend Developer, you will utilize your expertise in Kotlin and Java to design, develop, and deploy scalable backend services and microservices for modern cloud-native applications. Your key responsibilities will include building RESTful APIs, deploying applications on AWS, containerizing services using Docker and Kubernetes, implementing monitoring solutions, and optimizing performance and reliability. You will be expected to work closely with frontend developers, DevOps engineers, and product managers to ensure seamless integration and functionality. Your strong programming experience in Kotlin and Java, along with knowledge of RESTful APIs, AWS services, Kubernetes, Docker, and CI/CD pipelines, will be essential in this role. Additionally, familiarity with databases, software engineering best practices, and design patterns is required. Preferred skills such as experience with reactive programming, Infrastructure as Code using Terraform or CloudFormation, event-driven architectures, and knowledge of secure coding practices and application monitoring tools are a plus. With 6-8 years of experience in Java development, including Core Java, Hibernate, J2EE, JSP, and Kotlin, you are well equipped to excel in this position.
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Karnataka
On-site
You will be responsible for developing and maintaining high-performance server-side applications in Python following SOLID design principles. You will design, build, and optimize low-latency, scalable applications and integrate user-facing elements with server-side logic via RESTful APIs. Maintaining ETL and data pipelines, implementing secure data handling protocols, and managing authentication and authorization across systems will be crucial aspects of your role. Additionally, you will implement security measures and set up efficient deployment practices using Docker and Kubernetes. Leveraging caching solutions for enhanced performance and scalability will also be part of your responsibilities. To excel in this role, you should have strong experience in Python and proficiency in at least one Python web framework such as FastAPI or Flask. Familiarity with ORM libraries, asynchronous programming, event-driven architecture, and messaging tools like Apache Kafka or RabbitMQ is required. Experience with NoSQL and vector databases, Docker, Kubernetes, and caching tools like Redis will be beneficial. Additionally, you should possess strong unit testing and debugging skills and the ability to use monitoring and logging frameworks effectively. You should have a minimum of 1.5 years of professional experience in backend development roles with Python. Your expertise in setting up efficient deployment practices, handling data securely, and optimizing application performance will be essential for success in this position.
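The caching responsibility mentioned above is often prototyped in-process before reaching for Redis. Below is a minimal TTL-cache decorator sketch, stdlib only; the function and field names are illustrative, not from any actual codebase:

```python
import time
from functools import wraps

def ttl_cache(ttl_seconds):
    """Cache a function's results for ttl_seconds, keyed by positional args."""
    def decorator(fn):
        store = {}  # key -> (expiry_timestamp, value)
        @wraps(fn)
        def wrapper(*args):
            now = time.monotonic()
            hit = store.get(args)
            if hit and hit[0] > now:
                return hit[1]          # still fresh: serve from cache
            value = fn(*args)
            store[args] = (now + ttl_seconds, value)
            return value
        return wrapper
    return decorator

calls = []

@ttl_cache(ttl_seconds=60)
def fetch_profile(user_id):
    calls.append(user_id)              # stands in for a slow DB/API call
    return {"id": user_id, "name": f"user-{user_id}"}

fetch_profile(1)
fetch_profile(1)                       # second call is a cache hit
```

In production the same interface would typically be backed by Redis with an `EXPIRE`/TTL on each key; the decorator shape stays the same.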
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As an experienced Java Developer with over 5 years of expertise, you have a strong background in building scalable, distributed, and high-performance microservices utilizing Spring Boot and Apache Kafka. Your proficiency lies in designing and developing event-driven architectures, RESTful APIs, and integrating real-time data pipelines. You are well-versed in the full software development life cycle (SDLC), CI/CD practices, and Agile methodologies. Your key skills include Java (8/11/17), Spring Boot, Spring Cloud, Apache Kafka (Producer, Consumer, Streams, Kafka Connect), Microservices Architecture, RESTful Web Services, Docker, Kubernetes (basic knowledge), CI/CD (Jenkins, Git, Maven), Relational and NoSQL Databases (MySQL, PostgreSQL, MongoDB), Monitoring (ELK Stack, Prometheus, Grafana - basic), Agile/Scrum methodology, and Unit and Integration Testing (JUnit, Mockito). In your professional journey, you have developed and maintained multiple Kafka-based microservices handling real-time data ingestion and processing for high-volume applications. Your expertise extends to implementing Kafka consumers/producers with error-handling, retries, and idempotency for robust message processing. Additionally, you have designed and deployed Spring Boot microservices integrated with Kafka, PostgreSQL, Redis, and external APIs, showcasing your leadership in performance tuning and optimization to ensure low-latency and fault-tolerant behavior. If you are passionate about leveraging your skills in Java, Spring Boot, Apache Kafka, and microservices architecture to drive impactful projects and contribute to cutting-edge technologies, this opportunity might be the perfect match for you. Thank you for considering this role. Best regards, Renuka Thakur renuka.thakur@eminds.ai
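The retry and idempotency patterns described above can be sketched without a live Kafka broker. The code below is an illustrative model only: messages are plain dicts and the dedup set lives in memory, whereas a real consumer would track processed keys in a database or Redis and pull records from Kafka itself.

```python
import time

class TransientError(Exception):
    """Stands in for a recoverable failure (timeout, 5xx from a downstream)."""

def process_with_retries(message, handler, max_attempts=3, base_delay=0.01):
    """Retry a handler with exponential backoff; re-raise after max_attempts."""
    for attempt in range(1, max_attempts + 1):
        try:
            return handler(message)
        except TransientError:
            if attempt == max_attempts:
                raise
            time.sleep(base_delay * 2 ** (attempt - 1))

class IdempotentConsumer:
    """Skip messages whose key was already processed (at-least-once delivery
    means duplicates are expected and must be harmless)."""
    def __init__(self, handler):
        self.handler = handler
        self.seen = set()        # in production: a DB table or Redis set
        self.results = []

    def consume(self, messages):
        for msg in messages:
            if msg["key"] in self.seen:
                continue         # duplicate delivery: drop it
            self.results.append(process_with_retries(msg, self.handler))
            self.seen.add(msg["key"])

consumer = IdempotentConsumer(lambda m: m["value"].upper())
consumer.consume([{"key": "a", "value": "x"}, {"key": "a", "value": "x"},
                  {"key": "b", "value": "y"}])
```

The duplicate delivery of key `"a"` is processed exactly once, which is the property the posting's "idempotency" requirement refers to.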
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
You are an experienced and motivated DevOps and Cloud Engineer with a strong background in cloud infrastructure, automation, and continuous integration/delivery practices. Your role involves designing, implementing, and maintaining scalable, secure, and high-performance cloud environments on platforms like AWS, Azure, or GCP. You will collaborate closely with development and operations teams to ensure seamless workflow. Your key responsibilities include designing, deploying, and managing cloud infrastructure, building and maintaining CI/CD pipelines, automating infrastructure provisioning, monitoring system performance, managing container orchestration platforms, supporting application deployment, and ensuring security best practices in cloud and DevOps workflows. Troubleshooting and resolving infrastructure and deployment issues, along with maintaining up-to-date documentation for systems and processes, are also part of your role. To qualify for this position, you should have a Bachelor's degree in computer science, engineering, or a related field, along with a minimum of 5 years of experience in DevOps, Cloud Engineering, or similar roles. Proficiency in scripting languages like Python or Bash, hands-on experience with cloud platforms, knowledge of CI/CD tools and practices, and familiarity with containerization and orchestration are essential. Additionally, you should have a strong understanding of cloud security and compliance standards, and excellent analytical, troubleshooting, and communication skills. Preferred qualifications include certifications like AWS Certified DevOps Engineer, Azure DevOps Engineer Expert, or equivalent, as well as experience with GitOps, microservices, or serverless architecture. Join our technology team in Trivandrum and contribute to building and maintaining cutting-edge cloud environments while enhancing our DevOps practices.
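Deployment support of the kind described above often reduces to small automation utilities, for example waiting for a service to report healthy after a rollout before routing traffic to it. A stdlib-only sketch with a simulated probe follows; a real pipeline would issue an HTTP request against a health endpoint instead of calling a lambda:

```python
import time

def wait_until_healthy(check, timeout=30.0, interval=0.01):
    """Poll a zero-arg health check until it returns True or the deadline
    passes. `check` could wrap an HTTP probe against /healthz in practice."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if check():
            return True
        time.sleep(interval)
    return False

# Simulate a service that only passes its third probe, e.g. while warming up.
probes = iter([False, False, True])
ready = wait_until_healthy(lambda: next(probes), timeout=1.0)
```

Bounding the wait with a deadline rather than retrying forever is the important design point: a deployment step that never times out turns an unhealthy rollout into a hung pipeline.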
Posted 1 week ago
6.0 - 12.0 years
0 Lacs
Karnataka
On-site
As a DevOps Engineer at Capgemini, you will have the opportunity to shape your career according to your aspirations in a supportive and inspiring environment. You will work with a collaborative global community of colleagues to push the boundaries of what is achievable. By joining us, you will play a key role in assisting the world's top organizations in harnessing the full potential of technology to create a more sustainable and inclusive world. Your responsibilities will include building and managing CI/CD pipelines using tools such as Jenkins, GitLab CI, and Azure DevOps. You will automate infrastructure deployment using Terraform, Ansible, or CloudFormation, and set up monitoring systems with Prometheus, Grafana, and ELK. Managing containers with Docker and orchestrating them through Kubernetes will be a crucial part of your role. Additionally, you will collaborate closely with developers to integrate DevOps practices into the Software Development Life Cycle (SDLC). To excel in this position, you should ideally possess 6 to 12 years of experience in DevOps, CI/CD, and Infrastructure as Code (IaC). Your expertise should extend to Docker, Kubernetes, and cloud platforms such as AWS, Azure, or GCP. Experience with monitoring tools like Prometheus, Grafana, and ELK is essential, along with knowledge of security, compliance, and performance aspects. Being ready for on-call duties and adept at handling production issues are also required skills for this role. At Capgemini, you will enjoy a flexible work environment with hybrid options, along with a competitive salary and benefits package. Your career growth will be supported through opportunities for SAP and cloud certifications. You will thrive in an inclusive and collaborative workplace culture that values teamwork and diversity. Capgemini is a global leader in business and technology transformation, facilitating organizations in their digital and sustainable evolution. 
With a diverse team of over 340,000 members across 50 countries, Capgemini leverages its 55-year legacy to deliver comprehensive services and solutions, ranging from strategy and design to engineering. The company's expertise in AI, generative AI, cloud, and data, combined with industry knowledge and partnerships, enables clients to unlock the true potential of technology to meet their business requirements effectively.
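Monitoring setup with Prometheus, mentioned in the responsibilities above, usually starts with services exposing metrics in the Prometheus text exposition format. A small, stdlib-only sketch of that format follows; the metric names and labels are illustrative, not taken from any Capgemini project:

```python
def render_prometheus_metrics(metrics):
    """Render metrics in the Prometheus text exposition format.
    metrics: list of (name, type, help_text, labels_dict, value)."""
    lines = []
    for name, mtype, help_text, labels, value in metrics:
        lines.append(f"# HELP {name} {help_text}")
        lines.append(f"# TYPE {name} {mtype}")
        # Labels are sorted for a stable, diff-friendly output.
        label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
        lines.append(f"{name}{{{label_str}}} {value}" if labels
                     else f"{name} {value}")
    return "\n".join(lines) + "\n"

text = render_prometheus_metrics([
    ("http_requests_total", "counter", "Total HTTP requests.",
     {"method": "get", "code": "200"}, 1027),
])
```

In practice a client library (e.g. `prometheus_client`) generates this text and serves it on a `/metrics` endpoint for Prometheus to scrape; the sketch only shows what the scraped payload looks like.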
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a DevOps Engineer at NTT DATA Business Solutions, your role involves implementing and maintaining cloud infrastructure to ensure the smooth operation of the environment. You will be responsible for evaluating new technologies in infrastructure automation and cloud computing, looking for opportunities to enhance performance, reliability, and automation. Additionally, you will provide DevOps capability to team members and customers, perform code deployments, and manage release activities. Your responsibilities will also include resolving incidents and change requests, documenting solutions, and communicating them to users. You will work on optimizing existing solutions, diagnosing, troubleshooting, and resolving issues to ensure the smooth operation of services. Demonstrating a proactive attitude and aptitude for taking ownership of your work and collaborating with team members will be crucial. To excel in this role, you are required to have a Bachelor's degree in IT, computer science, computer engineering, or a related field, along with a minimum of 6 years of overall experience, including at least 3 years as a DevOps Engineer. Advanced experience with cloud infrastructure and cloud services, particularly on Microsoft Azure, is essential. You should also have expertise in container orchestration (Kubernetes, Docker, Helm), Linux scripting (Bash, Python), log and metrics management (ELK Stack), monitoring tools (Prometheus, Loki, Grafana, Dynatrace), and infrastructure as code (Terraform). Furthermore, you must be proficient in continuous integration/continuous delivery tools (GitLab CI, Jenkins, Nexus), infrastructure security principles, Helm, CI/CD pipeline configuration, and DevOps tools like Jenkins, SonarQube, Nexus, etc. Exposure to SDLC and Agile processes, SSO integrations, and AI tools is desirable. In addition to technical skills, you should possess a strong attitude and good soft and communication skills.
Experience in handling technically critical situations, driving expert teams, and providing innovative solutions is essential. Critical thinking, a DevOps mindset, and customer-centric thinking are key attributes for this role. Proficiency in English (written and spoken) is mandatory, while knowledge of other languages such as German or French is a plus. If you are looking to join a dynamic team at NTT DATA Business Solutions and transform SAP solutions into value, this opportunity is for you. Get empowered by our innovative and collaborative work environment. For further inquiries regarding this position, please contact the Recruiter, Pragya Kalra, at Pragya.Kalra@nttdata.com. Join us in our mission to deliver cutting-edge IT solutions and become a part of our global success story!
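Log and metrics management duties like those above typically begin with parsing structured log lines before they reach the ELK stack. A hedged, stdlib-only sketch follows; the timestamp/level layout is an assumed format for illustration, not one prescribed by the posting:

```python
import re
from collections import Counter

# Assumed line shape: "<date> <time> <LEVEL> <message>"
LOG_PATTERN = re.compile(
    r"^(?P<ts>\S+ \S+) (?P<level>DEBUG|INFO|WARN|ERROR) (?P<msg>.*)$"
)

def summarize_levels(lines):
    """Count structured log lines by severity; unparsable lines are skipped,
    which is where a real pipeline would route them to a dead-letter index."""
    counts = Counter()
    for line in lines:
        m = LOG_PATTERN.match(line)
        if m:
            counts[m.group("level")] += 1
    return counts

sample = [
    "2024-05-01 10:00:00 INFO service started",
    "2024-05-01 10:00:01 ERROR db connection refused",
    "2024-05-01 10:00:02 ERROR db connection refused",
    "not a structured line",
]
summary = summarize_levels(sample)
```

The same named-group approach scales up to a Logstash grok pattern or a Loki pipeline stage; the regex is just the smallest place to get the field extraction right first.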
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
We are seeking a skilled and dedicated FreeSWITCH Engineer with hands-on experience in VoIP systems to join our team. As a FreeSWITCH Engineer, you will be instrumental in the development, configuration, and maintenance of scalable and reliable FreeSWITCH-based voice infrastructures. Your responsibilities will include designing, deploying, and maintaining FreeSWITCH servers and related VoIP infrastructure. You will troubleshoot and resolve FreeSWITCH and VoIP-related issues, develop custom dial plans, modules, and call routing logic, and work with SIP, RTP, and related VoIP protocols. Monitoring system performance, ensuring high availability, collaborating with development, network, and support teams, and documenting configurations and system changes will also be part of your role. To be successful in this position, you should have a minimum of 2 years of hands-on experience with FreeSWITCH in a production environment, a strong understanding of VoIP technologies and the SIP protocol, experience with Linux system administration, and familiarity with scripting languages such as Bash, Python, and Lua. The ability to work independently in a remote setup and strong problem-solving and analytical skills are also essential. Preferred skills include experience with other VoIP platforms like Asterisk, Kamailio, or OpenSIPS, knowledge of WebRTC, RTP engines, or media servers, exposure to monitoring tools like Grafana and Prometheus, and familiarity with APIs and backend integration. Join us for a collaborative and supportive team environment where you will have the opportunity to work on innovative VoIP solutions at scale.
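Working with the SIP protocol, as this role requires, means being comfortable with its text-based request format. The sketch below parses only the start line and headers of a SIP INVITE; it deliberately ignores header folding, compact header forms, multi-value headers, and the message body, so treat it as an illustration rather than a production parser:

```python
def parse_sip_request(raw):
    """Parse the start line and headers of a SIP request (simplified)."""
    head = raw.split("\r\n\r\n", 1)[0]          # headers end at a blank line
    lines = head.split("\r\n")
    method, request_uri, version = lines[0].split(" ", 2)
    headers = {}
    for line in lines[1:]:
        if ":" in line:
            name, _, value = line.partition(":")
            headers[name.strip().lower()] = value.strip()
    return {"method": method, "uri": request_uri, "version": version,
            "headers": headers}

raw = ("INVITE sip:bob@example.com SIP/2.0\r\n"
       "Via: SIP/2.0/UDP client.example.com;branch=z9hG4bK776asdhds\r\n"
       "To: Bob <sip:bob@example.com>\r\n"
       "Call-ID: a84b4c76e66710\r\n"
       "\r\n")
msg = parse_sip_request(raw)
```

In FreeSWITCH itself this layer is handled by the SIP stack; a parser like this is mainly useful when writing test harnesses or debugging captured traffic.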
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
The ideal candidate for this role should possess the following technical skills:
- Proficiency in Java/J2EE, Spring/Spring Boot/Quarkus Frameworks, Microservices, Angular, Oracle, PostgreSQL, MongoDB
- Experience with AWS services such as S3, Lambda, EC2, EKS, CloudWatch
- Familiarity with Event Streaming using Kafka, Docker, and Kubernetes
- Knowledge of GitHub and experience with CI/CD Pipeline

In addition to the above, it would be beneficial for the candidate to also have the following technical skills:
- Hands-on experience with cloud platforms like AWS, Azure, or GCP
- Understanding of CI/CD pipelines and tools like Jenkins, GitLab CI/CD
- Familiarity with monitoring and logging tools such as Prometheus and Grafana

Overall, the successful candidate will be someone with a strong technical background in various technologies and platforms, along with the ability to adapt to new tools and frameworks as needed.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Cloud Architect at FICO, you will play a crucial role in architecting, designing, implementing, and managing cloud infrastructure solutions using tools like ArgoCD, Crossplane, GitHub, Terraform, and Kubernetes. You will lead initiatives to enhance our Cloud and GitOps best practices, mentor junior team members, collaborate with cross-functional teams, and ensure that our cloud environments are scalable, secure, and cost-effective. Your responsibilities will include designing, deploying, and managing scalable cloud solutions on public cloud platforms such as AWS, Azure, or Google Cloud; developing deployment strategies; utilizing Infrastructure as Code tools like Terraform and Crossplane; collaborating with various teams; providing mentorship; evaluating and recommending new tools and technologies; implementing security best practices; and ensuring compliance with industry standards. To be successful in this role, you should have proven experience as a senior-level engineer/architect in a cloud-native environment and extensive experience with ArgoCD and Crossplane. You should also bring proficiency in GitHub workflows, experience with Infrastructure as Code tools, leadership experience, proficiency in scripting languages and automation tools, expert knowledge of containerization and orchestration tools like Docker and Kubernetes, knowledge of network concepts and their implementation on AWS, experience with observability, monitoring, and logging tools, a grounding in security principles and frameworks, and familiarity with security-related certifications. Your unique strengths, leadership skills, and ability to drive and motivate a team will be essential in fulfilling the responsibilities of this role. At FICO, you will be part of an inclusive culture that values diversity, collaboration, and innovation. You will have the opportunity to make an impact, develop professionally, and participate in valuable learning experiences. 
FICO offers competitive compensation, benefits, and rewards programs to encourage you to bring your best every day. You will work in an engaging, people-first environment that promotes work/life balance, employee resource groups, and social events to foster interaction and camaraderie. Join FICO and be part of a leading organization in Big Data analytics, making a real difference in the business world by helping businesses use data to improve their decision-making processes. FICO's solutions are used by top lenders and financial institutions worldwide, and the demand for our solutions is rapidly growing. As part of the FICO team, you will have the support and freedom to develop your skills, grow your career, and contribute to changing the way businesses operate globally. Explore how you can fulfill your potential by joining FICO at www.fico.com/Careers.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
Punjab
On-site
You should have a minimum of 5-9 years of experience in Quality Assurance, with at least 2 years focused on test automation. It is essential that you have proven experience leading QA efforts for at least one major software project. Additionally, you should possess demonstrated experience in a full-stack environment, preferably with a strong understanding of MEAN/MERN architecture. Your expertise should include designing, developing, and maintaining robust and scalable test automation frameworks from scratch. You must be proficient in at least one modern programming language relevant to the project's stack, such as JavaScript or TypeScript. Moreover, you should have in-depth knowledge of test automation tools for both front-end and back-end testing, including frameworks like Cypress, Playwright, Selenium, Postman (with scripting), Newman, Mocha, and Chai. It is crucial for you to have a solid understanding of testing methodologies, including unit testing, integration testing, end-to-end testing, and regression testing. Experience with version control systems, specifically Git, is also required. In terms of soft skills, you should have the ability to create, document, and manage comprehensive test plans, strategies, and test cases. Additionally, experience in leading and mentoring a small team of QA engineers is highly valued. Desirable qualifications include prior experience as a developer, experience with security testing and non-functional testing, familiarity with CI/CD pipelines, knowledge of cloud platforms like AWS, GCP, or Azure, and experience with containerization technologies like Docker. Familiarity with other testing frameworks and tools like Jest, Mocha, and Chai, with monitoring and logging tools, and relevant certifications such as ISTQB Foundation Level or Agile Tester are also desirable. Strong attention to detail, a proactive approach to quality, and experience with defect tracking and project management tools are also beneficial for this role.
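The test-plan and test-case design duties above often come down to table-driven testing: one small system under test, a table of boundary and negative cases, each with a recorded rationale. The sketch below uses Python for brevity even though this posting leans JavaScript/TypeScript, and the password rules are invented purely for illustration:

```python
def validate_password(pw):
    """Toy system under test; these rules are illustrative, not the posting's."""
    return len(pw) >= 8 and any(c.isdigit() for c in pw) \
        and any(c.isalpha() for c in pw)

# Table-driven cases: (input, expected, why this case exists)
CASES = [
    ("abcdef1",   False, "boundary: 7 chars, one short of minimum"),
    ("abcdefg1",  True,  "boundary: exactly 8 chars"),
    ("abcdefgh",  False, "negative: no digit"),
    ("12345678",  False, "negative: no letter"),
    ("passw0rd!", True,  "positive: typical valid input"),
]

failures = [(pw, why) for pw, expected, why in CASES
            if validate_password(pw) is not expected]
```

The same table translates directly into `it.each` in Jest or a parameterized Mocha suite; keeping the "why" column alongside each case is what makes the table double as test documentation.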
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Maharashtra
On-site
As a Kubernetes Administrator/DevOps Senior Consultant, you will be responsible for designing, provisioning, and managing Kubernetes clusters for applications based on micro-services and event-driven architectures. Your role will involve ensuring seamless integration of applications with Kubernetes orchestrated environments and configuring and managing Kubernetes resources such as pods, services, deployments, and namespaces. Monitoring and troubleshooting Kubernetes clusters to identify and resolve performance issues, system errors, and other operational challenges will be a key aspect of your responsibilities. You will also be required to implement infrastructure as code (IaC) using tools like Ansible and Terraform for configuration management. Furthermore, you will design and implement cluster and application monitoring using tools like Prometheus, Grafana, OpenTelemetry, and Datadog. Managing and optimizing AWS cloud resources and infrastructure for managed containerized environments (ECR, EKS, Fargate, EC2) will be a part of your daily tasks. Ensuring high availability, scalability, and security of all infrastructure components, monitoring system performance, identifying bottlenecks, and implementing necessary optimizations are also crucial responsibilities. Your role will involve troubleshooting and resolving complex issues related to the DevOps stack, developing and maintaining documentation for DevOps processes and best practices, and staying current with industry trends and emerging technologies to drive continuous improvement. Creating and managing DevOps pipelines, IaC, CI/CD, and cloud platforms will also be part of your duties.

**Required Skills:**
- 4-5 years of extensive hands-on experience in Kubernetes Administration, Docker, Ansible/Terraform, AWS, EKS, and corresponding cloud environments.
- Hands-on experience in designing and implementing Service Discovery, Service Mesh, and Load Balancers.
- Extensive experience in defining and creating declarative files in YAML for provisioning.
- Experience in troubleshooting containerized environments using a combination of monitoring tools/logs.
- Scripting and automation skills (e.g., Bash, Python) for managing Kubernetes configurations and deployments.
- Hands-on experience with Helm charts, API gateways, ingress/egress gateways, and service meshes (ISTIO, etc.).
- Hands-on experience in managing Kubernetes networking (Services, Endpoints, DNS, Load Balancers) and storage (PV, PVC, Storage Classes, Provisioners).
- Design, enhance, and implement additional services for centralized Observability Platforms, ensuring efficient log management based on the Elastic Stack and effective monitoring and alerting powered by Prometheus.
- Design and implement CI/CD pipelines; hands-on experience with IaC, git, and monitoring tools like Prometheus, Grafana, Kibana, etc.

**Good to Have Skills:**
- Relevant certifications (e.g., Certified Kubernetes Administrator CKA / CKAD) are a plus.
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and their managed Kubernetes services.
- Capacity planning for Kubernetes clusters and cost optimization in on-prem and cloud environments.

**Preferred Experience:**
- 4-5 years of experience in Kubernetes, Docker/Containerization.
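The declarative provisioning files mentioned in the required skills can also be generated from code: `kubectl apply -f` accepts JSON as well as YAML, so a manifest can be built as a plain dict and serialized. The deployment name, image, and port below are illustrative values, not from any real cluster:

```python
import json

def deployment_manifest(name, image, replicas=2, port=8080):
    """Build a minimal Deployment; field names follow the apps/v1 schema.
    The selector's matchLabels must match the pod template's labels."""
    labels = {"app": name}
    return {
        "apiVersion": "apps/v1",
        "kind": "Deployment",
        "metadata": {"name": name, "labels": labels},
        "spec": {
            "replicas": replicas,
            "selector": {"matchLabels": labels},
            "template": {
                "metadata": {"labels": labels},
                "spec": {"containers": [{
                    "name": name,
                    "image": image,
                    "ports": [{"containerPort": port}],
                }]},
            },
        },
    }

manifest = deployment_manifest("orders-api",
                               "registry.example.com/orders-api:1.4.2")
print(json.dumps(manifest, indent=2))
```

Generating manifests this way (or via Helm/Kustomize, which do the same templating at a higher level) keeps the selector and template labels from drifting apart, a common hand-edited-YAML mistake.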
Posted 1 week ago