
1629 Cloud Platforms Jobs - Page 38

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 10.0 years

7 - 12 Lacs

Hyderabad, Bengaluru

Work from Office

Job Summary
Synechron is seeking an experienced Big Data Developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements
Required: Apache Spark (latest stable version), Scala (version 2.12 or higher), Python (version 3.6 or higher), big data tools and frameworks supporting Spark and Scala
Preferred: Cloud platforms such as AWS, Azure, or GCP for data deployment; data processing or orchestration tools like Kafka, Hadoop, or Airflow; data visualization tools for data insights

Overall Responsibilities
Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python
Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions
Mentor and guide junior team members on best practices in big data development
Evaluate and recommend new technologies and tools to improve data processing and quality
Stay informed about industry trends and emerging technologies relevant to big data and analytics
Ensure timely delivery of data projects with high standards of quality, performance, and security
Lead technical reviews and code reviews, and provide input to improve overall development standards and practices
Contribute to architecture design discussions and assist in establishing data governance standards

Technical Skills (By Category)
Programming languages - Essential: Spark (Scala), Python; Preferred: knowledge of Java or other JVM languages
Data management and databases: Experience with distributed data storage solutions (HDFS, S3, etc.); familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration
Cloud technologies - Preferred: cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment
Frameworks and libraries: Spark MLlib, Spark SQL, Spark Streaming; data processing libraries in Python (pandas, PySpark)
Development tools and methodologies: Version control (Git, Bitbucket); Agile methodologies (Scrum, Kanban); data pipeline orchestration tools (Apache Airflow, NiFi)
Security and compliance: Understanding of data security best practices and data privacy regulations

Experience Requirements
5 to 10 years of hands-on experience in big data development and architecture
Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python
Demonstrated ability to lead technical projects and mentor team members
Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders
Track record of delivering scalable, efficient, and secure data solutions in complex environments

Day-to-Day Activities
Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python
Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions
Lead code reviews, mentor junior team members, and enforce coding standards
Participate in architecture design and recommend best practices in big data development
Monitor data workflow performance and troubleshoot issues to ensure data quality and reliability
Stay updated on industry trends and evaluate new tools and frameworks for potential implementation
Document technical designs, data flows, and implementation procedures
Contribute to continuous improvement initiatives to optimize data processing workflows

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
Relevant certifications in cloud platforms, big data, or programming languages are advantageous
Continuous learning on innovative data technologies and frameworks

Professional Competencies
Strong analytical and problem-solving skills with a focus on scalable data solutions
Leadership qualities with the ability to guide and mentor team members
Excellent communication skills to articulate technical concepts to diverse audiences
Ability to work collaboratively in cross-functional teams and fast-paced environments
Adaptability to evolving technologies and industry trends
Strong organizational skills for managing multiple projects and priorities
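As a rough illustration of the day-to-day pipeline work this posting describes, here is a minimal PySpark batch-aggregation sketch. It is not part of the job description; the paths and column names are hypothetical placeholders.

```python
# Illustrative sketch only: a simple batch pipeline of the kind described above.
# Paths and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

# Read raw events (hypothetical location and schema)
orders = spark.read.parquet("s3a://raw-zone/orders/")

# Basic cleansing and a daily revenue aggregate per region
daily_revenue = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .withColumn("order_date", F.to_date("order_ts"))
    .groupBy("region", "order_date")
    .agg(F.sum("amount").alias("revenue"), F.count("*").alias("order_count"))
)

# Write a partitioned, curated dataset for downstream analytics
daily_revenue.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://curated-zone/daily_revenue/"
)

spark.stop()
```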

Posted 1 month ago

Apply

5.0 - 10.0 years

14 - 18 Lacs

Chennai, Bengaluru

Work from Office

Job Summary
Synechron is seeking a skilled and experienced Finacle Developer to join our dynamic team. This role involves designing, developing, and customizing Finacle Core Banking solutions, with a focus on architecture, client interfacing, and process optimization. The ideal candidate will collaborate closely with clients and internal teams to deliver high-quality banking solutions that meet specific business requirements, ensuring robustness, scalability, and security. This position plays a critical role in enhancing our core banking systems, supporting digital transformation initiatives, and driving innovation within the banking domain.

Software
Required software proficiency: Finacle 11 Core Banking product (latest versions preferred); SQL & PL/SQL (Oracle Database); Unix/Linux shell scripting; Finacle scripting language (if applicable); Connect 24 / Finacle Integrator; Jasper Reports & MRT reporting tools; reporting tools (e.g., Reports Builder); development and version control tools (e.g., Git, SVN)
Preferred software skills: Finacle CRM and Admin modules (FINFADM/SSOADM); cloud platforms (e.g., AWS, Azure); containerization and microservices (e.g., Docker); CI/CD tools (e.g., Jenkins)

Overall Responsibilities
Design, develop, and customize Finacle core banking modules in accordance with client requirements and architecture standards
Handle interfacing requirements, including APIs, Connect 24, and Finacle Integrator
Collaborate directly with clients and stakeholders for requirement gathering, customization, and issue resolution
Lead and implement customization flows such as new menu creation, reports (MRT & Jasper), batch jobs, and process automation (FI, EOD, BOD)
Debug, troubleshoot, and optimize Finacle processes and scripts
Ensure adherence to best practices for coding, security, documentation, and testing
Participate in Agile development cycles, including sprint planning, stand-ups, and reviews
Research and recommend improvements for system performance, security, and scalability
Contribute to infrastructure design and system architecture reviews

Technical Skills (By Category)
Programming languages - Required: SQL, PL/SQL, shell scripting (Unix/Linux); Preferred: Finacle scripting language, Java (experience in customization and extension development)
Databases/data management - Required: Oracle DB, proficiency in SQL, stored procedures, functions, triggers; Preferred: experience with NoSQL or other RDBMS
Cloud technologies - Preferred: basic knowledge of cloud platforms (AWS, Azure) for future scalability
Frameworks and libraries - Required: Finacle customization framework, reporting tools (Jasper, MRT); Preferred: microservices architecture, RESTful API development
Development tools and methodologies - Required: version control (Git/SVN), Agile/Scrum methodologies; Preferred: DevOps practices, CI/CD pipelines
Security protocols: Implement secure coding practices; familiarity with banking security standards and encryption protocols

Experience
5-12 years of experience working with the Finacle Core Banking product (Infosys/Edgeverve)
Proven expertise with Finacle 11, including customization, architecture, and product capabilities
Hands-on experience handling Finacle interfacing requirements (APIs, Connect 24, Finacle Integrator)
Direct client interaction experience, understanding client needs, and delivering solutions accordingly
Industry experience in banking or financial services preferred
Certifications such as Finacle Certification from Infosys are a plus

Day-to-Day Activities
Engage in requirement gathering and solution design discussions with clients
Develop and customize Finacle modules based on specifications
Build and maintain interfaces (APIs, Connect 24, Integrator processes)
Conduct testing, debugging, and performance tuning of Finacle processes
Collaborate with cross-functional teams during sprint cycles and project planning
Document technical specifications, system configurations, and customization details
Review code, ensure quality standards, and participate in peer code reviews
Provide timely support and issue resolution during and post-implementation
Stay updated on Finacle product enhancements and banking technology trends

Qualifications
Educational qualification: Bachelor's or Master's degree in Computer Science, Information Technology, or a related field; equivalent industry experience accepted
Certifications: Finacle Certification from Infosys (preferred)
Continuous learning and professional development in banking systems and related technologies

Professional Competencies
Strong analytical and problem-solving skills, with an ability to troubleshoot complex issues
Effective communication skills, capable of interacting with technical and non-technical stakeholders
Demonstrated ability to influence cross-functional teams and manage multiple priorities
Adaptability to evolving banking technology landscapes and process improvements
Innovative mindset, with a focus on delivering scalable, secure, and sustainable solutions
Time management and organization skills, with the ability to meet project deadlines

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice

Posted 1 month ago

Apply

5.0 - 9.0 years

16 - 20 Lacs

Pune

Work from Office

Job Summary
Synechron is seeking an experienced Site Reliability Engineer (SRE) / DevOps Engineer to lead the design, implementation, and management of reliable, scalable, and efficient infrastructure solutions. This role is pivotal in ensuring optimal performance, availability, and security of our applications and services through advanced automation, continuous deployment, and proactive monitoring. The ideal candidate will collaborate closely with development, operations, and security teams to foster a culture of continuous improvement and technological innovation.

Software
Required skills: Proficiency with cloud platforms such as AWS, GCP, or Azure; expertise with container orchestration tools like Kubernetes and Docker; experience with Infrastructure as Code (IaC) tools such as Terraform or CloudFormation; hands-on experience with CI/CD pipelines using Jenkins, GitLab CI, or similar; strong scripting skills in Python, Bash, or similar languages
Preferred skills: Familiarity with monitoring and logging tools like Prometheus, Grafana, and the ELK stack; knowledge of configuration management tools such as Ansible, Chef, or Puppet; experience implementing security best practices in cloud environments; understanding of microservices architecture and service mesh frameworks like Istio or Linkerd

Overall Responsibilities
Lead the development, deployment, and maintenance of scalable, resilient infrastructure solutions.
Automate routine tasks and processes to improve efficiency and reduce manual intervention.
Implement and refine monitoring, alerting, and incident response strategies to maintain high system availability.
Collaborate with software development teams to integrate DevOps best practices into product development cycles.
Guide and mentor team members on emerging technologies and industry best practices.
Ensure compliance with security standards and manage risk through security controls and assessments.
Stay abreast of the latest advancements in SRE, cloud computing, and automation technologies to recommend innovative solutions aligned with organizational goals.

Technical Skills (By Category)
Cloud technologies - Essential: AWS, GCP, or Azure (both infrastructure management and deployment); Preferred: multi-cloud management, cloud cost optimization
Containers and orchestration - Essential: Docker, Kubernetes; Preferred: service mesh frameworks like Istio, Linkerd
Automation and Infrastructure as Code - Essential: Terraform, CloudFormation, or similar; Preferred: Ansible, SaltStack
Monitoring and logging - Essential: Prometheus, Grafana, ELK stack; Preferred: Datadog, New Relic, Splunk
Security and compliance: Knowledge of identity and access management (IAM), encryption, vulnerability management
Development and scripting - Essential: Python, Bash scripting; Preferred: Go, PowerShell

Experience
5-9 years of experience in software engineering, systems administration, or DevOps/SRE roles.
Proven track record in designing and deploying large-scale, high-availability systems.
Hands-on experience with cloud infrastructure automation and container orchestration.
Past roles leading incident management, performance tuning, and security enhancements.
Experience working with cross-functional teams using Agile methodologies.
Bonus: Experience with emerging technologies like blockchain, IoT, or AI integrations.

Day-to-Day Activities
Architect, deploy, and maintain cloud infrastructure and containerized environments.
Develop automation scripts and frameworks to streamline deployment and operations.
Monitor system health, analyze logs, and troubleshoot issues proactively.
Conduct capacity planning and performance tuning.
Collaborate with development teams to integrate new features into production with zero downtime.
Participate in incident response, post-mortem analysis, and continuous improvement initiatives.
Document procedures, guidelines, and best practices for the team.
Stay updated on evolving SRE technologies and industry trends, applying them to enhance our infrastructure.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Certifications in cloud platforms (AWS Certified Solutions Architect, Azure DevOps Engineer, Google Professional Cloud Engineer) are preferred.
Additional certifications in Kubernetes, Terraform, or security are advantageous.

Professional Competencies
Strong analytical and problem-solving abilities.
Excellent collaboration and communication skills.
Leadership qualities with an ability to mentor junior team members.
Ability to work under pressure and manage multiple priorities.
Commitment to best practices around automation, security, and reliability.
Eagerness to learn emerging technologies and adapt to evolving workflows.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice
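For a sense of the monitoring and automation scripting this role calls for, below is a minimal sketch that exposes a custom health metric for Prometheus (one of the tools listed) to scrape. It is illustrative only; the checked endpoint and port are hypothetical, and the prometheus_client and requests packages are assumed to be available.

```python
# Illustrative sketch only: exposing a custom health metric for Prometheus to scrape.
# The checked URL and port are hypothetical; prometheus_client and requests are assumed available.
import time
import requests
from prometheus_client import Gauge, start_http_server

SERVICE_URL = "http://localhost:8080/healthz"  # hypothetical endpoint
service_up = Gauge("service_up", "1 if the service health check passed, else 0")

def probe() -> int:
    try:
        return 1 if requests.get(SERVICE_URL, timeout=3).status_code == 200 else 0
    except requests.RequestException:
        return 0

if __name__ == "__main__":
    start_http_server(9100)  # metrics served at :9100/metrics
    while True:
        service_up.set(probe())
        time.sleep(15)
```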

Posted 1 month ago

Apply

12.0 - 17.0 years

20 - 25 Lacs

Pune

Work from Office

Job Summary
Synechron is seeking an experienced Scrum Master with a strong background in DevOps to lead our Other Technologies team. This role is essential in managing and prioritizing development initiatives that align with the company's business objectives while staying abreast of industry advancements. The Scrum Master will collaborate with cross-functional teams to drive innovation, solve complex problems, and ensure the delivery of high-quality solutions for our clients.

Software
Required software skills: Proficiency in DevOps tools and practices; advanced knowledge of cloud computing platforms (AWS, Azure, GCP); familiarity with AI/ML frameworks and tools; experience with blockchain technologies and IoT platforms
Preferred software skills: Knowledge of Agile/Scrum project management tools (e.g., JIRA, Trello); experience with CI/CD pipelines and automation tools

Overall Responsibilities
Lead the Other Technologies team in delivering high-quality solutions, ensuring alignment with business goals.
Develop and implement policies and procedures to support the effective use of Other Technologies.
Collaborate with cross-functional teams to foster innovation and drive problem-solving initiatives.
Maintain up-to-date knowledge of industry trends in Other Technologies.

Technical Skills (By Category)
Programming languages - Required: familiarity with languages used in cloud computing and AI/ML (e.g., Python, Java, JavaScript); Preferred: experience with blockchain-specific languages (e.g., Solidity)
Databases/data management - Essential: understanding of database management in cloud environments
Cloud technologies - Essential: strong expertise in cloud platforms like AWS, Azure, and GCP
Frameworks and libraries - Essential: proficiency in AI/ML frameworks like TensorFlow or PyTorch
Development tools and methodologies - Required: deep understanding of Agile and Scrum methodologies
Security protocols - Preferred: knowledge of security practices in cloud and IoT environments

Experience
12+ years of experience in Other Technologies development.
Strong track record in leading technology projects and delivering business value.
Experience in mentoring and guiding technical teams.

Day-to-Day Activities
Lead and manage Other Technologies projects to ensure timely delivery of deliverables.
Communicate with stakeholders to gather requirements and present insights.
Mentor and guide team members to maintain high-quality work standards.
Stay informed on industry trends and integrate relevant advancements.

Qualifications
Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
Professional certifications in relevant Other Technologies are a plus.

Professional Competencies
Strong leadership and mentorship skills to guide teams effectively.
Excellent communication and interpersonal skills for stakeholder management.
Ability to work in a fast-paced, dynamic environment with a collaborative mindset.
Customer-focused and solution-oriented approach to problem-solving.

SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative, Same Difference, is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from across diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more. All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
Candidate Application Notice

Posted 1 month ago

Apply

0.0 years

12 - 17 Lacs

Pune

Work from Office

Job Title: GCP DevOps Engineer, AS
Location: Pune, India

Role Description
Technology plays a critical role in Deutsche Bank's transformation. Technology, Data and Innovation (TDI) has been established as one technology division for the bank, driving an integrated IT, data, and security agenda across the bank. The function is responsible for implementing the bank's technology strategy, focused on strengthening engineering expertise, introducing an agile delivery model, reducing administrative overheads, de-coupling assets within our IT estate for faster, cheaper deployment, and modernizing the bank's IT infrastructure with long-term investments while benefiting from cloud computing. As part of TDI, we provide a GCP-based data platform for multiple teams within the bank. Our aim is to help minimize data replication and duplication throughout the enterprise by providing infrastructure for ingestion, storage, transformation, analysis, machine learning, and visualization that is secure, scalable, performant, compliant by design, and meets functional requirements for the exploitation of data. As part of this role, we are seeking a highly motivated and experienced GCP DevOps Engineer to join our team. You will play a critical part in designing, developing, and maintaining a robust data platform.

What we'll offer you
100% reimbursement under the childcare assistance benefit (gender neutral)
Sponsorship for industry-relevant certifications and education
Accident and term life insurance

Your key responsibilities
Design, build, and maintain scalable, secure, and high-availability infrastructure on the GCP cloud platform
Manage infrastructure as code using tools like Terraform
Develop and optimize CI/CD pipelines using GitHub Actions to ensure smooth, efficient, and reliable data workload releases
Deliver high-quality software and be passionate about software engineering
Automate repetitive tasks and processes to improve efficiency and reduce errors; develop and maintain scripts for infrastructure automation, monitoring, and deployments
Implement and enforce security best practices across the infrastructure and deployment processes

Your skills and experience
Proficiency in cloud platforms such as Google Cloud (preferred), AWS, or Azure - must
Proficiency in Infrastructure as Code (Terraform)
Experience with CI/CD tools - would be a plus
GitHub Actions CI/CD experience - would be a plus
Exposure to delivering good-quality code within enterprise-scale development
Experience with software development processes, models, lifecycles, and methodologies
Experience in software development and scripting in at least one language (Java, Python, Bash) - would be a plus

How we'll support you
About us and our teams
Please visit our company website for further information: https://www.db.com/company/company.htm
We strive for a culture in which we are empowered to excel together every day. This includes acting responsibly, thinking commercially, taking initiative, and working collaboratively. Together we share and celebrate the successes of our people. Together we are Deutsche Bank Group. We welcome applications from all people and promote a positive, fair, and inclusive work environment.
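As one hedged example of the "automate repetitive tasks" and "enforce security best practices" items above, here is a small compliance-check sketch. It assumes the google-cloud-storage client library and application-default credentials, neither of which the posting names explicitly; the project ID and label key are hypothetical.

```python
# Illustrative sketch only: flag GCS buckets missing a required governance label.
# Assumes google-cloud-storage is installed and application-default credentials are configured;
# the project id and required label are hypothetical.
from google.cloud import storage

REQUIRED_LABEL = "data-owner"  # hypothetical governance label

def find_non_compliant_buckets(project_id: str) -> list[str]:
    client = storage.Client(project=project_id)
    missing = []
    for bucket in client.list_buckets():
        labels = bucket.labels or {}
        if REQUIRED_LABEL not in labels:
            missing.append(bucket.name)
    return missing

if __name__ == "__main__":
    for name in find_non_compliant_buckets("my-gcp-project"):
        print(f"Bucket missing '{REQUIRED_LABEL}' label: {name}")
```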

Posted 1 month ago

Apply

18.0 - 20.0 years

11 - 15 Lacs

Thiruvananthapuram

Work from Office

We are seeking a highly skilled Technical Architect with over 15+ years of experience in software development, including 7+ years in system architecture. The ideal candidate should have expertise in Java technologies, cloud platforms, modern architectural patterns, Red Hat OpenShift, Red Hat AMQ (ActiveMQ), and databases like Oracle and PostgreSQL. Experience with accessibility standards like AODA or WCAG 2.0 is also crucial. KEY RESPONSIBILITIES Architect and design large-scale, high-performance systems with a focus on scalability, maintainability, and security. Lead the design and implementation of cloud-native solutions, with a primary focus on Microsoft Azure and Red Hat OpenShift. Implement microservices architecture and ensure secure, performant, and scalable API integrations. Design and manage CI/CD pipelines to streamline and automate deployment processes. Implement messaging solutions using Red Hat AMQ (ActiveMQ) for asynchronous communication between services. Ensure compliance with security best practices, including OWASP guidelines, across all development stages. Utilize Red Hat OpenShift for containerization and orchestration of microservices-based applications. Collaborate with database teams to design, manage, and optimize Oracle and PostgreSQL databases. Ensure that all designs and implementations meet accessibility standards such as AODA or WCAG 2.0. Provide technical leadership and mentoring to development teams, enforcing modern development practices and architectural standards. TECHNICAL SKILLS Programming Languages: Strong expertise in Java, Spring Framework (Spring Boot, Spring Security, Spring MVC), Hibernate, and JPA. Experience with frontend frameworks, particularly Angular, TypeScript, and JavaScript. Cloud & DevOps: Expertise in Microsoft Azure services such as Azure App Services, Azure Functions, Azure DevOps, and Azure VMs. Proficiency in Red Hat OpenShift for managing and deploying containerized applications. Proficiency in CI/CD pipeline tools like Jenkins, GitLab CI, or Azure DevOps. Experience with Infrastructure as Code (IaC) using Terraform or Azure Resource Manager (ARM) templates. Strong experience in container orchestration with Docker and Kubernetes, particularly on Red Hat OpenShift. Messaging & Integration: Expertise in Red Hat AMQ (ActiveMQ) for building reliable and scalable messaging systems. Experience with enterprise integration patterns and designing event-driven architectures using messaging brokers. Microservices & API Design: Extensive experience designing microservices architecture and implementing APIs using RESTful services, GraphQL, or gRPC. Experience with API Gateway services and service mesh technologies such as Istio. Database Technologies: Expertise in Oracle and PostgreSQL databases, including database design, performance tuning, and query optimization. Security: Knowledge of security best practices, including OAuth2, JWT, SAML, and RBAC. Experience with OWASP Top 10 security guidelines and encryption standards. Accessibility Standards: Ensure compliance with AODA (Accessibility for Ontarians with Disabilities Act) or WCAG 2.0 (Web Content Accessibility Guidelines) to create inclusive and accessible applications. Monitoring & Logging: Experience with monitoring and logging tools like Splunk, ELK (Elasticsearch, Logstash, Kibana), Prometheus, or Grafana for system performance and troubleshooting. ADDITIONAL QUALIFICATIONS: Experience working with Agile/Scrum methodologies and leading cross-functional teams. 
Strong communication and leadership skills, with the ability to mentor and guide development teams. Azure, Red Hat OpenShift, or Red Hat AMQ certifications are a plus.

Posted 1 month ago

Apply

3.0 - 8.0 years

9 - 19 Lacs

Coimbatore

Hybrid

Generative AI Engineer (3 to 5 years)
We are looking for a Generative AI Engineer with 3 to 5 years of hands-on experience in Retrieval-Augmented Generation (RAG), Agentic AI, and data pipelines. The ideal candidate will have real-time experience in developing and deploying AI-powered solutions, working with advanced language models, and optimizing AI workflows for production environments.

Responsibilities
Implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality.
Develop and deploy Agentic AI systems capable of autonomous decision-making and task execution.
Build and manage data pipelines for processing, transforming, and feeding structured/unstructured data into AI models.
Ensure scalability, performance, and security of AI-driven solutions in production environments.
Collaborate with cross-functional teams, including data engineers, software developers, and product managers.
Conduct experiments and evaluations to improve AI system accuracy and efficiency.
Stay updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

Skills & Experience
Hands-on experience with RAG architectures, including vector databases (e.g., Pinecone, ChromaDB, Weaviate, OpenSearch, FAISS).
Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks.
Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow.
Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes).
Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration into applications.
Strong understanding of vector search, embedding models, and hybrid retrieval techniques.
Experience with optimizing inference and serving AI models in real-time production systems.

Good to have
Experience with multi-modal AI (text, image, audio) and LLM fine-tuning.
Familiarity with privacy-preserving AI techniques and responsible AI frameworks.
Understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.

Python/Gen AI Developer
Experience: 5 to 8 years
Location: Coimbatore/Remote
Notice Period: Immediate joiners are preferred

Responsibilities
Design, develop, and fine-tune Large Language Models (LLMs) for various in-house applications.
Implement and optimize Retrieval-Augmented Generation (RAG) techniques to enhance AI response quality.
Develop and deploy Agentic AI systems capable of autonomous decision-making and task execution.
Build and manage data pipelines for processing, transforming, and feeding structured/unstructured data into AI models.
Ensure scalability, performance, and security of AI-driven solutions in production environments.
Collaborate with cross-functional teams, including data engineers, software developers, and product managers.
Conduct experiments and evaluations to improve AI system accuracy and efficiency.
Stay updated with the latest advancements in AI/ML research, open-source models, and industry best practices.

Skills & Experience
Strong experience in LLM fine-tuning using frameworks like Hugging Face, DeepSpeed, or LoRA/PEFT.
Hands-on experience with RAG architectures, including vector databases (e.g., Pinecone, ChromaDB, Weaviate, OpenSearch, FAISS).
Experience in building AI agents using LangChain, LangGraph, CrewAI, AutoGPT, or similar frameworks.
Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow.
Experience with Python web frameworks such as FastAPI, Django, or Flask.
Experience in designing and managing data pipelines using tools like Apache Airflow, Kafka, or Spark.
Knowledge of cloud platforms (AWS/GCP/Azure) and containerization technologies (Docker, Kubernetes).
Familiarity with LLM APIs (OpenAI, Anthropic, Mistral, Cohere, Llama, etc.) and their integration into applications.
Strong understanding of vector search, embedding models, and hybrid retrieval techniques.
Experience with optimizing inference and serving AI models in real-time production systems.

Good to have
Experience with multi-modal AI (text, image, audio).
Familiarity with privacy-preserving AI techniques and responsible AI frameworks.
Understanding of MLOps best practices, including model versioning, monitoring, and deployment automation.
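To make the RAG requirement concrete, here is a minimal retrieval sketch using FAISS, one of the vector stores both postings name. The embed() function is a hypothetical stand-in for whichever embedding model is actually used, and no specific LLM API is assumed.

```python
# Illustrative RAG retrieval sketch only. FAISS is one of the vector stores named in the posting;
# embed() is a hypothetical stand-in for whatever embedding model is actually used.
import numpy as np
import faiss

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder: return one vector per text from your embedding model of choice."""
    rng = np.random.default_rng(0)          # deterministic dummy vectors for the sketch
    return rng.random((len(texts), 384), dtype=np.float32)

documents = [
    "Refunds are processed within 5 business days.",
    "Premium support is available 24/7 for enterprise plans.",
    "Invoices can be downloaded from the billing dashboard.",
]

# Build the index once
doc_vectors = embed(documents)
index = faiss.IndexFlatL2(doc_vectors.shape[1])
index.add(doc_vectors)

# Retrieve the chunks most relevant to a user question
question = "How long do refunds take?"
_, neighbor_ids = index.search(embed([question]), 2)
context = "\n".join(documents[i] for i in neighbor_ids[0])

# The retrieved context would then be placed into the LLM prompt
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)
```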

Posted 1 month ago

Apply

8.0 - 13.0 years

25 - 35 Lacs

Bengaluru

Remote

Hands-on code architecture; Robotic Process Automation; Metaverse systems; e-commerce systems; IoT; solution architecture definition, design, implementation, and supervision; design patterns; APIs; databases; Python; Java; DevOps; Agile; Telco 2.0. Contact: Milan, 7021504388.
Required candidate profile: Experience in Telco/ICT industry solution architecture and design skills covering microservices, monolithic, and event-driven architectures; cloud services and concepts; strategy and business design; scalable application architecture; OSS/BSS.

Posted 1 month ago

Apply

5.0 - 8.0 years

11 - 16 Lacs

Pune

Work from Office

Role Description
Our agile development team is looking for an experienced Java-based middle-tier developer to help build our data integration layer utilizing the latest tools and technologies. In this critical role you will become part of a motivated and talented team operating within a creative environment. You should have a passion for writing and designing server-side, cutting-edge applications that push the boundaries of what is possible and exists within the bank today.

Your key responsibilities - Your Role, What You'll Do
As a Java Microservices engineer you will be responsible for designing, developing, and maintaining scalable microservices using Java and Spring Boot. You will collaborate with cross-functional teams to deliver features and enhancements on time, ensuring code quality and supporting the overall business requirements.

Key Responsibilities:
Develop and maintain scalable and reliable microservices using Java, Spring Boot, and related technologies.
Implement RESTful APIs and support integrations with other systems.
Collaborate with various stakeholders (QA, DevOps, PO, and Architects) to ensure the business requirements are met.
Participate in code reviews, troubleshooting, and mentoring junior members.

Your skills and experience - Skills You'll Need
Must have: Overall experience of 5+ years with hands-on coding/engineering skills, extensively in Java technologies and microservices. Strong understanding of microservices architecture, patterns, and practices. Proficiency in Spring Boot, Spring Cloud, and development of REST APIs.
Desirable skills that will help you excel: Prior experience working in an Agile/Scrum environment. Good understanding of containerization (Docker/Kubernetes), databases (SQL & NoSQL), and build tools (Maven/Gradle). Knowledge of architecture and design principles, algorithms and data structures, and UI. Exposure to cloud platforms is a plus (preferably GCP). Knowledge of Kafka, RabbitMQ, etc., would be a plus. Strong problem-solving and communication skills. Working knowledge of Git, Jenkins, CI/CD, Gradle, DevOps, and SRE techniques.

Educational Qualifications
Bachelor's degree in Computer Science/Engineering or relevant technology and science
Technology certifications from any industry-leading cloud providers

Posted 1 month ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Gurugram, Bengaluru

Hybrid

We are seeking an experienced Python API Developer with 5-8 years of experience to join our engineering team. In this role, you will lead the design and development of secure, scalable APIs and microservices that power mission-critical applications. You'll work closely with product managers, architects, and cross-functional teams to deliver high-performance backend solutions.

Key Responsibilities:
Design, build, and maintain scalable and secure RESTful APIs and microservices.
Develop and maintain integration workflows with third-party platforms, cloud services, and internal systems.
Collaborate with product managers, architects, and developers to understand business and technical requirements.
Optimize backend systems for performance and scalability in high-throughput environments.
Implement and maintain API security protocols such as OAuth 2.0 and JWT.
Write reusable, testable, and efficient Python code following industry best practices.
Lead code reviews and provide mentorship to junior developers.
Troubleshoot and resolve bugs and production issues in a timely manner.
Maintain technical documentation for all API interfaces and backend processes.

Required Skills & Experience:
5-8 years of strong hands-on experience with Python (Django, Flask, or FastAPI).
Proven expertise in designing and building RESTful APIs and microservices.
Familiarity with API documentation tools and specifications like Swagger/OpenAPI.
Solid knowledge of authentication mechanisms including OAuth 2.0 and JWT.
Experience integrating third-party services such as CRMs, payment gateways, and cloud services.
Proficiency in relational and non-relational databases like PostgreSQL, MySQL, and MongoDB.
Exposure to messaging and queuing tools like RabbitMQ, Kafka, or Celery is a plus.
Experience with containerization and deployment tools such as Docker and CI/CD pipelines.
Familiarity with cloud platforms (AWS, Azure, or GCP).
Understanding of Agile development and DevOps best practices.
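As a small, hedged illustration of the API security items above (OAuth 2.0 and JWT with FastAPI, all named in the posting), here is a sketch of a JWT-protected endpoint. The secret key, claim names, and route are hypothetical, and the PyJWT package is assumed.

```python
# Illustrative sketch only: a JWT-protected FastAPI endpoint of the kind described above.
# Assumes the fastapi and PyJWT packages; the secret key, claim names, and route are hypothetical.
import jwt  # PyJWT
from fastapi import Depends, FastAPI, HTTPException, status
from fastapi.security import OAuth2PasswordBearer

SECRET_KEY = "change-me"          # hypothetical; load from a secrets manager in practice
oauth2_scheme = OAuth2PasswordBearer(tokenUrl="token")
app = FastAPI()

def current_user(token: str = Depends(oauth2_scheme)) -> str:
    try:
        claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        return claims["sub"]
    except (jwt.PyJWTError, KeyError):
        raise HTTPException(status_code=status.HTTP_401_UNAUTHORIZED, detail="Invalid token")

@app.get("/api/v1/orders")
def list_orders(user: str = Depends(current_user)):
    # In a real service this would query the database for the caller's orders.
    return {"user": user, "orders": []}
```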

Posted 1 month ago

Apply

3.0 - 5.0 years

5 - 7 Lacs

Hyderabad

Work from Office

Skills (Must have):
3+ years of DevOps experience.
Expertise in Kubernetes, Docker, and CI/CD tools (Jenkins, GitLab CI).
Hands-on experience with configuration management tools like Ansible, Puppet, or Chef.
Strong knowledge of cloud platforms (AWS, Azure, or GCP).
Proficiency in scripting (Bash, Python).
Good troubleshooting, analytical, and communication skills.
Willingness to explore frontend tech (ReactJS, NodeJS, Angular) is a plus.

Skills (Good to have):
Experience with Helm charts and service meshes (Istio, Linkerd).
Experience with monitoring and logging solutions (Prometheus, Grafana, ELK).
Experience with security best practices for cloud and container environments.
Contributions to open-source projects or a strong personal portfolio.

Role & Responsibility:
Manage and optimize Kubernetes clusters, including deployments, scaling, and troubleshooting.
Develop and maintain Docker images and containers, ensuring security best practices.
Design, implement, and maintain cloud-based infrastructure (AWS, Azure, or GCP) using Infrastructure-as-Code (IaC) principles (e.g., Terraform).
Monitor and troubleshoot infrastructure and application performance, proactively identifying and resolving issues.
Contribute to the development and maintenance of internal tools and automation scripts.

Qualification:
B.Tech/B.E./M.E./M.Tech in Computer Science or equivalent.

Additional Information:
We offer a competitive salary and excellent benefits that are above industry standard. Please submit your resume in the standard 1-page or 2-page format. Colleagues interested in internal mobility, please contact your HRBP in confidence.
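To illustrate the Kubernetes-plus-scripting combination this posting asks for, here is a minimal sketch that lists unhealthy pods. It assumes the official kubernetes Python client and a kubeconfig with cluster access, which the posting does not specify.

```python
# Illustrative sketch only: list pods that are not in a healthy phase across all namespaces.
# Assumes the official kubernetes Python client and a local kubeconfig with cluster access.
from kubernetes import client, config

def unhealthy_pods() -> list[tuple[str, str, str]]:
    config.load_kube_config()                 # or config.load_incluster_config() inside a pod
    v1 = client.CoreV1Api()
    problems = []
    for pod in v1.list_pod_for_all_namespaces(watch=False).items:
        phase = pod.status.phase
        if phase not in ("Running", "Succeeded"):
            problems.append((pod.metadata.namespace, pod.metadata.name, phase))
    return problems

if __name__ == "__main__":
    for namespace, name, phase in unhealthy_pods():
        print(f"{namespace}/{name}: {phase}")
```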

Posted 1 month ago

Apply

3.0 - 8.0 years

3 - 8 Lacs

Bengaluru, Karnataka, India

Remote

We are seeking India's top-tier Data Architects to join an elite community of professionals solving complex AI challenges. If you have deep expertise in designing, building, and optimizing data pipelines and platforms, this is your opportunity to collaborate with industry leaders and shape the future of intelligent systems.

What's in It for You
Pay above market standards
Flexible contract durations (2-12 months)
Remote-first with optional onsite roles
Join a high-impact community of AI infrastructure experts

Responsibilities
Design and architect enterprise-scale data platforms integrating diverse data sources and tools
Build real-time and batch data pipelines to support analytics and machine learning workloads
Define and enforce data governance strategies to ensure security, integrity, and compliance
Optimize pipelines for performance, scalability, and cost efficiency in cloud environments
Implement real-time streaming solutions (Kafka, AWS Kinesis, Apache Flink)
Adopt and promote DevOps/DataOps best practices

Required Skills
Proven experience designing scalable, distributed data systems
Proficiency in Python, Scala, or Java
Expertise in Apache Spark, Hadoop, Flink, Kafka, and cloud platforms (AWS, Azure, GCP)
Strong knowledge of data modeling, governance, and warehousing (Snowflake, Redshift, BigQuery)
Familiarity with compliance standards (GDPR, HIPAA)
Hands-on experience with CI/CD and infrastructure tooling (Terraform, CloudFormation, Airflow, Kubernetes)
Experience with monitoring and optimization tools (Prometheus, Grafana)

Nice to Have
Experience with graph databases, ML pipeline integration, real-time analytics, or IoT
Contributions to open-source data engineering communities
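As a hedged sketch of the real-time streaming work described above, here is a minimal Kafka consumer. The kafka-python package is an assumption (the posting names Kafka but not a client library), and the topic, broker, and event fields are hypothetical.

```python
# Illustrative sketch only: consuming a real-time event stream of the kind described above.
# Uses the kafka-python package as an assumption; topic, broker, and event fields are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "clickstream-events",                      # hypothetical topic
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # Downstream, events like this would be validated, enriched, and loaded to the warehouse.
    print(event.get("user_id"), event.get("page"))
```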

Posted 1 month ago

Apply

6.0 - 11.0 years

9 - 13 Lacs

Mohali

Work from Office

We are seeking a motivated AI/ML and Python Developer with 6+ years of experience to join our team. The ideal candidate will have a strong foundation in AI/ML technologies and Python programming, and a passion for solving complex problems.

Key Responsibilities:
Develop, deploy, and maintain AI/ML models and solutions.
Integrate and optimize Large Language Models (LLMs) and Generative AI (Gen AI) technologies.
Design and implement AI agents for automation and decision-making tasks.
Collaborate with cross-functional teams to deliver scalable AI-driven solutions.

Technical Requirements:
Proficiency in Python and AI/ML libraries (e.g., TensorFlow, PyTorch, Scikit-learn).
Experience with Gen AI, LLM integration (e.g., OpenAI GPT, Hugging Face), and AI agent development.
Familiarity with data preprocessing, model training, and evaluation techniques.
Knowledge of REST APIs, cloud platforms (e.g., AWS, GCP), and version control (e.g., Git).
Understanding of NLP, computer vision, or reinforcement learning is a plus.

Preferred Qualifications:
Hands-on experience with Gen AI tools and frameworks.
Must have a good command of LLM and NLP techniques.
Proven ability to work on end-to-end AI/ML projects.
Strong problem-solving and communication skills.
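For candidates gauging the "data preprocessing, model training, and evaluation" expectation, here is a compact scikit-learn sketch (scikit-learn is named in the posting). The dataset is synthetic and the model choice is illustrative only.

```python
# Illustrative sketch only: preprocessing, training, and evaluating a model with scikit-learn.
# The dataset is synthetic; the model and split are placeholders for a real project setup.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1_000))
model.fit(X_train, y_train)

print(classification_report(y_test, model.predict(X_test)))
```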

Posted 1 month ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Senior Machine Learning Engineer - NLP/Python We are seeking a highly skilled Senior Machine Learning Engineer to join our team and contribute to cutting-edge projects. As a key member of our data science team, you will leverage your expertise in machine learning, natural language processing, and data engineering to develop innovative solutions that drive business value. Key Responsibilities Machine learning model development is a primary responsibility, involving the design, development, and deployment of advanced machine learning models, with a focus on natural language processing tasks such as text classification, sentiment analysis, and language generation. Data engineering skills will be applied to extract, transform, and load (ETL) data from various sources, ensuring data quality and consistency. NLP techniques will be applied, including state-of-the-art language models and text processing, to solve complex problems. Python programming language and popular ML frameworks like PyTorch or TensorFlow will be utilized to build efficient and scalable models. Cloud platforms (GCP, AWS) and containerization technologies (Docker) will be leveraged for efficient deployment and management of ML models. Relational databases (Postgres, MySQL) will be used to store, manage, and query large datasets. Knowledge graph development will be contributed to, and the latest advancements in the field will be kept updated through research and publications. Technical Skill Requirements Proficiency in Python programming language is mandatory. Deep understanding of machine learning algorithms and techniques is essential. Expertise in NLP , including language models and text processing, is required. Familiarity with ML frameworks like PyTorch or TensorFlow is necessary. Experience with cloud platforms (GCP, AWS) and containerization (Docker) is a must. Knowledge of relational databases (Postgres, MySQL) is required. General Requirements Employment Type: This is a Full-Time, Permanent position. Notice Period: Immediate - 15 Days. Strong problem-solving and analytical skills are essential. Excellent communication and collaboration abilities are vital. Ability to work independently and as part of a team is expected.
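As a rough illustration of the sentiment-analysis work mentioned above, here is a short sketch using the Hugging Face transformers pipeline. The transformers library is an assumption (the posting names PyTorch/TensorFlow, on which it runs), and the first call downloads a default pretrained model.

```python
# Illustrative sketch only: sentiment analysis with a pretrained language model.
# The transformers library is an assumption; the first call downloads a default pretrained model.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")

reviews = [
    "The onboarding flow was smooth and the support team was quick to respond.",
    "The dashboard keeps timing out and losing my filters.",
]

for review, result in zip(reviews, classifier(reviews)):
    print(f"{result['label']} ({result['score']:.2f}): {review}")
```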

Posted 1 month ago

Apply

4.0 - 5.0 years

4 - 5 Lacs

Bengaluru, Karnataka, India

On-site

DevOps Engineer - Python We're looking for a highly motivated and skilled DevOps Engineer with strong Python programming skills to join our team in Bengaluru. You'll be responsible for automating and streamlining our software development and deployment processes, ensuring efficient and reliable software delivery. Key Responsibilities CI/CD pipeline development and maintenance using tools like Jenkins, GitLab CI/CD, or Azure DevOps. Infrastructure automation for provisioning and management using tools such as Terraform, Ansible, or Puppet. Python script development and maintenance for various DevOps tasks, including: Automating deployments. Monitoring and alerting. Data analysis and reporting. System administration tasks. Troubleshooting and resolution of infrastructure and deployment issues. Collaboration with development teams to improve software delivery processes. Staying abreast of the latest DevOps tools, technologies, and best practices . Technical Skill Requirements Strong Python programming skills are mandatory. Experience with CI/CD pipelines and tools (Jenkins, GitLab CI/CD, Azure DevOps) is required. Experience with infrastructure automation tools (Terraform, Ansible, Puppet) is essential. Experience with cloud platforms (AWS, Azure, GCP) is a must. Experience with containerization technologies (Docker, Kubernetes) is required. Experience with scripting languages (Bash, Shell) is necessary. Strong understanding of Linux/Unix systems is vital. Excellent problem-solving and analytical skills are crucial. Strong communication and collaboration skills are essential. General Requirements Employment Type: This is a Full-Time, Permanent position. Desired Skills (Optional) Experience with monitoring and logging tools (Prometheus, Grafana, ELK stack). Experience with configuration management tools (Chef, SaltStack). Experience with security best practices and tools .
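To illustrate the "monitoring and alerting" scripting task listed above, here is a minimal health-check sketch. The service URL and alert webhook are hypothetical placeholders, and the requests package is assumed.

```python
# Illustrative sketch only: a small monitoring/alerting script of the kind listed above.
# The service URL and alert webhook are hypothetical placeholders; requests is assumed available.
import sys
import requests

SERVICE_URL = "https://example.internal/healthz"   # hypothetical
ALERT_WEBHOOK = "https://example.internal/alerts"  # hypothetical chat/incident webhook

def check() -> bool:
    try:
        return requests.get(SERVICE_URL, timeout=5).status_code == 200
    except requests.RequestException:
        return False

if __name__ == "__main__":
    if not check():
        requests.post(ALERT_WEBHOOK, json={"text": f"Health check failed for {SERVICE_URL}"}, timeout=5)
        sys.exit(1)
    print("Service healthy")
```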

Posted 1 month ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Navi Mumbai, Maharashtra, India

On-site

Deployment & Integration Engineer - Cloud Platform
A highly skilled Deployment and Integration Engineer is sought, specializing in MCPTX/IMS within cloud platform environments. This role requires extensive expertise in telecommunications operations and deployment, particularly with IMS applications. You will be responsible for overseeing deployments, ensuring the health and maintenance of critical network elements, and providing advanced troubleshooting.

Key Responsibilities
Cloud platform operation and deployment skills are essential for overseeing IMS application operations and deployment in cloud platform environments, including public cloud.
IMS application maintenance abilities are critical for regular health checks and maintenance of IMS applications and their associated servers.
Backup and restore capabilities for IMS applications are required.
Issue escalation proficiency is necessary for escalating unresolved issues to vendors or customers, providing clear issue descriptions, logs, findings, and troubleshooting steps.
Mission-critical network element management skills are vital for effectively managing these components.
New feature implementation and system software and application patch upgrades as per plan are key responsibilities.
Level-2 troubleshooting expertise is required for issues observed or escalated.
Method of Procedure (MOP) creation is an important aspect of the role.
Coordination with technical support teams will be provided, ensuring effective follow-up.

Technical Skill Requirements
Deep knowledge of the IMS network and the functionality of products like PCRF, SBC, CSCF, and TAS is mandatory.
Familiarity with IMS, VoLTE call flows, the LTE network, and interfaces is required.
Strong IP network knowledge is essential.
Good troubleshooting skills are a must.
Familiarity with all troubleshooting tools is necessary.
Experience with IMS testing is required.
Excellent hands-on experience with Linux container-based architecture on commercial platforms such as Red Hat OpenShift, OpenStack, or VMware is desired.
Any experience with public cloud platforms like AWS/Azure is an added plus.
Good communication skills, willingness to travel, and familiarity with customer networks are vital.

Education
A Bachelor's or Master's degree in Computer Science or Electronics & Communication is required.

General Requirements
Employment Type: This is a Full-Time, Permanent position.
Notice Period: Immediate - 15 days.
Should be flexible about working in shifts and customer time zones.
Excellent written and verbal communication skills with the ability to interact effectively at various organizational levels.

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Backend Developer - Node.js/Express.js As a Backend Developer, you'll collaborate with the development team to build and maintain scalable, secure, and high-performing backend systems for SaaS products. You'll play a key role in designing and implementing microservices architectures, integrating databases, and ensuring seamless operation of cloud-based applications. Key Responsibilities Backend solution design, development, and maintenance using modern frameworks and tools are core to this role. Microservices architecture creation, management, and optimization, ensuring efficient communication between services, are essential. RESTful API development and integration to support frontend and third-party systems are required. Database schema design and implementation, along with performance optimization for SQL and NoSQL databases, are critical. Support for deployment processes by aligning backend development with CI/CD pipeline requirements is necessary. Security best practices implementation, including authentication, authorization, and data protection, is a key responsibility. Collaboration with frontend developers to ensure seamless integration of backend services is expected. Application performance, scalability, and reliability monitoring and enhancement are ongoing tasks. Staying up-to-date with emerging technologies and industry trends to improve backend practices is crucial for continuous improvement. Technical Skill Requirements Proven experience as a Backend Developer with expertise in modern frameworks such as Node.js, Express.js, or Django is mandatory. Expertise in .NET frameworks, including development in C++ and C# for high-performance databases, is required. Strong proficiency in building and consuming RESTful APIs is essential. Expertise in database design and management with both SQL (e.g., PostgreSQL, MS SQL Server) and NoSQL (e.g., MongoDB, Cassandra) databases is a must. Hands-on experience with microservices architecture and containerization tools like Docker and Kubernetes is necessary. Strong understanding of cloud platforms like Microsoft Azure, AWS, or Google Cloud for deployment, monitoring, and management is required. Proficiency in implementing security best practices (e.g., OAuth, JWT, encryption techniques) is essential. Experience with CI/CD pipelines and tools such as Jenkins, GitHub Actions, or Azure DevOps is required. Familiarity with Agile methodologies and participation in sprint planning and reviews is necessary. Education A Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field is required. General Requirements Employment Type: This is a Full Time, Permanent position. Strong problem-solving and analytical skills are essential. Exceptional organizational skills with the ability to manage multiple priorities are required. Adaptability to evolving technologies and industry trends is expected. Excellent collaboration and communication skills to work effectively in cross-functional teams are vital. Ability to thrive in self-organizing teams with a focus on transparency and trust is crucial. Preferred Skills Experience with time-series databases like TimescaleDB or InfluxDB is a plus. Experience with monitoring solutions like Datadog or Splunk is beneficial. Experience with real-time data processing frameworks like Kafka or RabbitMQ is desired. Familiarity with serverless architecture and tools like Azure or AWS Lambda Functions is a plus. Expertise in Java backend services and microservices is an asset. 
Hands-on experience with business intelligence tools like Grafana or Kibana for monitoring and visualization is preferred. Knowledge of API management platforms like Kong or Apigee is beneficial. Experience with integrating AI/ML models into backend systems is a plus. Familiarity with MLOps pipelines and managing AI/ML workloads is desirable. Understanding of iPaaS (Integration Platforms as a Service) and related technologies is beneficial.

Posted 1 month ago

Apply

6.0 - 11.0 years

6 - 11 Lacs

Bengaluru, Karnataka, India

On-site

Senior Full Stack Developer

You will work with our development and production support team on efforts to support growth initiatives. This Full Stack Developer will play a pivotal role in developing leading SaaS products for energy power market intelligence, providing insights and data-driven solutions to stakeholders in the energy industry. With expertise in microservices, APIs, AI/ML integration, and cloud technologies, you will be instrumental in driving the evolution of our software applications. Your ability to work across the stack, from frontend to backend, and to integrate cutting-edge technologies will be essential in delivering innovative, high-quality solutions. Collaborating with cross-functional teams, including data scientists, UX designers, and DevOps engineers, you will play a key role in revamping legacy applications, building microservices, and infusing AI/ML capabilities into web user interfaces.

Key Responsibilities
Design and develop scalable, responsive web applications using modern frontend and backend technologies.
Lead the revamp of legacy applications, ensuring modernization, improved performance, and an enhanced user experience.
Create and manage a microservices architecture, including API design, development, and integration (see the sketch after this listing).
Collaborate with data scientists to integrate AI/ML capabilities into web user interfaces for predictive analytics and data-driven insights.
Integrate SQL and NoSQL databases, optimizing data storage and retrieval for efficient application performance.
Deploy, monitor, and manage applications on Azure or other relevant cloud providers.
Implement DevOps practices for CI/CD and automated testing.
Collaborate with UX/UI designers to create visually appealing and user-friendly interfaces.
Troubleshoot and debug issues, identifying root causes and implementing effective solutions.
Stay updated with industry trends and emerging technologies to drive innovation in application development.
Onboard and train junior development staff.
Perform other duties as assigned.

Technical Skill Requirements
Strong proficiency in web development technologies, including HTML, CSS, JavaScript, and modern frontend frameworks (e.g., React, Angular, Vue), is mandatory.
Deep knowledge of .NET languages and servers is required.
Experience in designing and implementing microservices architecture, RESTful APIs, and integration patterns is essential.
Proficiency in both SQL and NoSQL databases and their integration into applications is a must.
Experience administering and integrating with cloud platforms such as Azure, AWS, or Google Cloud Platform is required.
Proven experience in successfully revamping and modernizing legacy applications is essential.
Experience with Agile methodologies and participation in sprint planning and review meetings is necessary.
Familiarity with integrating AI/ML capabilities into web user interfaces for data visualization and insights is a plus.
Knowledge of DevOps practices, CI/CD pipelines, and automated testing is required.
Familiarity with MLOps methodologies and best practices is beneficial.
Mandatory skills: JavaScript, React & Redux, Node, Express & .NET, SQL databases, CI/CD, MobX, RTK (Redux Toolkit), and Zustand.

Education
A Bachelor's or Master's degree in Computer Science, Software Engineering, or a related field is required.

General Requirements
Employment Type: Permanent, full-time position.
Ability to deliver in self-organizing teams with high levels of trust and transparency.
Strong problem-solving skills and the ability to troubleshoot and debug complex issues.
Strong organizational skills and the ability to manage multiple projects and priorities.
Adaptability to evolving technology landscapes and industry trends.
Excellent collaboration and verbal/written communication skills, with the ability to work effectively in cross-functional teams.

Preferred Skills (Assets)
Experience integrating with Snowflake, Databricks, or other data lake technologies.
Experience utilizing, extending, and integrating business intelligence tools like Grafana.
Experience integrating with iPaaS (Integration Platform as a Service) offerings.
Experience leveraging graphs for modeling data and organizing metadata and semantics.

Performance Metrics
Success in this position will be measured against the following groups of metrics:
Development Performance: Sustain an acceptable pace of development according to sprint plans and backlog items, as directed by the product manager and product owner.
Deliverables Quality: The quality of deliverables, including proper documentation for handover to other groups, is critical for success and scalability.
Application Performance: Measure responsiveness and efficiency in real-world usage.
Legacy Application Modernization: Track the progress of legacy application revamping and performance improvements.
AI/ML Integration Success: Monitor the successful integration of AI/ML capabilities into web interfaces.
Microservices Architecture: Measure the efficiency and scalability of the microservices architecture.
DevOps Efficiency: Measure the effectiveness of CI/CD pipelines and automated testing in the development process.
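To make the microservice/API work concrete, here is a minimal, hedged sketch of a read-only pricing endpoint. It is written in Python with Flask purely as an illustration (the listing's stack is .NET/Node/React); the service route, field names, and sample data are hypothetical placeholders, not part of the actual product.

```python
# Illustrative only: a tiny read-only REST endpoint of the kind a power-market
# microservice might expose. Hypothetical route and data; not the real product API.
from flask import Flask, abort, jsonify

app = Flask(__name__)

# Hypothetical in-memory stand-in for a day-ahead price store.
PRICES = {
    "2024-01-01": {"region": "ERCOT", "avg_price_mwh": 42.7},
    "2024-01-02": {"region": "ERCOT", "avg_price_mwh": 39.1},
}

@app.route("/api/v1/prices/<date>")
def get_price(date: str):
    """Return the price record for a given date, or 404 if it is unknown."""
    record = PRICES.get(date)
    if record is None:
        abort(404, description=f"No price data for {date}")
    return jsonify({"date": date, **record})

if __name__ == "__main__":
    # Local experimentation only; a real microservice would sit behind a
    # production WSGI server and an API gateway.
    app.run(port=8080)
```

A production version would back this with a database, add authentication at the gateway, and version the contract so other microservices can integrate against it safely.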

Posted 1 month ago

Apply

8.0 - 10.0 years

8 - 10 Lacs

Noida, Uttar Pradesh, India

On-site

Deployment & Integration Engineer - Cloud Platform

A highly skilled Deployment and Integration Engineer is sought, specializing in MCPTT/IMS within cloud platform environments. This role requires extensive expertise in telecommunications operations and deployment, particularly with IMS applications. You will be responsible for overseeing deployments, ensuring the health and maintenance of critical network elements, and providing advanced troubleshooting.

Key Responsibilities
Oversee IMS application operations and deployment in cloud platform environments, including public cloud.
Perform regular health checks and maintenance of IMS applications and their associated servers (a minimal health-check sketch follows this listing).
Back up and restore IMS applications.
Escalate unresolved issues to vendors or customers, providing clear issue descriptions, logs, findings, and troubleshooting steps.
Manage mission-critical network elements effectively.
Implement new features and upgrade system software and application patches as per plan.
Provide Level-2 troubleshooting for issues observed or escalated.
Create Methods of Procedure (MOPs).
Coordinate with technical support teams and ensure effective follow-up.

Technical Skill Requirements
Deep knowledge of IMS networks and the functionality of products such as PCRF, SBC, CSCF, and TAS is mandatory.
Familiarity with IMS, VoLTE call flows, the LTE network, and its interfaces is required.
Strong IP network knowledge is essential.
Excellent troubleshooting skills are a must, along with familiarity with common troubleshooting tools.
Experience with IMS testing is required.
Excellent hands-on experience with Linux container-based architecture on commercial platforms such as Red Hat OpenShift, OpenStack, or VMware is desired.
Experience with public cloud platforms such as AWS or Azure is an added plus.

Education
A Bachelor's or Master's degree in Computer Science or Electronics & Communication is required.

General Requirements
Employment Type: Full-Time, Permanent position.
Excellent communication skills, with the ability to interact effectively at various organizational levels.
Willingness to travel and experience with customer networks is required.
Flexibility to work in shifts and in customer time zones is essential.
Notice Period: Immediate - 15 days.
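As a rough illustration of the routine health checks mentioned above, the sketch below probes TCP reachability of a few network elements from Python's standard library. The hostnames and ports are hypothetical placeholders; real IMS elements would be monitored with the vendor's own tooling and deeper protocol-level checks.

```python
# Illustrative only: a minimal reachability check for hypothetical IMS elements.
import socket

ELEMENTS = {
    "cscf-01.example.net": 5060,   # hypothetical CSCF SIP port
    "tas-01.example.net": 5060,    # hypothetical TAS SIP port
    "pcrf-01.example.net": 3868,   # hypothetical PCRF Diameter port
}

def is_reachable(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within the timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for host, port in ELEMENTS.items():
        status = "UP" if is_reachable(host, port) else "UNREACHABLE"
        print(f"{host}:{port} -> {status}")
```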

Posted 1 month ago

Apply

4.0 - 7.0 years

4 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Maimsd Technology is seeking a highly skilled and innovative Senior Machine Learning Engineer to join our team. This role is crucial for contributing to cutting-edge projects by leveraging expertise in machine learning, natural language processing, and data engineering. The Senior ML Engineer will be responsible for designing, developing, and deploying advanced ML models, particularly focusing on NLP tasks, to deliver innovative solutions that drive significant business value.

Key Responsibilities
Machine Learning Model Development: Design, develop, and deploy advanced machine learning models. Specialize in natural language processing (NLP) tasks such as text classification, sentiment analysis, and language generation (a minimal sketch follows this listing).
Data Engineering: Perform Extract, Transform, and Load (ETL) operations to process data from various sources. Ensure high data quality and consistency throughout the data pipeline.
NLP Techniques: Apply state-of-the-art NLP techniques, including advanced language models and robust text processing methodologies, to solve complex business problems.
Python and ML Frameworks: Utilize Python extensively for development. Work proficiently with popular machine learning frameworks such as PyTorch or TensorFlow to build efficient and scalable models.
Cloud and Containerization: Leverage leading cloud platforms (GCP, AWS) for model deployment and management. Utilize containerization technologies (Docker) for efficient deployment and management of ML models.
Database Management: Work with relational databases (Postgres, MySQL) to store, manage, and query large datasets effectively.
Knowledge Graphs and ML Publications: Contribute to the development and enhancement of knowledge graphs. Stay updated with the latest advancements in machine learning and data science through continuous research and engagement with publications.

Qualifications
Experience: 4-7 years of hands-on experience in data science, with a strong and proven focus on machine learning and natural language processing.
Technical Skills: Proficiency in Python. Deep understanding of machine learning algorithms and techniques. Expertise in NLP, including various language models and text processing methods. Familiarity with prominent ML frameworks like PyTorch or TensorFlow. Experience with cloud platforms (GCP, AWS) and containerization (Docker). Knowledge of relational databases (Postgres, MySQL).
Soft Skills: Strong problem-solving and analytical skills. Excellent communication and collaboration abilities. Ability to work both independently and effectively as part of a team.

Benefits
Competitive salary and comprehensive benefits package.
Significant opportunities for professional growth and development within a cutting-edge field.
A collaborative and supportive work environment that fosters innovation.
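To illustrate the text-classification work the listing describes, here is a deliberately tiny bag-of-words classifier in PyTorch (one of the frameworks named above). Everything in it, including the toy corpus, labels, and hyperparameters, is a hypothetical placeholder; a real system would use proper tokenization, pretrained language models, and far more data.

```python
# Illustrative only: a toy sentiment classifier on a bag-of-words representation.
import torch
import torch.nn as nn

# Hypothetical toy corpus: (text, label) pairs; 1 = positive, 0 = negative.
corpus = [("great product", 1), ("terrible service", 0),
          ("loved it", 1), ("awful experience", 0)]

# Build a tiny vocabulary from the corpus.
vocab = {w: i for i, w in enumerate(sorted({w for text, _ in corpus for w in text.split()}))}

def vectorize(text: str) -> torch.Tensor:
    """Map a string to a bag-of-words count vector over the toy vocabulary."""
    vec = torch.zeros(len(vocab))
    for word in text.split():
        if word in vocab:
            vec[vocab[word]] += 1.0
    return vec

X = torch.stack([vectorize(text) for text, _ in corpus])
y = torch.tensor([label for _, label in corpus])

# A single linear layer is the simplest possible classifier over these features.
model = nn.Linear(len(vocab), 2)
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

for _ in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

# Predicted class index for an unseen phrase.
print(model(vectorize("great service")).argmax().item())
```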

Posted 1 month ago

Apply

1.0 - 4.0 years

1 - 4 Lacs

Hyderabad, Telangana, India

On-site

Responsibilities
Design and deploy system solutions to meet business requirements.
Monitor and maintain system performance, security, and reliability (a minimal monitoring sketch follows this listing).
Troubleshoot and resolve system issues, ensuring minimal downtime.

Required Qualifications
2+ years of experience in system engineering.
Strong proficiency with operating systems (Linux, Windows, etc.) and cloud platforms (AWS, Azure, etc.).
Experience with system monitoring, configuration management, and security practices.
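A small, hedged example of the kind of system monitoring mentioned above: the sketch below samples CPU, memory, and root-filesystem utilization with the third-party psutil library (assumed installed via `pip install psutil`). The alert thresholds are hypothetical.

```python
# Illustrative only: basic host utilization check with hypothetical thresholds.
import psutil

THRESHOLDS = {"cpu": 85.0, "memory": 90.0, "disk": 90.0}  # percent, hypothetical

def collect_metrics() -> dict:
    """Sample current CPU, memory, and root-filesystem utilization in percent."""
    return {
        "cpu": psutil.cpu_percent(interval=1),
        "memory": psutil.virtual_memory().percent,
        "disk": psutil.disk_usage("/").percent,
    }

if __name__ == "__main__":
    for name, value in collect_metrics().items():
        flag = "ALERT" if value >= THRESHOLDS[name] else "ok"
        print(f"{name}: {value:.1f}% [{flag}]")
```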

Posted 1 month ago

Apply

8.0 - 10.0 years

15 - 18 Lacs

Guntur, Hyderabad

Work from Office

Job Title: Senior DevOps Engineer
Location: Hyderabad/Guntur
Contract Type: Full-time
Time Zone: Willing to work in UK time zone

Job Description
We are seeking a highly skilled Senior DevOps Engineer to join our team and lead the design, implementation, and maintenance of scalable DevOps infrastructure. The ideal candidate will have strong experience in CI/CD pipelines, cloud platforms (AWS/Azure/GCP), containerization (Docker, Kubernetes), infrastructure as code (Terraform/CloudFormation), and system automation. You will work closely with development, QA, and operations teams to ensure efficient delivery, high availability, and performance of applications.

Key Responsibilities
Design and manage CI/CD pipelines for automated deployment
Implement and maintain infrastructure using IaC tools
Monitor and improve system performance, availability, and scalability
Troubleshoot and resolve infrastructure and deployment issues
Ensure security, compliance, and cost optimization in cloud environments (a small scripting sketch follows this listing)
Mentor junior DevOps engineers and collaborate across teams

Requirements
8+ years of DevOps experience
Proficiency with tools like Jenkins, Git, Docker, Kubernetes, Terraform, and Ansible
Hands-on experience with AWS, Azure, or GCP
Strong scripting skills (Bash, Python, or similar)
Solid understanding of networking, security, and system administration
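As a hedged illustration of the scripting and cost-optimization work above, the sketch below uses boto3 to flag running EC2 instances that lack an "Owner" tag, a common cost-hygiene check. It assumes the boto3 library and valid AWS credentials; the tag key and region are hypothetical conventions, not from the posting.

```python
# Illustrative only: flag untagged running EC2 instances as a crude cost-hygiene check.
import boto3

def untagged_running_instances(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running instances in the region that lack an 'Owner' tag."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    flagged = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {tag["Key"] for tag in instance.get("Tags", [])}
                if "Owner" not in tags:
                    flagged.append(instance["InstanceId"])
    return flagged

if __name__ == "__main__":
    for instance_id in untagged_running_instances():
        print(f"Untagged running instance: {instance_id}")
```

In practice a check like this would run on a schedule (for example from a CI job or a Lambda function) and feed an alerting or tagging-enforcement workflow rather than printing to stdout.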

Posted 1 month ago

Apply

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Assistant Vice President, Program Manager for Data Engineering and Cloud Platform Programs

We are seeking a highly experienced and dynamic Delivery Lead/Program Manager to join our IT services organization. This senior leadership role will be responsible for managing the delivery of projects around data engineering and cloud platforms. The ideal candidate will have a proven track record in managing large and complex programs, along with exceptional leadership skills to manage clients, teams, and delivery governance, and to drive innovation.

Responsibilities
Plan, organize, and manage large-scale, multimillion-dollar programs from start to finish.
Define and enforce program delivery governance frameworks, best practices, and methodologies.
Act as the primary interface for clients, ensuring strong relationships and alignment with their strategy.
Oversee the delivery of data engineering and cloud platform projects, ensuring they are completed on time, within budget, and to the highest quality standards.
Implement best practices in project management methodologies such as Agile.
Build, mentor, and manage a high-performing team of IT professionals, including developers, engineers, analysts, and support staff.
Foster a collaborative environment that encourages continuous learning and development.
Monitor overall program progress and ensure alignment with organizational goals.
Drive innovation within the team by staying updated with the latest trends in technology, and encourage creative solutions to complex problems.
Collaborate with senior leadership to prioritize projects and allocate resources effectively.
Ensure proper availability of expertise for troubleshooting major issues.
Identify potential risks and develop mitigation strategies.

Qualifications we seek in you!
Minimum Qualifications
Experience in IT leadership roles, specifically in leading data engineering or cloud migration projects.
PMP, ITIL, or SAFe Agile certifications for delivery governance.
Demonstrated expertise in strategic planning and execution within complex organizational environments.
Strong financial management skills, including budgeting.

Required Skills
Proven experience in a senior leadership role within information technology.
Exceptional project management skills with a successful track record of delivering complex technology projects.
Extensive experience managing large-scale programs using tools like JIRA, Trello, and MS Project.
Strong technical expertise in data engineering, cloud platforms (such as AWS, Azure, GCP), system administration, and network management.

Why join Genpact
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply


0.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ready to shape the future of work? At Genpact, we don't just adapt to change - we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Assistant Vice President, Enterprise Architecture Consulting - AWS Delivery Lead

The Delivery Lead will be responsible for the successful execution of large-scale data transformation projects leveraging the AWS platform. This leadership role involves overseeing both legacy-to-AWS migrations and new implementations, ensuring high-quality delivery, innovation, and business value. The ideal candidate should have extensive experience in AWS program management, along with expertise in data engineering, cloud platforms, and analytics solutions. They will be responsible for client engagement, team leadership, delivery governance, and strategic innovation in AWS-based solutions.

Responsibilities
Lead end-to-end delivery of AWS data platform projects, including migrations from legacy systems and greenfield implementations (an illustrative PySpark sketch follows this listing).
Define and enforce delivery governance frameworks, best practices, and methodologies for AWS programs.
Act as the primary interface for clients, ensuring strong relationships and alignment with their data strategy.
Provide thought leadership on AWS and modern data architectures, guiding clients on best practices.
Build, mentor, and manage a high-performing team of AWS architects, data engineers, and analysts.
Drive team upskilling and certifications in AWS, data engineering, and analytics tools.
Foster a strong DevOps and Agile culture, ensuring efficient execution through CI/CD automation.
Stay ahead of emerging trends in AWS, cloud data engineering, and analytics to drive innovation.
Promote AI/ML, automation, and real-time analytics to enhance data platform capabilities.
Develop accelerators, reusable frameworks, and best practices for efficient AWS delivery.
Ensure data security, compliance, and regulatory adherence in AWS-based projects.
Implement performance monitoring, cost optimization, and disaster recovery strategies for AWS solutions.

Qualifications we seek in you!
Minimum Qualifications
Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred).
IT services experience, specifically in AWS and cloud-based data engineering.

Preferred Qualifications / Skills
Proven track record in managing large-scale AWS programs, including legacy data migrations and new implementations.
Deep understanding of data engineering, ETL, and cloud-native architectures.
Strong expertise in the AWS ecosystem, including Streams, Tasks, Data Sharing, and performance optimization.
Experience with other cloud platforms (Azure, GCP).
Proficiency in SQL, Python, Spark, and modern data processing frameworks.

Preferred Certifications
AWS Certified Solutions Architect.
Cloud certifications (Azure Data Engineer, Google Cloud Architect, or equivalent).
PMP, ITIL, or SAFe Agile certifications for delivery governance.

Why join Genpact
Be a transformation leader - work at the cutting edge of AI, automation, and digital innovation.
Make an impact - drive change for global enterprises and solve business challenges that matter.
Accelerate your career - get hands-on experience, mentorship, and continuous learning opportunities.
Work with the best - join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
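As a hedged illustration of the SQL/Python/Spark work a legacy-to-AWS migration typically involves, the sketch below reads a legacy CSV extract, applies a light cleanup, and writes partitioned Parquet. The bucket, paths, and column names are hypothetical placeholders; it assumes the pyspark package and, for s3a paths, Hadoop S3 support on the cluster.

```python
# Illustrative only: one small step of a hypothetical legacy-to-AWS data migration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("legacy-extract-to-parquet").getOrCreate()

# Hypothetical legacy extract landed in object storage.
orders = spark.read.csv(
    "s3a://example-bucket/legacy/orders.csv", header=True, inferSchema=True
)

cleaned = (
    orders
    .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))  # normalize dates
    .withColumn("amount", F.col("amount").cast("double"))             # enforce numeric type
    .dropDuplicates(["order_id"])                                     # basic de-duplication
)

# Partitioned Parquet is a common target layout for downstream analytics.
cleaned.write.mode("overwrite").partitionBy("order_date").parquet(
    "s3a://example-bucket/curated/orders/"
)

spark.stop()
```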

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies