5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald’s global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer II
Full-time
McDonald’s Office Location: Hyderabad
Global Grade: G3

Job Description: This opportunity is part of the Global Technology Infrastructure & Operations (GTIO) team, where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services, including Global Networking, Cloud, End User Computing, and IT Service Management. Our goal is to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer II role is part of the Cloud Infrastructure and Operations team in GTIO. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer II will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technologist to help shape the transformation of infrastructure and operations products and services across the entire McDonald’s environment.

Responsibilities & Accountabilities:
- Participate in the management, design, and solutioning of platform deployment and operational processes.
- Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
- Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
- Proactively identify opportunities for continuous improvement.
- Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that fit business and technology strategies.
- Develop and maintain infrastructure and tools that support the software development and deployment process.
- Automate the software development and deployment process.
- Monitor and troubleshoot the software delivery process.
- Work with software developers and operations engineers to improve the software delivery process.
- Stay up to date on the latest DevOps practices and technologies.
- Drive proofs of concept and conduct technical feasibility studies for business requirements.
- Provide internal and external customers with excellent, world-class service.
- Effectively communicate project health, risks, and issues to program partners, sponsors, and management teams.
- Resolve most conflicts between timeline, budget, and scope independently, escalating complex or consequential issues to senior management.
- Implement and support monitoring best practices.
- Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
- Work well in an agile environment.

Qualifications:
- Bachelor’s degree in computer science or a related field, or equivalent relevant experience.
- 5+ years of Information Technology experience at a large technology company, preferably in a platform team.
- 4+ years of hands-on experience building Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
- 3+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP.
- 3+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools.
- 3+ years of application development using agile methodology.
- Experience with observability tools such as Datadog and New Relic, and the open-source (o11y) observability ecosystem (Prometheus, Grafana, Jaeger).
- Hands-on knowledge of Infrastructure as Code and associated technologies (e.g., repos, pipelines, Terraform).
- Advanced knowledge of the AWS platform preferred; 3+ years of AWS/Kubernetes or other container-based technology experience.
- Experience with code quality, SAST, and DAST tools such as SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is a plus.
- Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, or Ruby (a small example sketch follows this posting).
- Self-starter, able to devise solutions to problems and see them through while coordinating with other teams.
- Knowledge of foundational cloud security principles.
- Excellent problem-solving and analytical skills.
- Strong communication and partnership skills.
- Any GCP certification; any Agile certification, preferably Scaled Agile.
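As an illustration of the scripting and monitoring-automation work this posting describes (this sketch is not part of the original listing), here is a minimal Python script that polls a set of service health endpoints and fails with a non-zero exit code if any check fails. The endpoint URLs and the service names are assumptions for the example.

```python
# Illustrative sketch only: a simple health-check script of the kind a Cloud
# DevOps engineer might run from a pipeline or cron job. Endpoint URLs below
# are hypothetical placeholders, not real services.
import sys
import urllib.error
import urllib.request

ENDPOINTS = {
    "orders-api": "https://example.internal/orders/health",  # assumed URL
    "menu-api": "https://example.internal/menu/health",      # assumed URL
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with an HTTP 2xx within the timeout."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            ok = 200 <= resp.status < 300
            print(f"[{'OK' if ok else 'FAIL'}] {name}: HTTP {resp.status}")
            return ok
    except (urllib.error.URLError, OSError) as exc:
        print(f"[FAIL] {name}: {exc}")
        return False

if __name__ == "__main__":
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    # A non-zero exit code lets a CI/CD stage or external monitor flag the failure.
    sys.exit(0 if all(results) else 1)
```

A script like this could be wired into a pipeline stage or a scheduled job so that failed checks surface as incidents rather than silent degradation.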
Posted 3 hours ago
130.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description

Associate Director, Technical Architect, Data and Analytics – Enterprise Data Enablement

THE OPPORTUNITY

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Lead an organization driven by digital technology and data-backed approaches that supports a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be among the leaders who have a passion for using data, analytics, and insights to drive decision-making, allowing us to tackle some of the world's greatest health threats.

Our Technology centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to our other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. Together, we leverage the strength of our team to collaborate globally, optimize connections, and share best practices across the Tech Centers.

Role Overview

As a Lead Technical Data and Analytics Architect with a primary focus on Enterprise Data Enablement and Data Governance, you will play a pivotal leadership role in shaping the future of our company's enterprise data enablement and governance initiatives. This position combines strategic technology leadership with hands-on technical expertise. You will support our Discover product line, which encompasses the Enterprise Data Marketplace, Data Catalog, and Enterprise Data Access Control products. This role is pivotal in understanding the current architecture, adoption patterns, and product strategy while helping to design the architecture for the next generation of Discover. You will create and implement strategic frameworks, ensure their adoption within product teams, and oversee the consistent management of technologies. You will work closely with the product team to establish and govern the future architecture, ensuring it evolves beyond traditional data products to include AI models, visualizations, insights assets, and more. You will play a key role in driving innovation, modularity, and scalability within the Discover ecosystem, aligning with the organization's strategic vision.

What Will You Do In The Role

Strategic Leadership:
- Develop and maintain a cohesive Data Enablement architecture vision, aligned with our company's business objectives and industry trends.
- Provide leadership to a team of product owners and engineers in our Discover product line, mentoring and guiding them to achieve collective goals and deliverables.
- Foster a collaborative environment where innovation and best practices thrive.

Integration and Innovation:
- Design and implement architectural solutions to enable seamless integration between the Enterprise Data Marketplace, Data Catalog, and Enterprise Data Access Control products.
- Enhance API usage and drive the transition to a microservice-based architecture for greater modularity and scalability.
- Support the integration of the Collibra and Immuta platforms with compute engines such as Glue, Trino/Starburst, and Databricks to optimize Discover's capabilities.

Technical Leadership and Collaboration:
- Collaborate with cross-functional teams, including engineering, product management, and other stakeholders, to align on architecture strategy and implementation.
- Partner with the product team to define roadmaps and ensure architectural alignment with the organization's goals.
- Act as a trusted advisor, providing technical leadership and driving best practices for architectural governance.

Governance and Security:
- Ensure all architectural designs adhere to organizational policies, data governance requirements, and security standards.
- Evolve data governance practices to accommodate diverse assets, including AI models and visualizations, alongside traditional data products.

Optimization and Future-Readiness:
- Identify opportunities for system optimization, modernization, and cost-efficiency.
- Lead initiatives to future-proof the architecture, supporting scalability for increasing demands across data products and advanced analytics.

Framework Development and Governance:
- Create capability and technology maps for Data Enablement and Governance, reference architectures, innovation trend maps, and architecture blueprints and patterns.
- Ensure the consistent application of frameworks across product teams.

Hands-on Contribution:
- Actively participate in technical problem-solving, proof-of-concept development, and implementation activities.
- Provide hands-on technical leadership to support your team and deliver high-value outcomes.

Cross-functional Collaboration:
- Partner with enterprise and product architects to ensure alignment and synergy across the organization.
- Engage with stakeholders to align architectural decisions with broader business goals.
- Collaborate with the internal Strategy and Architecture team's architecture lead and architects to ensure smooth integration of Data Enablement technologies with other Data and Analytics ecosystem products.

What Should You Have
- Hands-on experience with platforms such as Collibra, Immuta, and Databricks, and deep knowledge of data governance and access control frameworks.
- Strong understanding of architectural principles, API integration strategies, and microservice-based design.
- Proficiency in designing modular, scalable architectures that align with data product and data mesh principles.
- Expertise in supporting diverse asset types, including AI models, visualizations, and insights assets, within enterprise ecosystems.
- Knowledge of cloud platforms (AWS preferred) and containerization technologies (Docker, Kubernetes).
- Proven ability to align technical solutions with business objectives and strategic goals.
- Strong communication skills, with the ability to engage and influence technical and non-technical stakeholders.
- Exceptional problem-solving and analytical skills, with a focus on practical, future-ready solutions.
- Self-driven and adaptable, capable of managing multiple priorities in a fast-paced environment.

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who We Are

We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada, and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What We Look For

Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are intellectually curious, join us, and start making your impact today.

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):

Required Skills: Business Enterprise Architecture (BEA), Business Process Modeling, Data Modeling, Emerging Technologies, Requirements Management, Solution Architecture, Stakeholder Relationship Management, Strategic Planning, System Designs
Preferred Skills:
Job Posting End Date: 06/30/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Requisition ID: R345606
Posted 3 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About McDonald’s: One of the world’s largest employers with locations in more than 100 countries, McDonald’s Corporation has corporate opportunities in Hyderabad. Our global offices serve as dynamic innovation and operations hubs, designed to expand McDonald’s global talent base and in-house expertise. Our new office in Hyderabad will bring together knowledge across business, technology, analytics, and AI, accelerating our ability to deliver impactful solutions for the business and our customers across the globe.

Cloud Engineer III
Full-time
McDonald’s Office Location: Hyderabad
Global Grade: G4

Job Description: This opportunity is part of the Global Technology Infrastructure & Operations (GTIO) team, where our mission is to deliver modern and relevant technology that supports the way McDonald’s works. We provide outstanding foundational technology products and services, including Global Networking, Cloud, End User Computing, and IT Service Management. Our goal is to always provide an engaging, relevant, and simple experience for our customers.

The Cloud DevOps Engineer III role is part of the Cloud Infrastructure and Operations team in GTIO. The role reports to the Director of Cloud DevOps and is responsible for supporting, migrating, automating, and optimizing the software development and deployment process, specifically for Google Cloud. The Cloud DevOps Engineer III will work closely with software developers, cloud architects, operations engineers, and other stakeholders to ensure that the software delivery process is efficient, secure, and scalable. You will support the Corporate, Cloud Security, Cloud Platform, Digital, Data, Restaurant, and Market application and product teams by efficiently and optimally delivering DevOps standards and services. This is a great opportunity for an experienced technology leader to help shape the transformation of infrastructure and operations products and services across the entire McDonald’s environment.

Responsibilities & Accountabilities:
- Participate in the management, design, and solutioning of platform deployment and operational processes.
- Provide direction and guidance to vendors partnering on DevOps tools standardization and engineering support.
- Configure and deploy reusable pipeline templates for automated deployment of cloud infrastructure and code.
- Proactively identify opportunities for continuous improvement.
- Research, analyze, design, develop, and support high-quality automation workflows inside and outside the cloud platform that fit business and technology strategies.
- Develop and maintain infrastructure and tools that support the software development and deployment process.
- Automate the software development and deployment process.
- Monitor and troubleshoot the software delivery process.
- Work with software developers and operations engineers to improve the software delivery process.
- Stay up to date on the latest DevOps practices and technologies.
- Drive proofs of concept and conduct technical feasibility studies for business requirements.
- Provide internal and external customers with excellent, world-class service.
- Effectively communicate project health, risks, and issues to program partners, sponsors, and management teams.
- Resolve most conflicts between timeline, budget, and scope independently, escalating complex or consequential issues to senior management.
- Work well in an agile environment.
- Implement and support monitoring best practices.
- Respond to platform and operational incidents and effectively troubleshoot and resolve issues.
- Provide technical advice and support the growth of junior team members.

Qualifications:
- Bachelor’s degree in computer science or a related field, or equivalent relevant experience.
- 7+ years of Information Technology experience at a large technology company, preferably in a platform team.
- 6+ years of hands-on experience building Cloud DevOps pipelines for automating, building, and deploying microservice applications, APIs, and non-container artifacts.
- 5+ years working with cloud technologies, with good knowledge of IaaS and PaaS offerings in AWS and GCP.
- 5+ years with GitHub, Jenkins, GitHub Actions, ArgoCD, Helm charts, Harness, and Artifactory, or similar DevOps CI/CD tools.
- 3+ years of application development using agile methodology.
- Experience with observability tools such as Datadog and New Relic, and the open-source (o11y) observability ecosystem (Prometheus, Grafana, Jaeger).
- Hands-on knowledge of Infrastructure as Code and associated technologies (e.g., repos, pipelines, Terraform).
- Advanced knowledge of the AWS platform preferred; 3+ years of AWS/Kubernetes or other container-based technology experience.
- Experience with code quality, SAST, and DAST tools such as SonarQube/SonarCloud, Veracode, Checkmarx, and Snyk is a plus.
- Experience developing scripts or automating tasks using languages such as Bash, PowerShell, Python, Perl, or Ruby.
- Self-starter, able to devise solutions to problems and see them through while coordinating with other teams.
- Knowledge of foundational cloud security principles.
- Excellent problem-solving and analytical skills.
- Strong communication and partnership skills.
- Any GCP certification; any Agile certification, preferably Scaled Agile.
Posted 3 hours ago
6.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Role Description

Role Proficiency: Act creatively to develop applications and select appropriate technical options, optimizing application development, maintenance, and performance by employing design patterns and reusing proven solutions; account for others' developmental activities.

Outcomes:
- Interpret the application/feature/component design and develop it in accordance with specifications.
- Code, debug, test, document, and communicate product/component/feature development stages.
- Validate results with user representatives; integrate and commission the overall solution.
- Select appropriate technical options for development, such as reusing, improving, or reconfiguring existing components, or creating own solutions.
- Optimize efficiency, cost, and quality.
- Influence and improve customer satisfaction.
- Set FAST goals for self/team; provide feedback on FAST goals of team members.

Measures of Outcomes:
- Adherence to engineering processes and standards (coding standards)
- Adherence to project schedule/timelines
- Number of technical issues uncovered during project execution
- Number of defects in the code
- Number of defects post delivery
- Number of non-compliance issues
- On-time completion of mandatory compliance trainings

Outputs Expected:
Code:
- Code as per design; follow coding standards, templates, and checklists.
- Review code for team and peers.
Documentation:
- Create/review templates, checklists, guidelines, and standards for design/process/development.
- Create/review deliverable documents: design documentation, requirements, and test cases/results.
Configure:
- Define and govern the configuration management plan; ensure compliance from the team.
Test:
- Review and create unit test cases, scenarios, and execution.
- Review the test plan created by the testing team; provide clarifications to the testing team.
Domain Relevance:
- Advise software developers on design and development of features and components with a deep understanding of the business problem being addressed for the client.
- Learn more about the customer domain, identifying opportunities to provide valuable additions for customers.
- Complete relevant domain certifications.
Manage Project:
- Manage delivery of modules and/or manage user stories.
Manage Defects:
- Perform defect RCA and mitigation; identify defect trends and take proactive measures to improve quality.
Estimate:
- Create and provide input for effort estimation for projects.
Manage Knowledge:
- Consume and contribute to project-related documents, SharePoint libraries, and client universities.
- Review the reusable documents created by the team.
Release:
- Execute and monitor the release process.
Design:
- Contribute to creation of design (HLD, LLD, SAD)/architecture for applications, features, business components, and data models.
Interface with Customer:
- Clarify requirements and provide guidance to the development team.
- Present design options to customers; conduct product demos.
Manage Team:
- Set FAST goals and provide feedback.
- Understand aspirations of team members and provide guidance, opportunities, etc.
- Ensure the team is engaged in the project.
Certifications:
- Obtain relevant domain/technology certifications.

Skill Examples:
- Explain and communicate the design/development to the customer.
- Perform and evaluate test results against product specifications.
- Break down complex problems into logical components.
- Develop user interfaces and business software components; use data models.
- Estimate time and effort required for developing/debugging features/components.
- Perform and evaluate tests in the customer or target environment.
- Make quick decisions on technical and project-related challenges.
- Manage a team, mentor, and handle people-related issues in the team; maintain high motivation levels and positive dynamics in the team.
- Interface with other teams, designers, and other parallel practices.
- Set goals for self and team; provide feedback to team members.
- Create and articulate impactful technical presentations.
- Follow a high level of business etiquette in emails and other business communication.
- Drive conference calls with customers, addressing customer questions.
- Proactively ask for and offer help.
- Ability to work under pressure, determine dependencies and risks, facilitate planning, and handle multiple tasks.
- Build confidence with customers by meeting deliverables on time and with quality.
- Estimate time, effort, and resources required for developing/debugging features/components.
- Make appropriate utilization of software and hardware.
- Strong analytical and problem-solving abilities.

Knowledge Examples:
- Appropriate software programs/modules
- Functional and technical designing
- Programming languages: proficiency in multiple skill clusters
- DBMS
- Operating systems and software platforms
- Software Development Life Cycle
- Agile: Scrum or Kanban methods
- Integrated development environments (IDE)
- Rapid application development (RAD)
- Modelling technologies and languages
- Interface definition languages (IDL)
- Knowledge of the customer domain and deep understanding of the sub-domain where the problem is solved

Additional Comments: Senior Java Backend Microservices Software Engineer

Musts:
- Strong understanding of object-oriented and functional programming principles
- Experience with RESTful APIs
- Knowledge of microservices architecture and cloud platforms
- Familiarity with CI/CD pipelines, Docker, and Kubernetes
- Strong problem-solving skills and ability to work in an Agile environment
- Excellent communication and teamwork skills

Nices:
- 6+ years of experience, with at least 3+ in Kotlin
- Experience with backend development using Kotlin (Ktor, Spring Boot, or Micronaut)
- Proficiency in working with databases such as PostgreSQL, MySQL, or MongoDB
- Experience with GraphQL and WebSockets

Additional Musts:
- Experience with backend development in the Java ecosystem (either Java or Kotlin will do)

Additional Nices:
- Experience with TypeScript and NodeJS
- Experience with Kafka
- Experience with frontend development (e.g., React)
- Experience with Gradle
- Experience with GitLab CI
- Experience with OpenTelemetry

Skills: RESTful APIs, Java, Microservices, AWS
Posted 3 hours ago
4.0 years
0 Lacs
Kerala, India
Remote
About FriskaAi

FriskaAi is a powerful AI-enabled, EHR-agnostic platform designed to help healthcare providers adopt an evidence-based approach to care. Our technology addresses up to 80% of chronic diseases, including obesity and type 2 diabetes, enabling better patient outcomes.

📍 Location: Remote
💼 Job Type: Full-Time

Job Description

We are seeking a highly skilled Backend Developer to join our team. The ideal candidate will have expertise in Python and Django, with experience in SQL and working in a cloud-based environment on Microsoft Azure. You will be responsible for designing, developing, and optimizing backend systems that drive our healthcare platform and ensure seamless data flow and integration.

Key Responsibilities

Backend Development:
- Develop and maintain scalable backend services using Python and Django.
- Build and optimize RESTful APIs for seamless integration with frontend and third-party services.
- Implement efficient data processing and business logic to support platform functionality.

Database Management:
- Design and manage database schemas using Azure SQL or PostgreSQL.
- Write and optimize SQL queries, stored procedures, and functions.
- Ensure data integrity and security through proper indexing and constraints.

API Development & Integration:
- Develop secure and efficient RESTful APIs for frontend and external integrations.
- Ensure consistent and reliable data exchange between systems.
- Optimize API performance and scalability.

Cloud & Infrastructure:
- Deploy and manage backend applications on Azure App Service and Azure Functions.
- Set up and maintain CI/CD pipelines using Azure DevOps.
- Implement monitoring and logging using Azure Application Insights.

Microservices Architecture:
- Design and implement microservices to modularize backend components.
- Ensure smooth communication between services using messaging queues or REST APIs.
- Optimize microservices for scalability and fault tolerance.

Testing & Debugging:
- Write unit and integration tests using Pytest (see the example sketch after this posting).
- Debug and resolve production issues quickly and efficiently.
- Ensure code quality and reliability through regular code reviews.

Collaboration & Optimization:
- Work closely with frontend developers, product managers, and stakeholders.
- Conduct code reviews to maintain high-quality standards.
- Optimize database queries, API responses, and backend processes for maximum performance.

Qualifications

Education & Experience
🎓 Bachelor’s degree in Computer Science, Engineering, or a related field (or equivalent experience)
🔹 2–4 years of backend development experience

Technical Skills
✔ Proficiency in Python and Django
✔ Strong expertise in SQL (e.g., Azure SQL, PostgreSQL, MySQL)
✔ Experience with RESTful API design and development
✔ Familiarity with microservices architecture
✔ Hands-on experience with Azure services, including:
 • Azure App Service
 • Azure Functions
 • Azure Storage
 • Azure Key Vault
✔ Experience with CI/CD using Azure DevOps
✔ Proficiency with version control tools like Git
✔ Knowledge of containerization with Docker

Soft Skills
🔹 Strong problem-solving skills and attention to detail
🔹 Excellent communication and teamwork abilities
🔹 Ability to thrive in a fast-paced, agile environment

Preferred Skills (Nice to Have)
✔ Experience with Kubernetes (AKS) for container orchestration
✔ Knowledge of Redis for caching
✔ Experience with Celery for asynchronous task management
✔ Familiarity with GraphQL for data querying
✔ Understanding of infrastructure as code (IaC) using Terraform or Bicep

What We Offer
✅ Competitive salary & benefits package
✅ Opportunity to work on cutting-edge AI-driven solutions
✅ A collaborative and inclusive work environment
✅ Professional development & growth opportunities

🚀 If you’re passionate about backend development and eager to contribute to innovative healthcare solutions, we’d love to hear from you!
🔗 Apply now and be part of our mission to transform healthcare!
Posted 3 hours ago
0.0 years
0 Lacs
Vijay Nagar, Indore, Madhya Pradesh
On-site
Job Title: AWS DevOps Engineer Internship
Company: Inventurs Cube LLP
Location: Indore, Madhya Pradesh
Job Type: Full-time Internship
Duration: 1 to 3 months

Responsibilities:
- Assist in the design, implementation, and maintenance of AWS infrastructure using Infrastructure as Code (IaC) principles (e.g., CloudFormation, Terraform).
- Learn and apply CI/CD (Continuous Integration/Continuous Deployment) pipelines for automated software releases.
- Support the monitoring and logging of AWS services to ensure optimal performance and availability.
- Collaborate with development teams to understand application requirements and implement appropriate cloud solutions.
- Help troubleshoot and resolve infrastructure-related issues.
- Participate in the implementation and review of security best practices.
- Contribute to documentation of cloud architecture, configurations, and processes.
- Stay updated with the latest AWS services and DevOps trends.

What We're Looking For:
- Currently pursuing a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Basic understanding of cloud computing concepts, preferably AWS.
- Familiarity with at least one scripting language such as Python or Bash (see the example sketch after this posting).
- Knowledge of Linux/Unix operating systems.
- Eagerness to learn and a strong problem-solving aptitude.
- Excellent communication and teamwork skills.
- Ability to work independently and take initiative.

Bonus Points (Not Mandatory, but a Plus):
- Prior experience with AWS services (e.g., EC2, S3, VPC, IAM).
- Basic understanding of version control systems (e.g., Git).
- Exposure to containerization technologies (e.g., Docker, Kubernetes).
- Familiarity with CI/CD tools (e.g., Jenkins, GitLab CI, AWS CodePipeline).

What You'll Gain:
- Hands-on experience with industry-leading AWS cloud services and DevOps tools.
- Mentorship from experienced AWS DevOps engineers.
- Exposure to real-world projects and agile development methodologies.
- Opportunity to build a strong foundation for a career in cloud and DevOps.
- A dynamic and supportive work environment in Indore.
- Certificate of internship completion.

Job Types: Full-time, Fresher, Internship
Contract length: 3 months
Pay: ₹15,000.00 - ₹20,000.00 per month
Schedule: Day shift
Work Location: In person
Speak with the employer: +91 9685458368
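For context on the kind of basic AWS scripting an intern in this role might practice (this sketch is not part of the original posting), here is a minimal Python example using the boto3 SDK to list EC2 instances and their states. It assumes AWS credentials and a region are already configured (for example via `aws configure`), and the region shown is only an assumption.

```python
# Illustrative sketch only: list EC2 instances and their states with boto3.
# Assumes AWS credentials/region are configured in the environment.
import boto3

def list_instances(region_name: str = "ap-south-1") -> None:
    """Print instance ID, type, and state for every EC2 instance in the region."""
    ec2 = boto3.client("ec2", region_name=region_name)
    paginator = ec2.get_paginator("describe_instances")
    for page in paginator.paginate():
        for reservation in page["Reservations"]:
            for inst in reservation["Instances"]:
                print(inst["InstanceId"], inst["InstanceType"], inst["State"]["Name"])

if __name__ == "__main__":
    list_instances()
```

The same pattern (client, paginator, loop) applies to most read-only AWS API calls, which makes it a useful starting point for inventory and monitoring scripts.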
Posted 3 hours ago
15.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Alkami is a leading cloud-based digital banking solutions provider for financial institutions in the United States that helps clients to transform through retail and business banking, digital account opening and loan origination, payment fraud prevention, and data analytics and engagement solutions. Founded in 2009, we continue to be recognized for our intentional culture and tremendous growth (Best Place to Work in Fintech; Best & Brightest to Work For Nationally; and Comparably’s Best Company Culture, Best Career Growth, Best Engineering Team, and Best Places to Work in Dallas, among others). Through our bold investments in technology and people, we empower our clients to grow confidently, adapt quickly, and build thriving digital banking communities through tailored experiences for over 20M users.

Position Overview: We are seeking a highly experienced and visionary Principal Software Engineer – Backend to join our team. In this strategic role, you will serve as a technical leader and architect, responsible for designing and driving the development of scalable, secure, and high-performing backend systems that power our core platform and services. You will work closely with front-end teams to define clean, efficient APIs and enable seamless integration across client experiences. You will collaborate across teams to define technical direction, influence architecture, and ensure engineering excellence. This role is ideal for a hands-on engineer who enjoys solving complex problems, mentoring others, and shaping the future of large-scale distributed systems.

Key Responsibilities and Duties:
- Lead the design, architecture, and implementation of backend services and APIs that are scalable, maintainable, and secure.
- Drive the evolution of our technical stack, frameworks, and best practices to meet current and future business needs.
- Collaborate with cross-functional teams including product, architecture, platform, and infrastructure to deliver aligned and efficient solutions.
- Influence and contribute to long-term backend and platform strategies.
- Provide technical mentorship and guidance to other senior and mid-level engineers.
- Identify and address performance bottlenecks, technical debt, and architecture risks.
- Champion software quality through code reviews, automated testing, observability, and incident prevention.
- Contribute to internal documentation, design reviews, and engineering culture.
- Work closely with frontend and mobile teams to support Server-Driven UI (SDUI) architectures, enabling dynamic, backend-configurable user interfaces that reduce app rebuilds and accelerate time-to-market for UI changes, particularly in mobile environments (a small illustrative sketch follows this posting).

Required Qualifications:
- 15+ years of hands-on software engineering experience with a strong focus on backend development, and a Bachelor’s degree in Computer Science, Engineering, Statistics, Physics, Math, or a related field, or equivalent work experience.
- Proven expertise designing and building large-scale distributed systems and service-oriented architectures (SOA or microservices).
- Experience with cloud-native development (AWS, Azure, or GCP) and container orchestration (Docker, Kubernetes).
- Strong understanding of databases (SQL and NoSQL), message queues, caching strategies, and event-driven architectures.
- Experience designing and implementing event-driven architectures using technologies like Kafka, Pub/Sub, or similar for scalable, loosely coupled systems.
- Familiarity with API design (REST, GraphQL, gRPC), authentication, and authorization best practices.
- Deep knowledge of backend technologies such as Java, Golang, Node.js, .NET, or similar.
- Proficiency in modern software engineering practices: CI/CD, automated testing, observability, and infrastructure as code.
- Excellent communication skills and the ability to work effectively across teams and leadership levels.
- Ability to collaborate effectively with Global Capability Centers (GCCs) and distributed engineering teams across time zones to ensure alignment, knowledge sharing, and delivery consistency.

Alkami Technology is an Equal Opportunity Employer and Prohibits Discrimination and Harassment of Any Kind: Alkami is committed to the principle of equal employment opportunity for all employees and to providing employees with a work environment free of discrimination and harassment. All employment decisions at Alkami are based on business needs, job requirements, and individual qualifications, without regard to race, color, religion or belief, national, social or ethnic origin, sex (including pregnancy), age, physical, mental or sensory disability, HIV status, sexual orientation, gender identity and/or expression, marital, civil union or domestic partnership status, past or present military service, family medical history or genetic information, family or parental status, or any other status protected by the laws or regulations in the locations where we operate. Alkami will not tolerate discrimination or harassment based on any of these characteristics. Alkami encourages applicants of all ages.
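To illustrate the Server-Driven UI idea described in the posting above, where the backend returns a declarative description of a screen instead of the client hard-coding it, here is a minimal Python sketch. The component types and field names are assumptions for the example, not Alkami's actual schema.

```python
# Illustrative sketch only: a backend function returning a server-driven UI
# payload. Component types and fields are hypothetical, not a real schema.
import json

def account_summary_screen(balance: float, alerts_enabled: bool) -> dict:
    """Build a declarative screen description a mobile client could render."""
    components = [
        {"type": "header", "title": "Accounts"},
        {"type": "balance_card", "label": "Checking", "amount": balance},
    ]
    if alerts_enabled:
        # Backend-controlled flag: the client shows this banner without an app release.
        components.append({"type": "banner", "text": "Low-balance alerts are on"})
    return {"screen": "account_summary", "components": components}

if __name__ == "__main__":
    print(json.dumps(account_summary_screen(1250.75, alerts_enabled=True), indent=2))
```

The design choice behind SDUI is that the client only knows how to render a fixed vocabulary of component types, so layout and feature changes become backend configuration rather than app-store releases.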
Posted 3 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
NUVEM Labs is expanding! We’re looking for 2 seasoned professionals to lead a telco cloud deployment project for 5G CNFs on Red Hat OpenShift (Bare Metal). If you’re ready to work on the frontlines of telecom transformation, this opportunity is for you.

What You’ll Be Doing:
- Design and deploy Red Hat OpenShift infrastructure.
- Onboard and validate the OEM's CNFs (VCU-AUPF, VCU-ACPF).
- Prepare and deliver HLD/LLD, as-built documents, test reports, and KT sessions.
- Ensure optimized configuration (NUMA, SR-IOV, DPDK, Multus); a small illustrative sketch follows this posting.
- Lead integration, functional, and HA testing.
- Interface with customers and drive handover.

Skills Required:
- Deep expertise in Kubernetes/OpenShift (must).
- Hands-on CNF deployment experience (Samsung, Nokia, Ericsson, etc.).
- Good understanding of 5G Core functions (UPF, PCF, etc.).
- Familiarity with YAML, Helm, GitOps.
- Excellent communication and documentation skills.

🎓 Preferred Certifications: Red Hat OpenShift, CKA/CKAD, or Telecom CNF certifications

Location: Gurgaon (with project-based travel)
Start Date: Immediate joiners preferred

Interested? Send your profile to [samalik@nuvemlabs.in] or DM me directly. Join NUVEM Labs and shape the future of cloud-native telecom infrastructure.
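As a rough illustration of the kind of performance-tuned pod configuration this work involves (SR-IOV secondary networks via Multus, pinned CPUs, hugepages), here is a Python sketch that emits a pod manifest with PyYAML; this is not part of the original posting. The namespace, image, and network-attachment name are assumptions, and real CNF values would come from the OEM's Helm charts.

```python
# Illustrative sketch only: generate a pod manifest with a Multus/SR-IOV
# network attachment, pinned CPUs, and hugepages. All names are hypothetical.
import yaml

def cnf_pod_manifest(name: str, image: str, sriov_net: str) -> dict:
    resources = {
        # Equal requests/limits with integer CPUs -> Guaranteed QoS, which allows
        # CPU pinning when the kubelet's static CPU manager policy is enabled.
        "requests": {"cpu": "4", "memory": "8Gi", "hugepages-1Gi": "2Gi"},
        "limits": {"cpu": "4", "memory": "8Gi", "hugepages-1Gi": "2Gi"},
    }
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {
            "name": name,
            "namespace": "cnf-test",  # assumed namespace
            "annotations": {"k8s.v1.cni.cncf.io/networks": sriov_net},
        },
        "spec": {
            "containers": [{"name": "upf", "image": image, "resources": resources}],
        },
    }

if __name__ == "__main__":
    manifest = cnf_pod_manifest("upf-test", "registry.example/upf:1.0", "sriov-net-a")
    print(yaml.safe_dump(manifest, sort_keys=False))
```

In a real deployment these settings would be templated in Helm values and validated against the cluster's SR-IOV network attachment definitions rather than generated ad hoc.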
Posted 3 hours ago
4.0 years
0 Lacs
Delhi, India
Remote
About Us

MyRemoteTeam, Inc is a fast-growing distributed workforce enabler, helping companies scale with top global talent. We empower businesses by providing world-class software engineers, operations support, and infrastructure to help them grow faster and better.

Job Title: Golang Developer
Experience: 4+ years
Location: Onsite - Delhi

Important: We are looking for a backend developer who is proficient in Java, Golang, C++, or Python, has at least 2 years of experience in Golang, is interested in continuing to work in Golang, and is proficient in Docker and Kubernetes. Bonus points: working knowledge of SQL databases for a product at scale, and AWS.

Key Responsibilities:
- Minimum 4+ years of work experience in backend development and building scalable products.
- Work on category-creating, possibly disruptive, fintech products in their early stages.
- Design and develop highly scalable and reliable systems end-to-end.
- Work directly with the CTO and founders and be a part of product strategy.
- Self-starter: you will work alongside other engineers and developers, collaborating on the various layers of the infrastructure for our products built in-house.
- Proficiency in Docker and Kubernetes.
- Expert knowledge of computer science, with strong competencies in data structures, algorithms, and software design.
- Familiarity with Agile development, continuous integration, and modern testing methodologies.
- Familiarity with working with REST APIs.
- Think outside the box while solving problems, considering scale and changing environment variables.
- Quickly learn and contribute to a changing technical stack.
- Interest (and/or experience) in the financial/stock market space; interest trumps experience!
Posted 3 hours ago
5.0 years
0 Lacs
Goa, India
On-site
OPTEL. Responsible. Agile. Innovative.

OPTEL is a global company that develops transformative software, middleware and hardware solutions to secure and ensure supply chain compliance in major industry sectors such as pharmaceuticals and food, with the goal of reducing the effects of climate change and enabling sustainable living. If you are driven by the desire to contribute to a better world while working in a dynamic and collaborative environment, then you've come to the right place!

Full Stack Developer (JavaScript + Mobile Dev + .NET)

Summary: We are seeking a passionate and highly skilled Full Stack Developer to drive the design, development, and optimization of modern, cloud-hosted SaaS applications. You will be responsible for full solution delivery, from architecture to deployment, leveraging technologies such as C#/.NET Core, Node.js, React.js, and cloud platforms like Google Cloud Platform (GCP) and AWS. The ideal candidate embraces a DevSecOps mindset, contributes to AI/ML integrations, and thrives on building secure, scalable, and innovative solutions alongside cross-functional teams.

Architecture & System Design:
- Architect and design scalable, secure, and cloud-native applications.
- Establish technical best practices across frontend, backend, mobile, and cloud components.
- Contribute to system modernization efforts, advocating for microservices, serverless patterns, and event-driven design.
- Integrate AI/ML models and services into application architectures.

Application Development:
- Design, develop, and maintain robust applications using C#, ASP.NET Core, Node.js, and React.js.
- Build cross-platform mobile applications with React Native or .NET MAUI.
- Develop and manage secure RESTful and GraphQL APIs.
- Utilize Infrastructure as Code (IaC) practices to automate cloud deployments.

Cloud Development & DevSecOps:
- Build, deploy, and monitor applications on Google Cloud and AWS platforms.
- Implement and optimize CI/CD pipelines (GitHub Actions, GitLab, Azure DevOps).
- Ensure solutions align with security best practices and operational excellence (DevSecOps principles).

AI Development and Integration:
- Collaborate with AI/ML teams to design, integrate, and optimize intelligent features.
- Work with AI APIs and/or custom AI models.
- Optimize AI workloads for scalability, performance, and cloud-native deployment.

Testing, Automation, and Monitoring:
- Create unit, integration, and E2E tests to maintain high code quality.
- Implement proactive measures to reduce technical debt.
- Deploy monitoring and observability solutions.

Agile Collaboration:
- Work in Agile/Scrum teams, participating in daily standups, sprint planning, and retrospectives.
- Collaborate closely with product managers, UX/UI designers, and QA engineers.
- Share knowledge and actively contribute to a strong, collaborative engineering culture.

Skills and Qualifications Required:
- 5+ years of experience in Full Stack Development (C#, .NET Core, Node.js, JavaScript/TypeScript).
- Solid frontend development skills with React.js (Vue.js exposure is a plus).
- Experience with multi-platform mobile app development (React Native or .NET MAUI).
- Expertise with Google Cloud Platform (GCP) and/or AWS cloud services.
- Hands-on experience developing and consuming RESTful and GraphQL APIs.
- Strong DevOps experience (CI/CD, Infrastructure as Code, GitOps practices).
- Practical experience integrating AI/ML APIs or custom models into applications.
- Solid relational and cloud-native database skills (Postgres, BigQuery, DynamoDB).
- Serverless development (Cloud Functions, AWS Lambda).
- Kubernetes orchestration (GKE, EKS) and containerization (Docker).
- Event streaming systems (Kafka, Pub/Sub, RabbitMQ).
- AI/ML workflow deployment (Vertex AI Pipelines, SageMaker Pipelines).
- Edge computing (Cloudflare Workers, Lambda@Edge).
- Experience with ISO/SOC2/GDPR/HIPAA compliance environments.
- Familiarity with App Store and Google Play Store deployment processes.

EQUAL OPPORTUNITY EMPLOYER: OPTEL is an equal opportunity employer. We believe that diversity is essential for fostering innovation and creativity. We welcome and encourage applications from individuals of all backgrounds, cultures, gender identities, sexual orientations, abilities, ages, and beliefs. We are committed to providing a fair and inclusive recruitment process, where each candidate is evaluated solely on their qualifications, skills, and potential. At OPTEL, every employee's unique perspective contributes to our collective success, and we celebrate the richness that diversity brings to our team.

See the offer on JazzHR.
Posted 4 hours ago
3.0 years
0 Lacs
India
Remote
About the Role At Ceryneian, we’re building a next-generation, research-driven algorithmic trading platform aimed at democratizing access to hedge fund-grade financial analytics. Headquartered in California, Ceryneian is a fintech innovation company dedicated to empowering traders with sophisticated yet accessible tools for quantitative research, strategy development, and execution. Our flagship platform is currently under development. As our DevOps Engineer , you will bridge our backend systems (strategy engine, broker APIs) and frontend applications (analytics dashboards, client portals). You will own the design and execution of scalable infrastructure, CI/CD automation, and system observability in a high-frequency, multi-tenant trading environment. This role is central to deploying our containerized strategy engine (Lean-based), while ensuring data integrity, latency optimization, and cost-efficient scalability. We are a remote-first team and are open to hiring exceptional candidates globally. Key Responsibilities Design secure, scalable environments for containerized, multi-tenant API services and user-isolated strategy runners. Implement low-latency cloud infrastructure across development, staging, and production environments. Automate the CI/CD lifecycle, from pipeline design to versioned production deployment (GitHub Actions, GitLab CI, etc.). Manage Dockerized containers and orchestrate deployment with Kubernetes, ECS, or similar systems. Collaborate with backend and frontend teams to define infrastructure and deployment workflows. Optimize and monitor high-throughput data pipelines for strategy engines using tools like ClickHouse. Integrate observability stacks: Prometheus, Grafana, ELK, or Datadog for logs, metrics, and alerts. Support automated rollbacks, canary releases, and resilient deployment practices. Automate infrastructure provisioning using Terraform or Ansible (Infrastructure as Code). Ensure system security, audit readiness (SOC2, GDPR, SEBI), and comprehensive access control logging. Contribute to high-availability architecture and event-driven design for alerting and strategy signals. Technical Competencies Required Cloud: AWS (preferred), GCP, or Azure. Containerization: Proficiency with Docker and orchestration tools (Kubernetes, ECS, etc.). CI/CD: Experience with YAML-based pipelines using GitHub Actions, GitLab CI/CD, or similar tools. Data Systems: Familiarity with PostgreSQL, MongoDB, ClickHouse, or Supabase. Monitoring: Setup and scaling of observability tools like Prometheus, ELK Stack, or Datadog. Distributed Systems: Strong understanding of scalable microservices, caching, and message queues. Event-Driven Architecture: Experience with Kafka, Redis Streams, or AWS SNS/SQS (preferred). Cost Optimization: Ability to build cold-start strategy runners and enable cloud auto-scaling. 0–3 years of experience. Nice-to-Haves Experience with real-time or high-frequency trading systems. Familiarity with broker integrations and exchange APIs (e.g., Zerodha, Dhan). Understanding of IAM, role-based access control systems, and multi-region deployments. Educational background from Tier-I or Tier-II institutions with strong CS fundamentals, passion for scalable infrastructure, and a drive to build cutting-edge fintech systems. What We Offer Opportunity to shape the core DevOps and infrastructure for a next-generation fintech product. Exposure to real-time strategy execution, backtesting systems, and quantitative modeling. 
Competitive compensation with performance-based bonuses. Remote-friendly culture with async-first communication. Collaboration with a world-class team from Pomona, UCLA, Harvey Mudd, and Claremont McKenna.
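For illustration of the observability work described in this posting, here is a minimal sketch assuming the standard prometheus_client Python library; the metric names, port, and order-handling stub are hypothetical and not taken from the listing.

```python
# Minimal sketch: exposing custom metrics from a Python strategy-runner service
# so Prometheus can scrape them. Metric names and the port are illustrative only.
import random
import time

from prometheus_client import Counter, Histogram, start_http_server

ORDERS_PROCESSED = Counter(
    "strategy_orders_processed_total", "Orders processed by the strategy runner"
)
ORDER_LATENCY = Histogram(
    "strategy_order_latency_seconds", "Time spent handling a single order"
)

def handle_order() -> None:
    """Placeholder for real order-handling logic."""
    with ORDER_LATENCY.time():          # records the duration into the histogram
        time.sleep(random.uniform(0.01, 0.05))
    ORDERS_PROCESSED.inc()

if __name__ == "__main__":
    start_http_server(9100)             # serves a /metrics endpoint for Prometheus to scrape
    while True:
        handle_order()
```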
Posted 4 hours ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
Overview: The Senior Python Developer plays a crucial role in our organization by creating robust and scalable software solutions. This position is critical for bridging the gap between software design and functional implementation, ensuring that the company’s technology meets the evolving needs of our users. As a senior team member, you will utilize your expertise in Python to develop high-performance applications, enhance existing systems, and contribute to all phases of the software development lifecycle. Your insights into best coding practices will help mentor junior developers, fostering a culture of continuous improvement and innovation within the team. The ideal candidate will be a proactive problem-solver with strong analytical abilities and excellent communication skills, capable of working both independently and as part of a collaborative team. This role not only offers the opportunity to refine your technical skills but also the chance to contribute significantly to the success of our products and services.
Key Responsibilities
Design, develop, and maintain high-quality software using Python. Implement efficient algorithms and data structures for application performance. Create RESTful APIs and integrate third-party APIs. Collaborate with cross-functional teams to define, design, and ship new features. Debug and resolve software defects and performance issues. Write clean, scalable, and well-documented code. Conduct code reviews and provide constructive feedback. Develop and execute unit tests to ensure code quality and reliability. Mentor junior developers and assist with technical challenges. Monitor and maintain application performance and security. Research and implement new technologies to optimize existing solutions. Create technical documentation and user guides. Adhere to coding standards and best practices throughout the development process. Stay updated with industry trends and emerging technologies.
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field. 3+ years of professional experience in Python development. Experience with web frameworks such as Django, Flask, or FastAPI. Strong understanding of RESTful architecture and API design principles. Proficient in database technologies like PostgreSQL and MySQL. Experience with version control systems, preferably Git. Knowledge of cloud platforms such as AWS. Familiarity with containerization technologies like Docker and Kubernetes. Excellent analytical and troubleshooting skills. Strong communication skills and ability to work in a team-oriented environment. Demonstrated ability to manage multiple projects and deadlines. Ability to write efficient and reusable code. Understanding of software security principles and practices.
Skills: api development, data analysis, database management, version control, problem solving, team collaboration, cloud services, unit testing, python
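As a minimal sketch of the REST API and unit-testing skills this role calls for, using the FastAPI framework named in the posting; the items resource and the test itself are invented for illustration.

```python
# Minimal sketch of a REST endpoint with FastAPI plus a unit test, reflecting the
# API-design and unit-testing expectations above. The "items" resource is made up.
from fastapi import FastAPI, HTTPException
from fastapi.testclient import TestClient
from pydantic import BaseModel

app = FastAPI()
_DB: dict[int, str] = {1: "sample item"}     # stand-in for a real database

class Item(BaseModel):
    id: int
    name: str

@app.get("/items/{item_id}", response_model=Item)
def read_item(item_id: int) -> Item:
    if item_id not in _DB:
        raise HTTPException(status_code=404, detail="Item not found")
    return Item(id=item_id, name=_DB[item_id])

def test_read_item() -> None:
    client = TestClient(app)
    assert client.get("/items/1").status_code == 200
    assert client.get("/items/99").status_code == 404
```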
Posted 4 hours ago
5.0 years
0 Lacs
Indore, Madhya Pradesh, India
On-site
YASH Technologies is a leading technology integrator specializing in helping clients reimagine operating models, enhance competitiveness, optimize costs, foster exceptional stakeholder experiences, and drive business transformation. At YASH, we’re a cluster of the brightest stars working with cutting-edge technologies. Our purpose is anchored in a single truth – bringing real positive changes in an increasingly virtual world, and it drives us beyond generational gaps and disruptions of the future. We are looking forward to hiring .NET C# Professionals in the following areas:
We are looking for a highly skilled and experienced Senior Software Engineer with expertise in .NET (C#) and React.js to join our development team. You will be responsible for designing, developing, and maintaining scalable web applications that deliver exceptional user experiences.
Key Responsibilities
Design and develop scalable, secure, and high-performance web applications using .NET Core and React.js. Participate in the full software development lifecycle – from requirements gathering to production support. Develop RESTful APIs and integrate front-end components with back-end services. Ensure code quality through unit testing, code reviews, and adherence to best practices. Collaborate with cross-functional teams including Product Managers, Designers, and QA. Mentor junior developers and contribute to a culture of continuous improvement. Troubleshoot, debug, and optimize performance across the application stack.
Required Qualifications
Bachelor’s/Master’s degree in Computer Science, Engineering, or a related field. 5+ years of professional software development experience. Strong hands-on experience with C#, .NET Core, and ASP.NET Web API. Proficient in React.js, JavaScript (ES6+), HTML5, and CSS3. Experience with relational databases such as SQL Server or PostgreSQL. Familiarity with front-end build tools like Webpack, Babel, or similar. Experience with version control systems (e.g., Git).
Preferred Qualifications
Knowledge of Microservices architecture. Experience with Docker, Kubernetes, or cloud platforms (Azure/AWS). Familiarity with CI/CD pipelines and DevOps practices. Exposure to Agile/Scrum methodologies. Strong problem-solving skills and attention to detail.
At YASH, you are empowered to create a career that will take you to where you want to go while working in an inclusive team environment. We leverage career-oriented skilling models and optimize our collective intelligence aided with technology for continuous learning, unlearning, and relearning at a rapid pace and scale. Our Hyperlearning workplace is grounded upon four principles: flexible work arrangements, free spirit, and emotional positivity; agile self-determination, trust, transparency, and open collaboration; all support needed for the realization of business goals; and stable employment with a great atmosphere and ethical corporate culture.
Posted 4 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Veeam, the #1 global market leader in data resilience, believes businesses should control all their data whenever and wherever they need it. Veeam provides data resilience through data backup, data recovery, data portability, data security, and data intelligence. Based in Seattle, Veeam protects over 550,000 customers worldwide who trust Veeam to keep their businesses running. We’re looking for a Platform Engineer to join the Veeam Data Cloud. The mission of the Platform Engineering team is to provide a secure, reliable, and easy to use platform to enable our teams to build, test, deploy, and monitor the VDC product. This is an excellent opportunity for someone with cloud infrastructure and software development experience to build the world’s most successful, modern, data protection platform. Your tasks will include: Write and maintain code to automate our public cloud infrastructure, software delivery pipeline, other enablement tools, and internally consumed platform services Document system design, configurations, processes, and decisions to support our async, distributed team culture Collaborate with a team of remote engineers to build the VDC platform Work with a modern technology stack based on containers, serverless infrastructure, public cloud services, and other cutting-edge technologies in the SaaS domain On-call rotation for product operations Technologies we work with: Kubernetes, Azure AKS, AWS EKS, Helm, Docker, Terraform, Golang, Bash, Git, etc. What we expect from you: 3+ years of experience in production operations for a SaaS (Software as a Service) or cloud service provider Experience automating infrastructure through code using technologies such as Pulumi or Terraform Experience with GitHub Actions Experience with a breadth and depth of public cloud services Experience building and supporting enterprise SaaS products Understanding of the principles of operational excellence in a SaaS environment. Possessing scripting skills in languages like Bash or Python Understanding and experience implementing secure design principles in the cloud Demonstrated ability to learn new technologies quickly and implement those technologies in a pragmatic manner A strong bias toward action and direct, frequent communication A university degree in a technical field Will be an advantage: Experience with Azure Experience with high-level programming languages such as Go, Java, C/C++, etc. We offer: Family Medical Insurance Annual flexible spending allowance for health and well-being Life insurance Personal accident insurance Employee Assistance Program A comprehensive leave package, including parental leave Meal Benefit Pass Transportation Allowance Monthly Daycare Allowance Veeam Care Days – additional 24 hours for your volunteering activities Professional training and education, including courses and workshops, internal meetups, and unlimited access to our online learning platforms (Percipio, Athena, O’Reilly) and mentoring through our MentorLab program Please note: If the applicant is permanently located outside India, Veeam reserves the right to decline the application. #Hybrid Veeam Software is an equal opportunity employer and does not tolerate discrimination in any form on the basis of race, color, religion, gender, age, national origin, citizenship, disability, veteran status or any other classification protected by federal, state or local law. All your information will be kept confidential. 
Please note that any personal data collected from you during the recruitment process will be processed in accordance with our Recruiting Privacy Notice. The Privacy Notice sets out the basis on which the personal data collected from you, or that you provide to us, will be processed by us in connection with our recruitment processes. By applying for this position, you consent to the processing of your personal data in accordance with our Recruiting Privacy Notice.
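A small infrastructure-as-code sketch in the spirit of the Pulumi/Terraform requirement above, assuming Pulumi's Python SDK with the AWS provider; the resource name and tags are placeholders, not part of the posting.

```python
# Minimal infrastructure-as-code sketch with Pulumi's Python SDK. Intended to run
# inside a Pulumi project ("pulumi up"); the bucket name and tags are invented.
import pulumi
import pulumi_aws as aws

# Private bucket for build artifacts produced by the delivery pipeline.
artifacts = aws.s3.Bucket(
    "vdc-build-artifacts",
    tags={"team": "platform-engineering", "managed-by": "pulumi"},
)

# Export the generated bucket name so pipelines can reference it.
pulumi.export("artifacts_bucket", artifacts.id)
```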
Posted 4 hours ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Key Requirements
8+ years of core Java development experience (Java 11+). Strong in Spring Boot, Spring Framework, and REST APIs. Proficient with JPA, Hibernate, MS-SQL, and PostgreSQL. Hands-on with microservices, event-driven architecture, and distributed systems. CI/CD experience with tools like Jenkins, GitLab CI, GitHub Actions. Cloud exposure (preferably AWS; Azure/GCP also acceptable). Front-end experience with React or Angular, plus HTML, CSS3/Tailwind. Familiarity with Kubernetes (EKS, AKS, GKE). Experience in DDD, BFF, and integrating with cloud services. Strong problem-solving skills, Git proficiency, and Agile practices.
AI Integration Responsibilities
Use tools like GitHub Copilot, OpenAI Codex, Gemini for code generation, testing, and documentation. Evaluate and adopt emerging AI technologies. Collaborate with AI to optimize and refactor code.
Nice To Have
Hospitality domain experience. Willingness to adopt and learn AI-assisted coding tools. Understanding of integrating AI tools in the SDLC.
Skills: ai integration, aws, ms-sql, css3, cloud, gcp, rest apis, api, react, bff, spring boot, java 11+, postgresql, agile practices, core java, jpa, gitlab ci, event-driven architecture, aks, jenkins, ddd, hibernate, ci/cd, github actions, eks, html, kubernetes, gke, azure, springboot, java, microservices, spring framework, distributed systems, angular, git proficiency, tailwind
Posted 4 hours ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP DevOps Engineer to join our team in Ban/Hyd/Chn/Gur/Noida, Karnātaka (IN-KA), India (IN).
Responsibilities
Design, implement, and manage GCP infrastructure using Infrastructure as Code (IaC) tools. Develop and maintain CI/CD pipelines to improve development workflows. Monitor system performance and ensure high availability of cloud resources. Collaborate with development teams to streamline application deployments. Maintain security best practices and compliance across the cloud environment. Automate repetitive tasks to enhance operational efficiency. Troubleshoot and resolve infrastructure-related issues in a timely manner. Document procedures, policies, and configurations for the infrastructure.
Skills
Google Cloud Platform (GCP), Terraform, Ansible, CI/CD, Kubernetes, Docker, Python, Bash/Shell Scripting, Monitoring tools (e.g., Prometheus, Grafana), Cloud Security, Jenkins, Git
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
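As a hedged illustration of the "automate repetitive tasks" responsibility above, a sketch assuming the google-cloud-storage Python client; the bucket name, prefix, and retention window are invented.

```python
# Illustrative automation sketch only: pruning stale build artifacts from a GCS
# bucket with the google-cloud-storage client. Uses Application Default Credentials.
from datetime import datetime, timedelta, timezone

from google.cloud import storage

RETENTION = timedelta(days=30)        # assumed retention window

def prune_old_artifacts(bucket_name: str, prefix: str) -> int:
    client = storage.Client()
    cutoff = datetime.now(timezone.utc) - RETENTION
    deleted = 0
    for blob in client.list_blobs(bucket_name, prefix=prefix):
        if blob.time_created < cutoff:            # time_created is timezone-aware (UTC)
            blob.delete()
            deleted += 1
    return deleted

if __name__ == "__main__":
    print(prune_old_artifacts("ci-artifacts-bucket", "nightly/"))
```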
Posted 4 hours ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Req ID: 327296
NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a GCP Solution Architect to join our team in Noida, Uttar Pradesh (IN-UP), India (IN).
Job Description:
Primary Skill: Cloud-Infrastructure-Google Cloud Platform. Minimum work experience: 8+ yrs. Total Experience: 8+ Years. Must have GCP Solution Architect Certification & GKE.
Mandatory Skills:
Technical Qualification/Knowledge: Expertise in assessment, designing and implementing GCP solutions including aspects like compute, network, storage, identity, security, DR/business continuity strategy, migration, templates, cost optimization, PowerShell, Terraform, Ansible, etc. Must have GCP Solution Architect Certification. Should have prior experience in executing large, complex cloud transformation programs including discovery, assessment, business case creation, design, build, migration planning and migration execution. Should have prior experience in using industry-leading or native discovery, assessment and migration tools. Good knowledge of cloud technology, different patterns, deployment methods, and compatibility of the applications. Good knowledge of the GCP technologies and associated components and variations:
Anthos Application Platform
Compute Engine, Compute Engine Managed Instance Groups, Kubernetes
Cloud Storage, Cloud Storage for Firebase, Persistent Disk, Local SSD, Filestore, Transfer Service
Virtual Private Network (VPC), Cloud DNS, Cloud Interconnect, Cloud VPN Gateway, Network Load Balancing, Global load balancing, Firewall rules, Cloud Armor
Cloud IAM, Resource Manager, Multi-factor Authentication, Cloud KMS
Cloud Billing, Cloud Console, Stackdriver
Cloud SQL, Cloud Spanner SQL, Cloud Bigtable
Cloud Run container services, Kubernetes Engine (GKE), Anthos Service Mesh, Cloud Functions, PowerShell on GCP
Solid understanding and experience in cloud computing based services architecture, technical design and implementations including IaaS, PaaS, and SaaS. Design of clients' Cloud environments with a focus mainly on GCP and demonstrate Technical Cloud Architectural knowledge. Play a vital role in the design of production, staging, QA and development Cloud Infrastructures running in 24x7 environments. Delivery of customer Cloud Strategies, aligned with customers' business objectives and with a focus on Cloud Migrations and DR strategies. Nurture Cloud computing expertise internally and externally to drive Cloud Adoption. Should have a deep understanding of IaaS and PaaS services offered on cloud platforms and understand how to use them together to build complex solutions. Ensure that all cloud solutions follow security and compliance controls, including data sovereignty. Deliver cloud platform architecture documents detailing the vision for how GCP infrastructure and platform services support the overall application architecture, interaction with application, database and testing teams for providing a holistic view to the customer.
Collaborate with application architects and DevOps to modernize Infrastructure as a Service (IaaS) applications to Platform as a Service (PaaS). Create solutions that support a DevOps approach for delivery and operations of services. Interact with and advise business representatives of the application regarding functional and non-functional requirements. Create proof-of-concepts to demonstrate viability of solutions under consideration. Develop enterprise-level conceptual solutions and sponsor consensus/approval for global applications. Have a working knowledge of other architecture disciplines including application, database, infrastructure, and enterprise architecture. Identify and implement best practices, tools and standards. Provide consultative support to the DevOps team for production incidents. Drive and support system reliability, availability, scale, and performance activities. Evangelize cloud automation and be a thought leader and expert defining standards for building and maintaining cloud platforms. Knowledgeable about configuration management such as Chef/Puppet/Ansible. Automation skills using CLI scripting in any language (bash, perl, python, ruby, etc.). Ability to develop a robust design to meet customer business requirements with scalability, availability, performance and cost effectiveness using GCP offerings. Ability to identify and gather requirements to define an architectural solution which can be successfully built and operated on GCP. Ability to conclude high-level and low-level design for the GCP platform, which may also include data center design as necessary. Capability to provide GCP operations and deployment guidance and best practices throughout the lifecycle of a project. Understanding of the significance of the different metrics for monitoring, their threshold values, and the ability to take necessary corrective measures based on the thresholds. Knowledge of automation to reduce the number of incidents or the repetitive incidents is preferred. Good knowledge of cloud center operations, monitoring tools, and backup solutions.
GKE
Set up monitoring and logging to troubleshoot a cluster, or debug a containerized application. Manage Kubernetes objects: declarative and imperative paradigms for interacting with the Kubernetes API. Managing Secrets: managing confidential settings data using Secrets. Configure load balancing, port forwarding, or set up firewall or DNS configurations to access applications in a cluster. Configure networking for your cluster. Hands-on experience with Terraform and the ability to write reusable Terraform modules. Hands-on Python and Unix shell scripting is required. Understanding of CI/CD pipelines in a globally distributed environment using Git, Artifactory, Jenkins, Docker registry. Experience with GCP services and writing cloud functions. Hands-on experience deploying and managing Kubernetes infrastructure with Terraform Enterprise. Certified Kubernetes Administrator (CKA) and/or Certified Kubernetes Application Developer (CKAD) is a plus. Experience using Docker within container orchestration platforms such as GKE. Knowledge of setting up Splunk. Knowledge of Spark in GKE.
Certification: GCP Solution Architect & GKE
Process/Quality Knowledge: Must have clear knowledge of ITIL-based service delivery; ITIL certification is desired. Knowledge of quality and security processes.
About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services.
We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com
NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us. This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
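To illustrate the Secrets-management and Python scripting items above, a minimal sketch using the official Kubernetes Python client; the secret name, namespace, and key are placeholders only.

```python
# Sketch of the "Managing Secrets" task using the Kubernetes Python client.
# Assumes a local kubeconfig; values below are illustrative placeholders.
from kubernetes import client, config

def create_app_secret() -> None:
    config.load_kube_config()                     # use load_incluster_config() inside a pod
    core_v1 = client.CoreV1Api()
    secret = client.V1Secret(
        metadata=client.V1ObjectMeta(name="app-credentials"),
        string_data={"API_KEY": "replace-me"},    # stored base64-encoded by the API server
        type="Opaque",
    )
    core_v1.create_namespaced_secret(namespace="default", body=secret)

if __name__ == "__main__":
    create_app_secret()
```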
Posted 4 hours ago
8.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Introduction: EkVayu Tech is a fast-growing, research-focused technology company specializing in developing IT and AI applications. Our projects span modern front-end development, robust backend systems, cloud-native and on-prem infrastructure, AI/ML enablement, and automated testing pipelines. We are looking for a visionary technical leader to guide our engineering team and architecture strategy as we scale. We have products in the areas of Cybersecurity, AI/ML/DL, Signal Processing, System Engineering and Health-Tech.
Job Title: Tech Architect / VP of Engineering / Tech Lead
Experience Level: Senior / Leadership
Location: Noida Sector 62, UP, India
Role Overview
As a Tech Architect / Engineering VP / Tech Lead, you will be responsible for driving the overall engineering strategy, leading architecture and design decisions, managing development teams, and ensuring scalable, high-performance delivery of products. You’ll work closely with founders, product teams, and clients to define and deliver cutting-edge solutions that leverage AI and full-stack technologies.
Key Responsibilities
Architectural Leadership: Design and evolve scalable, secure, and performant architecture across front-end, backend, and AI services. Guide tech stack choices, frameworks, and tools aligned with business goals. Lead cloud/on-prem infrastructure decisions, including CI/CD, containerization, and DevOps automation.
Engineering Management: Build and mentor a high-performing engineering team. Define engineering best practices, coding standards, and technical workflows. Own technical delivery timelines and code quality benchmarks.
Hands-on Development & Technical Oversight: Contribute to critical system components and set examples in code quality and documentation. Oversee implementation of RESTful APIs, microservices, AI modules, and integration plugins. Champion test-driven development and automated QA processes.
AI Enablement: Guide development of AI-enabled features, data pipelines, and model integration (working with MLOps/data teams). Drive adoption of tools that enhance AI-assisted development and intelligent systems.
Infrastructure & Deployment: Architect hybrid environments across cloud and on-prem setups. Optimize deployment pipelines using tools like Docker, Kubernetes, GitHub Actions, or similar. Implement observability solutions for performance monitoring and issue resolution.
Required Skills & Experience
8+ years of experience in software engineering, with 3+ years in a leadership/architect role. Strong proficiency in: Frontend: React.js, Next.js; Backend: Python, Django, FastAPI; AI/ML Integration: working knowledge of ML model serving, APIs, or pipelines. Experience building and scaling systems in hybrid (cloud/on-prem) environments. Hands-on with CI/CD, testing automation, and modern DevOps workflows. Experience with plugin-based architectures and extensible systems. Deep understanding of security, scalability, and performance optimization. Ability to translate business needs into tech solutions and communicate across stakeholders.
Preferred (Nice to Have)
Experience with OpenAI API, LangChain, or custom AI tooling environments. Familiarity with infrastructure-as-code (Terraform, Ansible). Background in SaaS product development or AI-enabled platforms. Knowledge of container orchestration (Kubernetes) and microservice deployments.
What We Offer
Competitive compensation. Opportunity to shape core technology in a fast-growing company. Exposure to cutting-edge AI applications and infrastructure challenges. Collaborative and open-minded team culture.
How to Apply
Send your resume, portfolio (if applicable), and a brief note on why you’re excited to join us to HR@EkVayu.com
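As a rough sketch of the OpenAI API integration mentioned under the preferred skills, assuming the openai Python SDK (v1+); the model name and prompt are assumptions, not requirements from the posting.

```python
# Hedged sketch of a small AI-enabled feature using the OpenAI Python SDK (v1+).
# Expects OPENAI_API_KEY in the environment; the model name is an assumption.
from openai import OpenAI

def summarize(text: str) -> str:
    client = OpenAI()                               # reads OPENAI_API_KEY from the environment
    response = client.chat.completions.create(
        model="gpt-4o-mini",                        # assumed model; swap for whatever is approved
        messages=[
            {"role": "system", "content": "Summarize the input in two sentences."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(summarize("EkVayu builds products in cybersecurity, AI/ML, and health-tech."))
```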
Posted 4 hours ago
0.0 - 1.0 years
0 Lacs
Indore, Madhya Pradesh
On-site
Responsibilities: Develop and maintain infrastructure as code (IaC) to support scalable and secure infrastructure. Collaborate with the development team to streamline and optimize the continuous integration and deployment pipeline. Manage and administer Linux systems, ensuring reliability and security. Configure and provision cloud resources on AWS, Google Cloud, or Azure as required. Implement and maintain containerized environments using Docker and orchestration with Kubernetes. Monitor system performance and troubleshoot issues to ensure optimal application uptime. Stay updated with industry best practices, tools, and DevOps methodologies. Enhance software development processes through automation and continuous improvement initiatives. Requirements: Degree(s): B.Tech/BE (CS, IT, EC, EI) or MCA. Eligibility: Open to 2021, 2022, and 2023 graduates and postgraduates only. Expertise in Infrastructure as Code (IaC) with tools like Terraform and CloudFormation. Proficiency in software development using languages such as Python, Bash, and Go. Experience in Continuous Integration with tools such as Jenkins, Travis CI, and CircleCI. Strong Linux system administration skills. Experience in provisioning, configuring, and managing cloud resources (AWS, Google Cloud Platform, or Azure). Excellent verbal and written communication skills. Experience with containerization and orchestration tools such as Docker and Kubernetes. Job Type: Full-time Pay: ₹45,509.47 - ₹85,958.92 per month Benefits: Health insurance Schedule: Day shift Ability to commute/relocate: Indore, Madhya Pradesh: Reliably commute or planning to relocate before starting work (Preferred) Education: Bachelor's (Preferred) Experience: Python: 1 year (Preferred) AI/ML: 1 year (Preferred) Location: Indore, Madhya Pradesh (Preferred) Work Location: In person
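For illustration of the containerization requirement above, a minimal sketch using the Docker SDK for Python (docker-py); the image and command are arbitrary examples.

```python
# Small containerization sketch using the Docker SDK for Python (docker-py).
# Runs a throwaway container as a smoke test; image and command are arbitrary.
import docker

def run_smoke_test() -> str:
    client = docker.from_env()                     # talks to the local Docker daemon
    output = client.containers.run(
        "alpine:3.19",
        ["echo", "container smoke test passed"],
        remove=True,                               # clean up the container afterwards
    )
    return output.decode().strip()

if __name__ == "__main__":
    print(run_smoke_test())
```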
Posted 4 hours ago
15.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Title: VP-Digital Expert Support Lead
Experience: 15+ Years
Location: Pune
Position Overview
The Digital Expert Support Lead is a senior-level leadership role responsible for ensuring the resilience, scalability, and enterprise-grade supportability of AI-powered expert systems deployed across key domains like Wholesale Banking, Customer Onboarding, Payments, and Cash Management. This role requires technical depth, process rigor, stakeholder fluency, and the ability to lead cross-functional squads that ensure seamless operational performance of GenAI and digital expert agents in production environments. The candidate will work closely with Engineering, Product, AI/ML, SRE, DevOps, and Compliance teams to drive operational excellence and shape the next generation of support standards for AI-driven enterprise systems.
Role-Level Expectations
Functionally accountable for all post-deployment support and performance assurance of digital expert systems. Operates at L3+ support level, enabling L1/L2 teams through proactive observability, automation, and runbook design. Leads stability engineering squads, AI support specialists, and DevOps collaborators across multiple business units. Acts as the bridge between operations and engineering, ensuring technical fixes feed into the product backlog effectively. Supports continuous improvement through incident intelligence, root cause reporting, and architecture hardening. Sets the support governance framework (SLAs/OLAs, monitoring KPIs, downtime classification, recovery playbooks).
Position Responsibilities
Operational Leadership & Stability Engineering
Own the production health and lifecycle support of all digital expert systems across onboarding, payments, and cash management. Build and govern the AI Support Control Center to track usage patterns, failure alerts, and escalation workflows. Define and enforce SLAs/OLAs for LLMs, GenAI endpoints, NLP components, and associated microservices. Establish and maintain observability stacks (Grafana, ELK, Prometheus, Datadog) integrated with model behavior. Lead major incident response and drive cross-functional war rooms for critical recovery. Ensure AI pipeline resilience through fallback logic, circuit breakers, and context caching. Review and fine-tune inference flows, timeout parameters, latency thresholds, and token usage limits.
Engineering Collaboration & Enhancements
Drive code-level hotfixes or patches in coordination with Dev, QA, and Cloud Ops. Implement automation scripts for diagnosis, log capture, reprocessing, and health validation. Maintain well-structured GitOps pipelines for support-related patches, rollback plans, and enhancement sprints. Coordinate enhancement requests based on operational analytics and feedback loops. Champion enterprise integration and alignment with Core Banking, ERP, H2H, and transaction processing systems.
Governance, Planning & People Leadership
Build and mentor a high-caliber AI Support Squad – support engineers, SREs, and automation leads. Define and publish support KPIs, operational dashboards, and quarterly stability scorecards. Present production health reports to business, engineering, and executive leadership. Define runbooks, response playbooks, knowledge base entries, and onboarding plans for newer AI support use cases. Manage relationships with AI platform vendors, cloud ops partners, and application owners.
Must-Have Skills & Experience
15+ years of software engineering, platform reliability, or AI systems management experience.
Proven track record of leading support and platform operations for AI/ML/GenAI-powered systems. Strong experience with cloud-native platforms (Azure/AWS), Kubernetes, and containerized observability. Deep expertise in Python and/or Java for production debugging and script/tooling development. Proficient in monitoring, logging, tracing, and alerts using enterprise tools (Grafana, ELK, Datadog). Familiarity with token economics, prompt tuning, inference throttling, and GenAI usage policies. Experience working with distributed systems, banking APIs, and integration with Core/ERP systems. Strong understanding of incident management frameworks (ITIL) and ability to drive postmortem discipline. Excellent stakeholder management, cross-functional coordination, and communication skills. Demonstrated ability to mentor senior ICs and influence product and platform priorities.
Nice-to-Haves
Exposure to enterprise AI platforms like OpenAI, Azure OpenAI, Anthropic, or Cohere. Experience supporting multi-tenant AI applications with business-driven SLAs. Hands-on experience integrating with compliance and risk monitoring platforms. Familiarity with automated root cause inference or anomaly detection tooling. Past participation in enterprise architecture councils or platform reliability forums.
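A pure-Python sketch of the fallback and circuit-breaker pattern this role governs for GenAI endpoints; the thresholds, the stubbed model call, and the canned fallback response are illustrative assumptions only.

```python
# Pure-Python sketch of a fallback / circuit-breaker wrapper around a flaky
# inference call. Thresholds and the stubbed call are illustrative assumptions.
import time

class CircuitBreaker:
    def __init__(self, max_failures: int = 3, reset_after: float = 30.0) -> None:
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at: float | None = None

    def call(self, fn, *args, fallback=None):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                return fallback                     # circuit open: skip the flaky dependency
            self.opened_at = None                   # half-open: allow one retry
            self.failures = 0
        try:
            result = fn(*args)
            self.failures = 0
            return result
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()   # trip the breaker
            return fallback

def flaky_llm_call(prompt: str) -> str:
    raise TimeoutError("inference endpoint timed out")   # stand-in for a real client call

breaker = CircuitBreaker()
print(breaker.call(flaky_llm_call, "classify this payment", fallback="queued for human review"))
```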
Posted 5 hours ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Data Processing & Embeddings: Work with text, image, audio, and video data to create embeddings and preprocess inputs for AI models.
Prompt Engineering & Optimization: Experiment with prompt engineering techniques to improve AI responses and outputs.
API Integration: Utilize OpenAI, Azure OpenAI, Hugging Face, and other APIs to integrate AI models into applications.
Model Development & Fine-Tuning: Assist in training, fine-tuning, and deploying Generative AI models (GPT, Llama, Stable Diffusion, Claude, etc.).
AI Workflows & Pipelines: Contribute to building and optimizing AI workflows using Python, TensorFlow, PyTorch, LangChain, or other frameworks.
Cloud & Deployment: Deploy AI solutions on cloud platforms like Azure, AWS, or Google Cloud, leveraging serverless architectures and containerization (Docker, Kubernetes).
AI Application Development: Collaborate with developers to build AI-powered applications using frameworks like Streamlit, Flask, or FastAPI.
Experimentation & Research: Stay updated on the latest advancements in Generative AI and explore new use cases.
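As a hedged example of the embeddings step described above, assuming the Hugging Face sentence-transformers library; the model name and sample sentences are invented for illustration.

```python
# Hedged sketch of creating text embeddings with sentence-transformers and
# comparing them by cosine similarity. Model name and sentences are examples only.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")    # small general-purpose embedding model

sentences = [
    "Reset my password for the trading dashboard",
    "I cannot log in to the analytics portal",
    "What is the fee for international transfers?",
]
embeddings = model.encode(sentences, convert_to_tensor=True)

# Cosine similarity between the first query and the other two.
scores = util.cos_sim(embeddings[0], embeddings[1:])
print(scores)                                      # the login-related pair should score higher
```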
Posted 5 hours ago
5.0 years
0 Lacs
Bhubaneswar, Odisha, India
On-site
Java Spring Boot Developer
Company: Gravity Engineering Services Pvt. Ltd. (GES)
Location: Bhubaneswar (Odisha), Raipur (Chhattisgarh), Patna (Bihar)
Position: Full Time
Experience: 5+ years
Email: kauser.fathima@gravityer.com
Ph: 9916141516
About Gravity: Gravity Engineering Services is a Digital Transformation and Product Engineering company based in the USA, Europe and India, delivering cutting-edge IT solutions. Our diverse portfolio includes Generative AI, Commerce Technologies, Cloud management, Business Analytics and Marketing technologies. We are on a mission for building experiences and influencing change through delivering digital consulting services that drive innovation, efficiency, and growth for businesses globally, with a vision to be the world's most valued technology company, driving innovation, and making a positive impact on the world. Our goal is to achieve unicorn status (valuation of $1 billion) by 2030.
Job Description:
Lead Ecommerce Solution Design and Development: Spearhead the design and development of scalable, secure, and high-performance solutions for our ecommerce platform using Java Spring Boot. Collaborate with Cross-Functional Teams: Work closely with product managers, UI/UX designers, and quality assurance teams to gather requirements and deliver exceptional ecommerce solutions. Architectural Oversight: Provide architectural oversight for ecommerce projects, ensuring they are scalable, maintainable, and aligned with industry best practices. Technical Leadership: Lead and mentor a team of Java developers, fostering a collaborative and innovative environment. Provide technical guidance and support to team members. Code Review and Quality Assurance: Conduct regular code reviews to maintain code quality, ensuring adherence to Java Spring Boot coding standards and best practices. Implement and promote quality assurance processes within the development lifecycle. Integration of Ecommerce Solutions: Oversee the seamless integration of ecommerce solutions with other business systems, ensuring a cohesive and efficient data flow. Payment Gateway Integration: Collaborate on the integration of payment gateways and other essential ecommerce functionalities. Stay Informed on Ecommerce Technologies: Stay abreast of the latest developments in ecommerce technologies, incorporating new features and improvements based on emerging trends. Engage with clients to understand their ecommerce requirements, provide technical insights, and ensure the successful implementation of solutions.
Desired Skills
Bachelor’s degree in Computer Science, Information Technology, or a related field. 2+ years of software development experience, with strong focus on Java Spring Boot and e-commerce platforms. Hands-on experience with Java 8 or above, with deep understanding of core Java concepts and modern Java features. Proficient in the Spring ecosystem including Spring Boot, Spring MVC, Spring Data JPA, Spring Security, Spring AOP, and Spring Cloud (Config, Discovery, etc.). Strong understanding of microservices architecture, including REST API design and inter-service communication using REST, Kafka, or RabbitMQ. Practical experience in containerization using Docker and orchestration with Kubernetes (mandatory). Experience integrating payment gateways, order management, and inventory systems within e-commerce platforms.
Hands-on with relational databases (MySQL, PostgreSQL) and NoSQL databases (MongoDB, Redis) – at least one from each category. Familiarity with in-memory caching solutions like Redis or Hazelcast. Good understanding of database performance tuning techniques including indexing and query optimization. Solid grasp of data structures and algorithms, with the ability to apply them in solving real-world problems. Excellent problem-solving and debugging skills, with the ability to work on complex technical challenges.
Skills: mongodb, algorithms, mysql, core java, spring boot, java, kafka, kubernetes, spring, spring data jpa, redis, e-commerce platforms, rabbitmq, data structures, docker, spring security, spring mvc, postgresql, rest api design, microservices architecture, spring cloud, java 8 or above, boot, ecommerce, spring aop
Posted 5 hours ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
QA Engineer – Integration Platforms
Experience: 7+ years
About the Role
At Technoidentity, we are seeking a detail-oriented and experienced QA Engineer who is strong in both manual testing and automation, with a keen understanding of system integrations. This role demands a quality-first mindset and practical exposure to validating end-to-end integration scenarios across multiple platforms and services. You will collaborate with cross-functional teams including product, engineering, and DevOps, and play a key role in ensuring seamless system interoperability through robust testing processes.
What You’ll Do
Own and execute manual and automated testing for new features, regression cycles, and integration workflows. Design and maintain test cases, test plans, and perform exploratory testing for complex, cross-service integrations. Validate API-level and user-facing functionality, ensuring complete coverage for end-to-end business scenarios. Document and track defects with clear steps and business impact. Work with automation engineers to include relevant manual test cases in automation suites. Build and maintain automation scripts using frameworks like Cypress, Playwright, or similar. Ensure seamless integration of test cases into CI/CD pipelines. Participate in design discussions and proactively advocate for quality at all stages of the SDLC.
Required Skills & Experience
7+ years of experience in Software QA with a strong emphasis on both manual and automated testing. Proven expertise in testing integrated systems, especially across platforms like Mulesoft, Tray.io, Oracle Integration Cloud (OIC), Dell Boomi, Salesforce, etc. Hands-on automation experience with integration-focused test scripts and tools. Solid understanding of API testing, cross-service workflows, and validation techniques. Familiarity with at least one scripting/programming language: JavaScript, Python, Java, etc. Experience with test management tools such as TestRail, Zephyr, or XRAY. Practical experience with CI/CD tools (e.g., Jenkins, GitHub Actions, Git) and version control systems. Excellent problem-solving skills and attention to detail, especially in edge-case discovery and impact analysis. Strong communication and interpersonal skills to work across agile, fast-paced teams.
Nice to Have
Proficiency in tools like Postman, REST Assured for API testing. Working knowledge of SQL for backend validations. Exposure to agile methodologies and tools like JIRA, Confluence, etc. Experience working in containerized environments (Docker, Kubernetes).
Education
Bachelor's degree in Computer Science, Engineering, or a related discipline, or equivalent practical experience.
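A minimal API-level test sketch in the spirit of the integration-testing duties above, using the requests library with pytest; the base URL and endpoints are hypothetical.

```python
# Minimal API-test sketch: post a record to one service and verify it can be read
# back, as an end-to-end integration check. Base URL and endpoints are placeholders.
import requests

BASE_URL = "https://api.example.internal"          # hypothetical service under test

def test_order_sync_roundtrip() -> None:
    payload = {"order_id": "A-1001", "status": "CONFIRMED"}
    create = requests.post(f"{BASE_URL}/orders/sync", json=payload, timeout=10)
    assert create.status_code == 201

    fetched = requests.get(f"{BASE_URL}/orders/A-1001", timeout=10)
    assert fetched.status_code == 200
    assert fetched.json()["status"] == "CONFIRMED"
```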
Posted 5 hours ago