4.0 - 8.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
The primary responsibility of the Data Science & Analysis role in India is to design, train, and fine-tune advanced foundational models (text, audio, vision) using healthcare and other relevant datasets, with a key focus on accuracy and context relevance. Collaboration with cross-functional teams (business, engineering, IT) is essential to seamlessly integrate AI/ML technologies into solution offerings. Deployment, monitoring, and management of AI models in a production environment are crucial to ensure high availability, scalability, and performance. Continuous research and evaluation of the latest advancements in AI/ML and industry trends are required to drive innovation. Comprehensive documentation for AI models, covering development, training, fine-tuning, and deployment procedures, needs to be developed and maintained. Providing technical guidance and mentorship to junior AI engineers and team members is also a part of the role. Collaboration with stakeholders to understand business needs and translate them into technical requirements for model fine-tuning and development is critical. Selecting and curating appropriate datasets for fine-tuning foundational models to address specific use cases is an essential aspect of the job. Ensuring that AI solutions can seamlessly integrate with existing systems and applications is also part of the responsibilities. For this role, a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field is required. The ideal candidate should have 4 to 6 years of hands-on experience in AI/ML, with a proven track record of training and deploying LLMs and other machine learning models. Strong proficiency in Python and familiarity with popular AI/ML frameworks such as TensorFlow, PyTorch, Hugging Face Transformers, etc., is necessary. Practical experience in deploying and managing AI models in production environments, including expertise in serving and inference frameworks like Triton, TensorRT, vLLM, TGI, etc., is expected. Experience in Voice AI applications, a solid understanding of healthcare data standards (FHIR, HL7, EDI), and regulatory compliance (HIPAA, SOC 2) is preferred. Excellent problem-solving and analytical abilities are required to tackle complex challenges and evaluate multiple factors. Exceptional communication and collaboration skills are necessary for effective teamwork in a dynamic environment. The ideal candidate should have worked on a minimum of 2 AI/LLM projects from start to finish, demonstrating proven business value. Experience with cloud computing platforms (AWS, Azure) and containerization technologies (Docker, Kubernetes) is a plus. Familiarity with MLOps practices for continuous integration, continuous deployment (CI/CD), and automated monitoring of AI models would also be advantageous. Guidehouse offers a comprehensive total rewards package, including competitive compensation and a flexible benefits package that reflects the commitment to creating a diverse and supportive workplace. Guidehouse is an Equal Opportunity Employer that considers qualified applicants with criminal histories in accordance with applicable laws and regulations, including the Fair Chance Ordinance of Los Angeles and San Francisco. If accommodation is required to apply for a position or for information about employment opportunities, applicants can contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com.
All information provided will be kept confidential and used only as needed to provide necessary reasonable accommodation. All recruitment communication from Guidehouse will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Any correspondence received from other domains should be considered unauthorized and will not be honored by Guidehouse. Guidehouse does not charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in recruitment events. Banking information should never be provided to a third party claiming to need it for the hiring process. If any demand for money related to a job opportunity with Guidehouse arises, it should be reported to Guidehouse's Ethics Hotline. For verification of received correspondence, applicants can contact recruiting@guidehouse.com. Guidehouse is not liable for any losses incurred from an applicant's dealings with unauthorized third parties.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Hyderabad, Telangana
On-site
Ignite your passion for product innovation by leading customer-centric development, inspiring solutions, and shaping the future with your strategic vision and influence. As a Regional Agent Specialist reporting under the Global Agent Specialist within the firmwide CDAO Organization, you will collaborate with regional product managers and client teams to support Agentic development. Your role involves providing expert guidance and empowering clients to implement Agents and create business value. With strong communication and stakeholder management skills, you will cultivate collaborative relationships with Product, Engineering, and Architecture teams across JPM Lines of Business, Corporate Functions, and Internal Fusion Product and Engineering Teams to drive priority outcomes. Align your priorities with those of the Global Agent Specialist, establishing effective communication and workflows to regularly update them on progress. Provide technical expertise to empower regional clients to design and implement agent-based solutions. Engage with Global Agent Specialist to understand the client's needs, provide expert guidance to the clients located in the region and ensure successful deployment and integration of agent solutions. Collaborate with regional product and engineering teams to enhance agent capabilities and ensure seamless integration with existing systems and processes. Support Global Agent Specialist to develop best practices for Agentic Systems architecture and integration, including performance monitoring, optimization, and evaluation. Collaborate with AI researchers, ML engineers, and software developers to push the boundaries of Agentic AI. Develop prototypes, blueprints, and demos for rolling out education and training across the region. Provide thought leadership and contribute to technical white papers on Agentic implementations. Required Qualifications, capabilities, and skills: - Experience in multi-agent architectures using frameworks like LangChain, AutoGPT, CrewAI. - Hands-on experience building or using LLM-powered Agentic solutions. - Experience in developing, training, and deploying machine learning models, Gen AI models, including knowledge of model evaluation and optimization techniques. - Experience with cloud computing platforms (e.g., AWS, Azure, or Google Cloud Platform), containerization technologies (e.g., Docker and Kubernetes), and microservices design, implementation, and performance optimization. - Experience in Financial Services or other highly regulated industries. - Strong foundation in mathematics and statistics, including knowledge of linear algebra, calculus, probability, and statistical methods. Preferred Qualifications, capabilities, and skills: - Experience in the financial or banking domain.,
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Haryana
On-site
As a Delivery Manager at LogicLadder, you will have a significant impact on driving the technical vision and innovation behind our sustainability software solutions. Reporting directly to the Head of Engineering, your primary responsibility will be to lead a team of skilled software engineers, guiding them through the design, development, and deployment phases of advanced systems that empower our clients in achieving their net-zero objectives. Your key responsibilities will include mentoring and leading the software engineering team to foster a culture of continuous improvement and technical excellence. You will be instrumental in defining the architecture and design of intricate, scalable systems that form the backbone of LogicLadder's sustainability software offerings. Collaborating closely with cross-functional teams, you will help shape technical roadmaps that align with the organization's strategic goals. In addition to your leadership role, you will actively engage in coding, debugging, and troubleshooting to maintain a profound understanding of the software development lifecycle. Conducting regular code reviews to ensure adherence to best practices and high-quality standards will be part of your routine. Furthermore, you will proactively identify opportunities for process enhancements, automation, and optimization, contributing to the company's overall thought leadership by participating in technical blogs, conferences, and community engagement. To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 6 years of experience as a software engineer, including 5 years in a leadership capacity. Having a proven track record of delivering complex software projects from inception to production, familiarity with sustainability software, energy management, or environmental monitoring solutions, and exposure to data engineering and data visualization technologies will be advantageous. Additionally, proficiency in multiple programming languages, particularly functional programming like Scala, Haskell, or Clojure, extensive experience in designing and developing scalable, distributed systems, and a deep understanding of software architecture patterns and design principles are essential requirements. Knowledge of cloud computing platforms such as AWS, GCP, or Azure, familiarity with agile software development methodologies, and strong problem-solving and analytical skills are also key qualifications for this role. While not mandatory, experience with real-time data processing and streaming technologies, knowledge of machine learning and predictive analytics techniques, familiarity with IoT and sensor integration, involvement in open-source projects, or contributions to the developer community will be considered advantageous. At LogicLadder, we offer a competitive benefits package that includes Medical Insurance covering employees and their families, personal accidental insurance, a great company culture, exposure to a rapidly growing domain, and gratuity benefits.,
Posted 2 days ago
5.0 - 10.0 years
0 Lacs
Thiruvananthapuram, Kerala
On-site
The Data Science & Analysis team in India is looking for a skilled professional to join our dynamic team. As a member of the team, you will be responsible for designing, training, and fine-tuning advanced foundational models for text, audio, and vision using healthcare and other relevant datasets. Your focus will be on accuracy and context relevance to ensure the efficiency of our solutions. Collaboration with cross-functional teams including business, engineering, and IT is key in seamlessly integrating AI/ML technologies into our solution offerings. You will also play a crucial role in deploying, monitoring, and managing AI models in a production environment, ensuring high availability, scalability, and performance. Staying updated with the latest advancements in AI/ML and industry trends is essential to drive innovation within the team. Adherence to industry standards and regulatory requirements, such as HIPAA, is paramount in developing AI solutions. You will be responsible for developing and maintaining comprehensive documentation for AI models, providing technical guidance and mentorship to junior AI engineers, and collaborating with stakeholders to understand and translate business needs into technical requirements. To be successful in this role, you should hold a Bachelor's or Master's degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field. With a minimum of 10 years of industry experience, including at least 5 years of hands-on experience in AI/ML, you should have strong proficiency in Python and familiarity with popular AI/ML frameworks such as TensorFlow, PyTorch, and Hugging Face Transformers. Experience in deploying and managing AI models in production environments, expertise in serving and inference frameworks, and practical experience in Voice AI applications are desirable skills. Additionally, familiarity with healthcare data standards, regulatory compliance, cloud computing platforms, and containerization technologies will be beneficial in this role. If you have experience with federated learning, privacy-preserving AI techniques, synthetic data generation for healthcare model training, and knowledge of the healthcare domain, it would be considered a plus. Your ability to evaluate and select GenAI models based on performance, cost, and compliance factors will be highly valued. Guidehouse offers a comprehensive total rewards package including competitive compensation and flexible benefits. We are an Equal Opportunity Employer and encourage individuals with diverse backgrounds to apply. If you require accommodation during the application process, please contact Guidehouse Recruiting for assistance. Join us at Guidehouse and be a part of a team that values innovation, collaboration, and diversity in a supportive workplace environment.
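Editor's note: the fine-tuning work described in listings like the one above typically starts from an off-the-shelf foundation model. The sketch below is a minimal, illustrative Hugging Face Transformers fine-tuning loop for text classification and is not taken from the posting; the base model, public demo dataset, and hyperparameters are placeholder assumptions, and a real healthcare workload would add de-identification, evaluation, and compliance controls.

```python
# Minimal fine-tuning sketch using Hugging Face Transformers (illustrative only).
# Model name, dataset, and hyperparameters are placeholder assumptions.
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

MODEL_NAME = "distilbert-base-uncased"  # assumed base model, not from the posting

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Public demo dataset used as a stand-in for curated domain data.
dataset = load_dataset("imdb", split="train[:1%]")

def tokenize(batch):
    # Truncate/pad so every example fits the model's input size.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=256)

tokenized = dataset.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="./finetune-demo",
    per_device_train_batch_size=8,
    num_train_epochs=1,
    logging_steps=50,
)

trainer = Trainer(model=model, args=args, train_dataset=tokenized)
trainer.train()
trainer.save_model("./finetune-demo/final")
```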
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Lead/Engineer DevOps at Wabtec Corporation, you will play a crucial role in performing CI/CD and automation design/validation activities. Working under the project responsibility of the Technical Project Manager and the technical responsibility of the software architect, you will be a key member of the WITEC team in Bengaluru. Your main responsibilities will include respecting internal processes, adhering to coding rules, writing documentation in alignment with the implementation, and meeting Quality, Cost, and Time objectives set by the Technical Project Manager. To be successful in this role, you should hold a Bachelor's or Master's degree in Engineering in Computer Science, IT, or a related field, with a web option. Additionally, you should have 6 to 10 years of hands-on experience as a DevOps Engineer. The ideal candidate will have a good understanding of Linux systems and networking, proficiency in CI/CD tools like GitLab, knowledge of containerization technologies such as Docker, and experience with scripting languages like Bash and Python. Hands-on experience in setting up CI/CD pipelines, configuring virtual machines, and familiarity with C/C++ build tools like CMake and Conan are essential. Moreover, expertise in setting up pipelines in GitLab for build, unit testing, and static analysis, along with knowledge of infrastructure-as-code tools like Terraform or Ansible, will be advantageous. Experience with monitoring and logging tools such as the ELK Stack or Prometheus/Grafana is desirable. As a DevOps Engineer, you should possess strong problem-solving skills and the ability to troubleshoot production issues effectively. A passion for continuous learning, staying updated with modern technologies and trends in the DevOps field, and proficiency in project management and workflow tools like Jira, SPIRA, Teams Planner, and Polarion are key attributes for this role. In addition to technical skills, soft skills such as good communication in English, autonomy, interpersonal skills, synthesis skills, and the ability to work well in a team while managing multiple tasks efficiently are highly valued in this position. At Wabtec, we are committed to embracing diversity and inclusion, not just in our products but also in our people. We celebrate the variety of experiences, expertise, and backgrounds that our employees bring, creating an environment where everyone belongs and diversity is welcomed and appreciated. Join us in our mission to drive progress, unlock our customers' potential, and deliver innovative transportation solutions that move and improve the world.
Posted 2 days ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
As a Full Stack Developer with expertise in Python and Angular, your role at Capco, a global technology and management consulting firm, will involve designing and developing scalable and maintainable applications. With a focus on backend development using Python or Java, you will contribute to the delivery of disruptive work that is transforming the energy and financial services sectors. Your responsibilities will include hands-on experience in Angular development for creating modern, responsive web applications. You will leverage your knowledge of microservices architecture to design, develop, and deploy microservices-based architectures. Additionally, your understanding of DevOps practices, CI/CD pipelines, and tools will be crucial for automating deployment and operations. Your problem-solving skills will be put to the test as you investigate, analyze, and efficiently resolve complex technical issues. You should have a strong aptitude for learning and applying new technologies in a fast-paced environment. Experience with cloud platforms like GCP, Azure, or AWS, as well as container technologies such as Kubernetes and Docker, will be beneficial for application deployment and orchestration. To excel in this role, you should hold a Bachelor's degree in an IT-related discipline and have 5 to 8 years of seniority in software engineering. Strong computer literacy, along with readiness for multidisciplinary training, is essential. You should possess a systematic problem-solving approach, excellent communication skills, and a sense of ownership and drive. Familiarity with data integration platforms like Dataiku or industrial data platforms like Cognite would be advantageous. As part of the Capco team, you will foster and maintain excellent relationships with internal stakeholders, clients, and third parties. Your high degree of initiative, adaptability, and willingness to learn new technologies will be key to your success. Effective communication skills, the ability to work under pressure, and a collaborative mindset will be essential for analyzing and solving problems with attention to the root cause. If you are passionate about making an impact, driving innovation, and advancing your career in a diverse and inclusive environment, then this role at Capco in Pune is the perfect opportunity for you. Join us in transforming businesses and shaping the future of energy and financial services.,
Posted 2 days ago
1.0 - 5.0 years
0 Lacs
Punjab
On-site
ABOUT XENONSTACK XenonStack is the fastest-growing data and AI foundry for agentic systems, enabling people and organizations to gain real-time and intelligent business insights. We are dedicated to building Agentic Systems for AI Agents with Akira.ai, developing the Vision AI Platform with XenonStack.ai, and providing Inference AI Infrastructure for Agentic Systems through Nexastack.ai. THE OPPORTUNITY We are seeking an experienced Associate DevOps Engineer with 1-3 years of experience in implementing and reviewing CI/CD pipelines, cloud deployments, and automation tasks. If you have a strong foundation in cloud technologies, containerization, and DevOps best practices, we would love to have you on our team. JOB ROLES AND RESPONSIBILITIES - Develop and maintain CI/CD pipelines to automate the deployment and testing of applications across multiple cloud platforms (AWS, Azure, GCP). - Assist in deploying applications and services to cloud environments while ensuring optimal configuration and security practices. - Implement monitoring solutions to ensure infrastructure health and performance; troubleshoot issues as they arise in production environments. - Automate repetitive tasks and manage cloud infrastructure using tools like Terraform and CloudFormation and scripting languages (Python, Bash). - Work closely with software engineers to integrate deployment pipelines with application codebases and streamline workflows. - Ensure efficient resource management in the cloud, monitor costs, and optimize usage to reduce waste. - Create detailed documentation for DevOps processes, deployment procedures, and troubleshooting steps to ensure clarity and consistency across the team. SKILLS REQUIREMENTS - 1-3 years of experience in DevOps or cloud infrastructure engineering. - Proficiency in cloud platforms like AWS, Azure, or GCP and hands-on experience with their core services (EC2, S3, RDS, Lambda, etc.). - Advanced knowledge of CI/CD tools such as Jenkins, GitLab CI, or CircleCI, and hands-on experience implementing and managing CI/CD pipelines. - Experience with containerization technologies like Docker and Kubernetes for deploying applications at scale. - Strong knowledge of Infrastructure-as-Code (IaC) using tools like Terraform or CloudFormation. - Proficiency in scripting languages such as Python and Bash for automating infrastructure tasks and deployments. - Understanding of monitoring and logging tools like Prometheus, Grafana, the ELK Stack, or CloudWatch to ensure system performance and uptime. - Strong understanding of Linux-based operating systems and cloud-based infrastructure management. - Bachelor's degree in Computer Science, Information Technology, or a related field. - 1-3 years of hands-on experience working in a DevOps or cloud engineering role. CAREER GROWTH AND BENEFITS Continuous Learning & Growth: Access to training, certifications, and hands-on sessions to enhance your DevOps and cloud engineering skills, plus opportunities for career advancement and leadership roles in DevOps engineering. Recognition & Rewards: Performance-based incentives and regular feedback to help you grow in your career, and special recognition for contributions towards streamlining and improving DevOps practices. Work Benefits & Well-Being: Comprehensive health insurance and wellness programs to ensure a healthy work-life balance, cab facilities for women employees, and additional allowances for project-based tasks.
XENONSTACK CULTURE - JOIN US & MAKE AN IMPACT Here at XenonStack, we have a culture of cultivation with bold, courageous, and human-centric leadership principles. We value obsession and deep work in everything we do. We are on a mission to disrupt and reshape the category and welcome people with that mindset and ambition. If you are energised by the idea of shaping the future of AI in business processes and enterprise systems, there's nowhere better for you than XenonStack. Product Value and Outcome - Simplifying the user experience with AI Agents and Agentic AI - Obsessed with Adoption: We design everything with the goal of making AI more accessible and simplifying the business processes and enterprise systems essential to adoption. - Obsessed with Simplicity: We simplify even the most complex challenges to create seamless, intuitive experiences with AI agents and Agentic AI. Be a part of XenonStack's vision and mission of accelerating the world's transition to AI + Human Intelligence.
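Editor's note: as a rough illustration of the routine automation an Associate DevOps role like this usually covers, the sketch below polls service health endpoints and exits non-zero on failure so a CI job or cron wrapper can raise an alert. It is not from the posting; the endpoint URLs are placeholders, and a production setup would pull them from configuration or IaC outputs and feed a monitoring stack such as Prometheus or CloudWatch.

```python
# Illustrative health-check automation sketch; endpoint URLs are placeholders.
import json
import urllib.error
import urllib.request

# Hypothetical service endpoints; a real setup would load these from config/IaC outputs.
ENDPOINTS = {
    "api": "https://example.internal/healthz",
    "worker": "https://example.internal/worker/healthz",
}

def check(name: str, url: str, timeout: float = 5.0) -> dict:
    """Return a small status record for one endpoint."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return {"service": name, "ok": resp.status == 200, "status": resp.status}
    except (urllib.error.URLError, TimeoutError) as exc:
        return {"service": name, "ok": False, "error": str(exc)}

def main() -> int:
    results = [check(name, url) for name, url in ENDPOINTS.items()]
    print(json.dumps(results, indent=2))
    # Non-zero exit code lets a CI job or cron wrapper flag the failure.
    return 0 if all(r["ok"] for r in results) else 1

if __name__ == "__main__":
    raise SystemExit(main())
```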
Posted 3 days ago
8.0 - 12.0 years
0 Lacs
Haryana
On-site
You will have the opportunity to join Team Amex, a global and diverse community aiming to support customers, communities, and colleagues. American Express values recognition of contributions and leadership, ensuring that every colleague can share in the company's success. As part of Team Amex, you will work collaboratively to provide the world's best customer experience every day with integrity and inclusivity. As a Site Reliability Engineering (SRE) Leader at American Express, your key responsibilities will include developing and implementing a comprehensive SRE strategy aligned with company goals. You will lead a team of SRE professionals to enhance the reliability, performance, and scalability of GRC technology solutions. Establishing observability practices for real-time insights, monitoring system performance, and driving continuous improvement initiatives will be crucial aspects of your role. You will collaborate with cross-functional teams to enhance customer journeys through seamless technology experiences and promote reliability engineering best practices within the technology landscape. Automation initiatives to streamline operational workflows, deployment processes, and incident response tasks will be championed to improve reliability and reduce manual interventions. To qualify for this role, you should have 8+ years of experience in Computer Science, Information Technology, or related fields. Advanced certifications in SRE or related areas are beneficial. Deep understanding of observability tools, methodologies, and strong leadership and people management skills are essential for success in this position. Preferred skills for this role include hands-on coding and system design experience, familiarity with modern observability stacks and cloud-based SRE practices. Furthermore, expertise in driving culture change, DevOps practices, and continuous improvement in SRE and production support functions will be advantageous. Join the innovative team at American Express to lead advancements in Site Reliability Engineering and production support in the Global Risk and Compliance Technology sector. If you are passionate about driving reliability, observability, and excellence in customer experiences, apply now to redefine the future of risk and compliance technology. American Express offers competitive base salaries, bonus incentives, and comprehensive benefits to support your holistic well-being and career development opportunities. Take the opportunity to shape the reliability and performance of GRC solutions for a secure and compliant world with American Express. Apply now and be a part of a team that prioritizes the well-being of its colleagues and their loved ones through various benefits and programs.,
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As an AI/ML Manager at a high-end start-up AI Strategy and Consulting company in Mumbai/Bengaluru, you will lead and mentor a team of AI/ML consultants. Your main responsibilities will include driving the design, development, and implementation of AI copilots to enhance organizational efficiency. You will be involved in data pre-processing, feature engineering, model training, and optimizing machine learning algorithms. Furthermore, you will integrate AI copilots into existing systems, analyze data to provide insights, and collaborate with cross-functional teams to address business requirements. Working closely with clients, you will tailor solutions to their specific needs, offer technical guidance, and support the project lifecycle. Your role will also involve driving continuous improvement initiatives, identifying areas for optimization and innovation to maximize client value. To excel in this role, you should have at least 4 years of experience in consulting, project management, or related roles, with a focus on healthcare and/or CPG industries. Experience in natural language processing (NLP) and/or computer vision (CV) would be advantageous. Additionally, knowledge of containerization technologies, microservices architecture, and DevOps practices for continuous integration and deployment is preferred. You are expected to have a deep understanding of data analytics, data strategy, and digital transformation methodologies, with practical experience in implementing data-driven solutions. Familiarity with AI technologies, machine learning algorithms, and NLP techniques is essential. If you have contributed to open-source projects or publications in relevant conferences/journals, it would be a plus. As part of the selection process, you will undergo 2 technical assessments followed by an HR discussion. Your expertise, best practices, and mentoring skills will contribute to the growth and development of the organization. Join us in reimagining workflows, modernizing data infrastructure, and delivering tailor-made AI solutions to our clients.,
Posted 3 days ago
7.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As a Principal Site Reliability Engineer at UKG, you play a crucial role in enhancing and supporting service delivery processes through developing software solutions. Your responsibilities include building and managing CI/CD deployment pipelines, automated testing, capacity planning, performance analysis, monitoring, alerting, chaos engineering, and auto remediation. It is essential to stay updated with current technology trends, innovate, and ensure a flawless customer experience by deploying services efficiently and consistently. Your primary duties involve engaging in the lifecycle of services, defining and implementing standards, supporting product and engineering teams, improving system performance, collaborating with professionals, guiding junior team members, and actively participating in incident response. You will also contribute to increasing operational efficiency, effectiveness, and service quality by treating operational challenges as software engineering problems. To be successful in this role, you should have an engineering degree or a degree in a related technical discipline, along with at least 7 years of hands-on experience in engineering or cloud roles. Additionally, a minimum of 5 years' experience with public cloud platforms, 3 years' experience in configuration and maintenance of applications/systems infrastructure, and knowledge of coding in higher-level languages are required. Familiarity with cloud-based applications, containerization technologies, industry-standard tools like Terraform and Ansible, and fundamentals in computer science, cloud architecture, security, or network design is also essential. UKG is a dynamic organization at the forefront of workforce management and human capital management solutions. As we continue to grow and innovate, we are committed to fostering diversity and promoting inclusion in the workplace. If you require any disability accommodations during the application and interview process, please reach out to UKGCareers@ukg.com. Join us on this exciting journey towards a brighter tomorrow!
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Python + Angular Developer, you will be responsible for developing scalable and maintainable applications using Python and Angular technologies. With a strong background in backend development, you will design, deploy, and maintain microservices-based architectures while ensuring adherence to DevOps practices. Your key responsibilities will include: - Demonstrating expertise in Python and/or Java with a focus on building scalable applications. - Utilizing your hands-on experience in Angular to develop modern, responsive web applications. - Designing and deploying microservices architectures while understanding CI/CD pipelines and automation tools. - Investigating and resolving complex technical issues efficiently. - Demonstrating adaptability by learning and applying new technologies in a fast-paced environment. - Utilizing your knowledge of cloud platforms such as GCP, Azure, or AWS for application development. - Working with container technologies like Kubernetes and Docker for application deployment and orchestration. To qualify for this role, you should possess: - A Bachelor's degree in an IT-related discipline. - Strong computer literacy and a willingness to undergo multidisciplinary training. - 5-8 years of seniority in a similar role with a hands-on approach. - Strong software engineering skills and interest in troubleshooting large-scale distributed systems. - Excellent problem-solving abilities and communication skills along with a sense of ownership and drive. - Ability to debug and optimize code, automate routine tasks, and familiarity with data integration platforms like Dataiku or industrial data platforms like Cognite. In addition, you should exhibit the following behaviors: - Foster and maintain positive relationships with internal, client, and third-party stakeholders. - Show initiative, adaptability, and a willingness to learn new technologies. - Work effectively under pressure, communicate well, and practice effective listening techniques. - Work independently or as part of a team while effectively analyzing and solving problems with attention to detail.,
Posted 4 days ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Lead Cloud Engineer at our organization, you will be responsible for designing and building cloud-based distributed systems to address complex business challenges for some of the world's largest companies. Leveraging your expertise in software engineering, cloud engineering, and DevOps, you will craft technology stacks and platform components that empower cross-functional AI Engineering teams to develop robust, observable, and scalable solutions. Working as part of a diverse and globally distributed engineering team, you will actively engage in the complete engineering life cycle, encompassing the design, development, optimization, and deployment of solutions and infrastructure at a scale that matches the world's leading companies. Your core responsibilities will include: - Architecting cloud solutions and distributed systems for full-stack AI software and data solutions - Implementing, testing, and managing Infrastructure as Code (IaC) for cloud-based solutions, covering areas such as CI/CD, data integrations, APIs, web and mobile apps, and AI solutions - Defining and implementing scalable, observable, manageable, and self-healing cloud-based solutions across AWS, Google Cloud, and Azure - Collaborating with diverse teams, including product managers, data scientists, and other engineers, to deliver analytics and AI features that align with business requirements and user needs - Utilizing Kubernetes and containerization technologies to deploy, manage, and scale analytics applications in the cloud, ensuring optimal performance and availability - Developing and maintaining APIs and microservices to expose analytics functionality to internal and external consumers while adhering to best practices for API design and documentation - Implementing robust security measures to safeguard sensitive data and ensure compliance with data privacy regulations and organizational policies - Monitoring and troubleshooting application performance continuously to identify and resolve issues affecting system reliability, latency, and user experience - Participating in code reviews and contributing to the establishment and enforcement of coding standards and best practices to uphold the quality and maintainability of the codebase - Staying abreast of emerging trends and technologies in cloud computing, data analytics, and software engineering to identify opportunities for enhancing the analytics platform's capabilities - Collaborating closely with business consulting staff and leaders to assess opportunities and develop analytics solutions for clients across various sectors To excel in this role, you should possess the following qualifications: - A Master's degree in Computer Science, Engineering, or a related technical field - At least 6 years of experience, with a minimum of 3 years at the Staff level or equivalent - Proven experience as a cloud engineer and software engineer in product engineering or professional services organizations - Experience in designing and delivering cloud-based distributed solutions, with certifications in GCP, AWS, or Azure considered advantageous - Proficiency in building infrastructure as code using tools such as Terraform (preferred), CloudFormation, Pulumi, AWS CDK, or CDKTF - Familiarity with software development lifecycle nuances - Experience with configuration management tools like Ansible, Salt, Puppet, or Chef - Proficiency in monitoring and analytics platforms such as Grafana, Prometheus, Splunk, Sumo Logic, New Relic, Datadog, CloudWatch, or Nagios/Icinga - Expertise in CI/CD deployment pipelines (e.g., GitHub Actions, Jenkins, Travis CI, GitLab CI, CircleCI) - Hands-on experience in building backend APIs, services, and integrations using Python - Practical experience with Kubernetes through services like GKE, EKS, or AKS considered a plus - Ability to collaborate effectively with internal and client teams and stakeholders - Proficiency in using Git for versioning and collaboration - Exposure to LLMs, prompt engineering, and LangChain considered advantageous - Experience with workflow orchestration tools such as dbt, Beam, Airflow, Luigi, Metaflow, Kubeflow, or others - Proficiency in implementing large-scale structured or unstructured databases, orchestration, and container technologies like Docker or Kubernetes - Strong interpersonal and communication skills to articulate and discuss complex engineering concepts with colleagues and clients from diverse disciplines - Curiosity, proactivity, and critical thinking in problem-solving - A solid foundation in computer science principles related to data structures, algorithms, automated testing, object-oriented programming, performance complexity, and the impact of computer architecture on software performance - Knowledge of designing API interfaces and data architecture, database schema design, and database scalability - Familiarity with Agile development methodologies If you are seeking a dynamic and challenging opportunity to contribute to cutting-edge projects and collaborate with a diverse team of experts, we invite you to join us at Bain & Company. As a global consultancy dedicated to partnering with change makers worldwide, we are committed to achieving extraordinary results, outperforming the competition, and reshaping industries. With a focus on delivering tailored, integrated solutions and leveraging a network of digital innovators, we strive to drive superior outcomes that endure. Our ongoing investment in pro bono services underscores our dedication to supporting organizations addressing pressing issues in education, racial equity, social justice, economic development, and the environment. Recognized with a platinum rating from EcoVadis, we are positioned in the top 1% of all companies for our environmental, social, and ethical performance. Since our inception in 1973, we have measured our success by the success of our clients, and we maintain the highest level of client advocacy in the industry.
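Editor's note: as a loose illustration of the Kubernetes observability side of a role like the one above (not part of the listing), the sketch below uses the official Kubernetes Python client to list pods that are not in a Running or Succeeded phase. Cluster access is assumed to come from a local kubeconfig, and the namespace handling is a placeholder.

```python
# Illustrative sketch: flag pods that are not Running/Succeeded.
# Assumes `pip install kubernetes` and a cluster reachable via local kubeconfig.
from kubernetes import client, config

HEALTHY_PHASES = {"Running", "Succeeded"}

def unhealthy_pods(namespace: str | None = None) -> list[tuple[str, str, str]]:
    """Return (namespace, pod, phase) for pods outside the healthy phases."""
    config.load_kube_config()  # in-cluster code would use config.load_incluster_config()
    v1 = client.CoreV1Api()
    if namespace:
        pods = v1.list_namespaced_pod(namespace).items
    else:
        pods = v1.list_pod_for_all_namespaces().items
    return [
        (p.metadata.namespace, p.metadata.name, p.status.phase)
        for p in pods
        if p.status.phase not in HEALTHY_PHASES
    ]

if __name__ == "__main__":
    for ns, name, phase in unhealthy_pods():
        print(f"{ns}/{name}: {phase}")
```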
Posted 4 days ago
3.0 - 7.0 years
0 Lacs
Udaipur, Rajasthan
On-site
We are looking for a talented and experienced DevOps Engineer to join our dynamic IT services team. As a DevOps Engineer, you will be responsible for automating and streamlining our software development and deployment processes using AWS cloud technology. Your role will involve collaborating with development, testing, and operations teams to ensure efficient and reliable delivery of our products and services. Your responsibilities will include: - Infrastructure Automation: Building and maintaining automated infrastructure provisioning and management using AWS services such as EC2, S3, VPC, CloudFormation, and Terraform. - Continuous Integration/Continuous Delivery (CI/CD): Implementing and managing CI/CD pipelines with tools like Jenkins, GitLab CI/CD, or AWS CodePipeline to automate the build, test, and deployment process. - Configuration Management: Implementing and maintaining configuration management solutions like Ansible, Chef, or Puppet to ensure consistent and reproducible environments. - Monitoring and Logging: Setting up and maintaining monitoring and logging tools like CloudWatch, Datadog, and New Relic to track application and infrastructure performance. - Troubleshooting and Support: Diagnosing and resolving issues related to infrastructure, application deployment, and performance. - Collaboration: Working closely with development, testing, and operations teams to ensure smooth collaboration and efficient delivery. - Security: Implementing security best practices and adhering to compliance requirements. Qualifications: - Bachelor's degree in Computer Science, Engineering, or a related field. - 3+ years of experience in DevOps or a related field. - Strong proficiency in AWS services (EC2, S3, VPC, CloudFormation, Lambda, etc.). - Experience with CI/CD pipelines and tools (Jenkins, GitLab CI/CD, AWS CodePipeline). - Experience with configuration management tools (Ansible, Chef, Puppet). - Knowledge of scripting languages (Python, Bash). - Understanding of networking, security, and virtualization concepts. - Experience with containerization technologies (Docker, Kubernetes). - Strong problem-solving and troubleshooting skills. - Excellent communication and collaboration skills. Preferred Qualifications: - AWS Certified DevOps Engineer - Professional. - Experience with cloud-native applications and microservices architecture. - Knowledge of DevOps methodologies and practices. - Experience with serverless computing and AWS Lambda. - Experience with monitoring and logging tools (CloudWatch, Datadog, New Relic). If you are a passionate DevOps Engineer with a strong understanding of AWS and a commitment to delivering high-quality software, we encourage you to apply. This is a full-time position with benefits including Provident Fund. The work schedule is Monday to Friday, and the work location is in person.,
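Editor's note: to make the infrastructure-automation and security bullets above more concrete, here is a small illustrative boto3 sketch (not from the posting) that reports running EC2 instances missing a required Owner tag, a common cost-allocation and compliance hygiene check. The tag key, default region, and credential setup are assumptions.

```python
# Illustrative boto3 sketch: report EC2 instances missing a required tag.
# Region, tag key, and credential setup are assumptions, not from the posting.
import boto3

REQUIRED_TAG = "Owner"

def instances_missing_tag(region: str = "ap-south-1") -> list[str]:
    """Return IDs of running instances that lack the required tag."""
    ec2 = boto3.client("ec2", region_name=region)
    paginator = ec2.get_paginator("describe_instances")
    missing = []
    for page in paginator.paginate(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    ):
        for reservation in page["Reservations"]:
            for instance in reservation["Instances"]:
                tags = {t["Key"] for t in instance.get("Tags", [])}
                if REQUIRED_TAG not in tags:
                    missing.append(instance["InstanceId"])
    return missing

if __name__ == "__main__":
    for instance_id in instances_missing_tag():
        print(f"{instance_id} is missing the {REQUIRED_TAG} tag")
```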
Posted 4 days ago
12.0 - 16.0 years
0 Lacs
Pune, Maharashtra
On-site
The Senior Backend Subject Matter Expert (SME) position at Deutsche Bank in Pune/Bangalore is a key role in the ambitious sustainability initiatives undertaken by the bank. The bank aims to invest in developing a Sustainability Technology Platform, Sustainability data products, and various sustainability applications to support its goals. As part of this global initiative, the bank is assembling a team of passionate technologists who are dedicated to addressing Climate Change challenges through technology solutions, particularly in Cloud/Hybrid Architecture. As a Senior Full Stack SME, you will play a crucial role in providing technical leadership and guidance to ensure project success in JAVA Backend development. Your responsibilities will include designing APIs for an API-first platform, working with hybrid cloud architecture, and providing solutions that leverage Google Cloud Platform (GCP). You will also be involved in code reviews, mentoring teams, and staying updated with emerging trends in GCP services and application development frameworks. In addition to technical responsibilities, you will collaborate with business stakeholders to translate requirements into technical solutions, develop a technology roadmap aligning with business goals, and analyze the feasibility of migrating on-premise systems to GCP. Mentorship, collaboration, problem-solving, and innovation are key aspects of the role, along with troubleshooting technical issues, recommending improvements, and exploring innovative solutions using GCP services. To qualify for this position, you should have at least 12 years of experience in full-stack software development, with a proven track record of successful software projects using GCP and on-premise environments. Proficiency in containerization technologies, excellent communication skills, and the ability to think strategically are essential. Knowledge of Microservices, Spring Cloud, Spring Security, Concurrency, Event Driven Architecture, and other specified technologies/frameworks is required. Experience in Sustainable Finance/ESG Risk/CSRD/Regulatory Reporting, infrastructure automation, and DevOps principles on GCP will be advantageous. Deutsche Bank offers a range of benefits including best-in-class leave policy, parental leaves, education sponsorships, insurance coverage, and health screening. Training, coaching, and a culture of continuous learning are provided to support your career growth and development. If you are a passionate technologist with a strong background in full-stack development and a desire to contribute to sustainable finance initiatives, this role offers an exciting opportunity to make a difference and drive innovation within a global team at Deutsche Bank.,
Posted 4 days ago
8.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
This position is pivotal in identifying and incubating cutting-edge AI technologies that align with the company's strategic goals, enhancing its capabilities in data-driven decision-making, and is crucial in defining and promoting best practices in AI model development and deployment. As an AI Engineer, you will ensure seamless integration of innovative AI solutions into existing frameworks, making sure they are scalable, reliable, and tailored to meet the unique demands of the pharmaceutical industry. Your contributions will be instrumental in advancing healthcare through technology, ultimately improving patient outcomes and driving business success. You will be responsible for understanding complex and critical business problems, formulating an integrated analytical approach to mine data sources, employing statistical methods and machine learning algorithms to contribute to solving unmet medical needs, discovering actionable insights, and automating processes to reduce the effort and time required for repeated use. Your role will involve architecting and developing end-to-end AI/ML and Gen AI solutions, focusing on scalability, performance, and modularity while ensuring alignment with enterprise architecture standards and best practices. Additionally, you will manage the implementation of and adherence to the overall data lifecycle of enterprise data, from data acquisition or creation through enrichment, consumption, retention, and retirement. This will enable the availability of useful, clean, and accurate data throughout its lifecycle. Your high agility will allow you to work across various business domains, integrating business presentations, smart visualization tools, and contextual storytelling to translate findings back to business users with clear impact. You will also independently manage the budget, ensuring appropriate staffing and coordinating projects within the area while collaborating with globally dispersed internal stakeholders and cross-functional teams to solve critical business problems and deliver successfully on high-visibility strategic initiatives. The ideal candidate for this role will have an advanced degree in Computer Science, Engineering, or a related field (PhD preferred) and at least 8 years of experience in AI/ML engineering, including a minimum of 2 years focused on designing and deploying LLM-based solutions. Strong proficiency in building AI/ML architectures and deploying models at scale, experience with cloud computing platforms such as AWS, Google Cloud, or Azure, deep knowledge of LLMs, containerization technologies (Docker, Kubernetes), CI/CD pipelines, and hands-on experience with cloud platforms and MLOps tools for scalable deployment are essential requirements. Additionally, experience with API development, integration, and model deployment pipelines, strong problem-solving skills, a proactive approach to challenges, effective communication in cross-functional teams, and excellent organizational skills with attention to detail in managing complex systems are necessary for this role. Novartis is committed to building an outstanding, inclusive work environment and diverse teams representative of the patients and communities served. If you require accommodations due to a medical condition or disability, please reach out to diversityandincl.india@novartis.com. This role at Novartis offers the opportunity to be part of a community of smart, passionate individuals dedicated to creating breakthroughs that change patients' lives.
If you are ready to contribute to a brighter future, visit https://www.novartis.com/about/strategy/people-and-culture to learn more about Novartis and its commitment to diversity and inclusion.
Posted 4 days ago
10.0 - 14.0 years
0 Lacs
Karnataka
On-site
As the leader of the devops and system administration team at Kensaltensi powered by Alkimi, you will play a crucial role in ensuring the efficient and reliable operation of our technical infrastructure. Your responsibilities will include providing architectural guidance, managing security vulnerabilities, building and leading a team of experts, and overseeing the deployment and maintenance of cloud infrastructure. You will be accountable for the uptime and reliability of all production infrastructure, as well as ensuring cost-effectiveness and optimal configurations. Your role will involve working closely with engineering teams to deliver platform services that are robust, secure, and scalable. You will be expected to have a deep understanding and experience in software development, infrastructure as code (IaC), system administration, and continuous integration. Proficiency in CI/CD tools, relational and NO-SQL databases, containerization technologies, and cloud platforms such as AWS, Digital Ocean, Azure, or GCP is essential. Additionally, strong communication and collaboration skills are key to effectively working across teams and providing troubleshooting support for production issues. In this role, you will have the opportunity to shape the technical architecture of our products, ensure business continuity through disaster recovery planning, and maintain the security and protection of the company's data and databases. Your expertise and leadership will be instrumental in creating a high-performing DevOps department that delivers services with high reliability and quick turn-around time. If you are passionate about infrastructure management, security, and innovation, and possess the necessary skills and experience, we invite you to join our team and contribute to our mission of restoring the value exchange in the advertising industry.,
Posted 4 days ago
1.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Senior .NET/GCP Engineer at NTT DATA, you will be part of a team dedicated to delivering high-quality software solutions for our clients. Your primary focus will be on .NET C# development, where you will play a crucial role in the full systems life cycle management process, from analyzing technical requirements to designing, coding, testing, and implementing software applications. Joining our Microsoft practice not only offers you a job but also provides you with ample opportunities for career growth. We are committed to equipping you with the necessary skills to develop robust applications that meet the highest standards. Whether it involves training in a new programming language or obtaining certifications in cutting-edge technologies, we will support your professional development to ensure you consistently deliver exceptional work. In this role, you will collaborate with cross-functional teams, contribute to component and data architecture design, engage in technology planning, and participate in testing activities for Applications Development (AD) initiatives. Your insights and expertise will inform applications development project plans and integrations, as well as support the adoption of emerging technologies to enhance communication and achieve project objectives effectively. Basic Qualifications: - 6+ years of experience in .NET/.NET Core development - Proficiency in Object-Oriented Programming and SOLID Principles with at least 3 years of experience - Hands-on experience in REST API development for a minimum of 3 years - 2+ years of practical exposure to Google Cloud Platform (GCP), including Pub/Sub, Cloud Functions, etc. - Familiarity with database operations and writing stored procedures for at least 2 years - Experience in unit and service testing using frameworks like xUnit, NUnit, etc. for a minimum of 2 years - 1+ year of experience working on cloud platforms such as AWS, Azure, or GCP Preferred Qualifications: - Proficiency with CI/CD tooling such as Jenkins, Azure DevOps, etc. - Knowledge of containerization technologies like Docker and Kubernetes Ideal Mindset: - Lifelong Learner: Continuously seeking opportunities to enhance both technical and non-technical skills - Team Player: Committed to supporting team success and willing to assist teammates when needed - Communicator: Capable of effectively conveying design concepts to technical and non-technical stakeholders, prioritizing essential information while eliminating unnecessary details Please note the shift timing requirement: 1:30 pm IST - 10:30 pm IST. Join us at NTT DATA, a trusted global innovator with a mission to help clients innovate, optimize, and transform for long-term success. As a Global Top Employer, we offer a diverse pool of experts across 50+ countries and a strong partner ecosystem. Our services span business and technology consulting, data and artificial intelligence, industry-specific solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. With a significant focus on digital and AI infrastructure, we are at the forefront of driving organizations confidently into the digital future. NTT DATA is part of the NTT Group, investing over $3.6 billion annually in R&D to support sustainable digital transformation for organizations and society. Explore more at us.nttdata.com.
Posted 4 days ago
1.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
NTT DATA is looking for an exceptional Senior .NET Integration Engineer to join their team in Chennai, Tamil Nadu, India. As a Senior Integration Engineer, you will primarily focus on .NET C# development and work with a team dedicated to delivering high-quality software for clients. In this role, you will be involved in the full systems life cycle management activities, including analyzing technical requirements, design, coding, testing, and implementing systems and applications software. Additionally, you will participate in component and data architecture design, technology planning, and testing for Applications Development (AD) initiatives to meet business requirements. Collaboration with teams and support for emerging technologies will be essential to ensure effective communication and the achievement of objectives. Basic qualifications for this position include at least 6 years of experience developing in .NET/.NET Core, 3+ years of experience with Object-Oriented Programming and SOLID Principles, 3+ years of REST API development, 2+ years of experience working with databases and writing stored procedures, 2+ years of unit and service testing with frameworks such as xUnit, NUnit, etc., and 1+ year of cloud platform experience in AWS, Azure, or GCP. Preferred qualifications include experience with CI/CD tooling such as Jenkins, Azure DevOps, etc., experience with containerization technologies like Docker and Kubernetes, and GCP experience. The ideal candidate for this role is a lifelong learner who is constantly seeking to improve technical and non-technical skills, a team player who is willing to go the extra mile to help teammates succeed, and a strong communicator capable of effectively conveying design ideas to both technical and non-technical stakeholders. Please note that the shift timing requirement for this position is from 1:30 pm IST to 10:30 pm IST. If you are looking for an opportunity to grow your career and work with cutting-edge technologies, apply now to join the NTT DATA team in Chennai, Tamil Nadu, India.
Posted 4 days ago
2.0 - 6.0 years
0 Lacs
Ghaziabad, Uttar Pradesh
On-site
As a skilled Python Backend Engineer at Cognio Labs, you will be responsible for leveraging your expertise in FastAPI and your strong foundation in Large Language Models (LLMs) and Retrieval-Augmented Generation (RAG) technologies. Your role will involve a blend of backend development and data science to facilitate data processing for model fine-tuning and training. You should have a minimum of 2 years of experience in Python backend development and possess the ability to develop and maintain APIs using the FastAPI framework. Proficiency in asynchronous programming, background task implementation, and database management using both SQL and NoSQL databases, especially MongoDB, are essential. Additionally, familiarity with Git version control systems and RESTful API design and implementation is required. Experience with containerization technologies like Docker, understanding of component-based architecture principles, and the capability to write clean, maintainable, and testable code are valuable additional technical skills. Knowledge of testing frameworks, quality assurance practices, and AI technologies such as LangChain, ChatGPT endpoints, and other LLM frameworks will be advantageous. In the realm of AI and Data Science, your experience with LLMs and RAG implementation will be highly valued. You should be adept at data processing for fine-tuning language models, manipulating and analyzing data using Python libraries such as Pandas and NumPy, and implementing machine learning workflows efficiently. Your key responsibilities will include designing, developing, and maintaining robust, scalable APIs using the FastAPI framework, preparing data for model fine-tuning and training, implementing background tasks and asynchronous processing for system optimization, integrating LLM and RAG-based solutions into the product ecosystem, and following industry best practices to write efficient, maintainable code. Collaboration with team members, database design and implementation, troubleshooting and debugging codebase issues, as well as staying updated on emerging technologies in Python development, LLMs, and data science will be integral parts of your role at Cognio Labs.,
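Editor's note: since the role above centers on FastAPI, asynchronous endpoints, and background tasks, here is a minimal illustrative sketch of that pattern. The route name, data model, and processing step are invented for illustration and are not taken from Cognio Labs' codebase.

```python
# Minimal FastAPI sketch: async endpoint that queues a background task.
# Route name, data model, and the processing step are illustrative assumptions.
from fastapi import BackgroundTasks, FastAPI
from pydantic import BaseModel

app = FastAPI()

class Document(BaseModel):
    doc_id: str
    text: str

def prepare_for_training(doc_id: str, text: str) -> None:
    # Placeholder for chunking/cleaning text ahead of fine-tuning or RAG indexing.
    print(f"processed {doc_id}: {len(text.split())} whitespace tokens")

@app.post("/documents", status_code=202)
async def ingest(doc: Document, background_tasks: BackgroundTasks) -> dict:
    # Respond immediately; heavier processing runs after the response is sent.
    background_tasks.add_task(prepare_for_training, doc.doc_id, doc.text)
    return {"doc_id": doc.doc_id, "status": "queued"}

# Run locally with: uvicorn main:app --reload  (assuming this file is saved as main.py)
```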
Posted 6 days ago
1.0 - 6.0 years
0 Lacs
chennai, tamil nadu
On-site
NTT DATA is looking to hire a Senior .NET Developer to join their team in Chennai, Tamil Nadu, India. As a Senior Application Developer specializing in .NET C# development, you will be responsible for delivering quality software for clients. Joining the Microsoft practice at NTT DATA is not just a job but an opportunity to grow your career. You will receive training on new programming languages and technologies to enhance your skills and deliver valuable work.

In this role, you will provide input and support for full systems life cycle management activities, including analysis, technical requirements, design, coding, testing, and implementation of systems and applications software. You will participate in component and data architecture design, technology planning, and testing for Applications Development initiatives to meet business requirements. Collaborating with teams and supporting emerging technologies will be essential to effective communication and the achievement of objectives.

Basic qualifications for this position include:
- 6+ years of experience developing in .NET/.NET Core
- 3+ years of experience with Object-Oriented Programming and SOLID principles
- 3+ years of REST API development
- 2+ years of experience working with databases and writing stored procedures
- 2+ years of unit and service testing with frameworks such as xUnit, NUnit, etc.
- 1+ year of cloud platform experience in AWS, Azure, or GCP

Preferred qualifications include experience with CI/CD tooling, containerization technologies, and GCP. The ideal candidate for this role is a lifelong learner, a team player, and an effective communicator.

Please note that the shift timing requirement for this position is from 1:30 pm IST to 10:30 pm IST.

NTT DATA is a global innovator of business and technology services, serving 75% of the Fortune Global 100. They are committed to helping clients innovate, optimize, and transform for long-term success. With diverse experts in more than 50 countries and a robust partner ecosystem, NTT DATA offers business and technology consulting, data and artificial intelligence services, industry solutions, and application, infrastructure, and connectivity management. As a leading provider of digital and AI infrastructure, NTT DATA is part of the NTT Group, which invests over $3.6 billion annually in R&D to support organizations and society in transitioning confidently into the digital future. You can learn more about NTT DATA at us.nttdata.com.
Posted 6 days ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
The SOA Lead Engineer will be responsible for leading the design, development, and implementation of microservices using Golang within the SOA platform. You will be expected to promote a culture of code quality, clean architecture, and test-driven development among team members. Accurate and thorough documentation of processes to contribute to a knowledge repository will be a key aspect of your role, as will collaboration with architects, developers, and stakeholders to define and refine the SOA strategy and roadmap.

Required Skills:
- Strong proficiency in any backend programming language.
- Proficiency in designing and implementing DB schemas.
- Experience in designing, developing, and implementing SOA-based solutions.
- In-depth understanding of API design principles and best practices.
- Exceptional problem-solving and analytical skills.

Preferred Skills:
- Experience with programming languages such as Golang (preferred), PHP, Java, Node.js.
- Familiarity with containerization technologies like Docker and Kubernetes.
- Knowledge of message brokers such as RabbitMQ, Kafka, etc.
- Expertise in cloud platforms like AWS, Azure, or Google Cloud.
- Familiarity with logging and monitoring tools like Prometheus or Grafana.
- Understanding of CI/CD pipelines for SOA platform deployments.

Qualification:
- Full-time B.Tech (CS/IT) only.

Experience:
- 5-7 years of relevant experience.
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
As an AI Architect at our organization, you will play a crucial role in defining and implementing the end-to-end architecture for deploying our machine learning models, including advanced Generative AI and LLM solutions, into production. You will lead and mentor a cross-functional team of Data Scientists, Backend Developers, and DevOps Engineers, fostering a culture of innovation, technical excellence, and operational efficiency.

In terms of architectural leadership, you will design, develop, and own the scalable, secure, and reliable architecture for deploying and serving ML models, with a focus on real-time inference and high availability. You will lead the strategy and implementation of the in-house API wrapper infrastructure, define architectural patterns, best practices, and governance for MLOps, and evaluate and select the optimal technology stack for the ML serving infrastructure.

On team leadership and mentorship, you will lead, mentor, and inspire the diverse team, guiding them through complex architectural decisions and technical challenges. Your goal will be to foster a collaborative environment that encourages knowledge sharing, continuous learning, and innovation across teams while driving technical excellence and adherence to engineering best practices.

Your Generative AI and LLM expertise will be essential as you architect and implement solutions for deploying Large Language Models, drive the adoption of techniques like Retrieval Augmented Generation (RAG), and stay current with the latest advancements in AI to evaluate their applicability to business needs.

You will collaborate closely with Data Scientists, Backend Developers, and DevOps Engineers to integrate models seamlessly into the serving infrastructure, build robust APIs, and ensure operational excellence of the AI infrastructure. Communicating complex technical concepts effectively to both technical and non-technical stakeholders will also be part of your responsibilities.

Qualifications include a Bachelor's or Master's degree in Computer Science, Machine Learning, Data Science, or a related field, 10+ years of software engineering experience, and proven experience leading cross-functional engineering teams. Required technical skills span MLOps principles, Python proficiency, containerization technologies, cloud platforms, Large Language Models, and monitoring tools. Leadership qualities such as exceptional mentorship and team-building abilities, strong analytical and problem-solving skills, excellent communication skills, and a strategic mindset are highly valued in this role. Bonus points will be awarded for experience with specific ML serving frameworks, contributions to open-source projects, and familiarity with data governance and compliance in an AI context.
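As a hedged illustration of the Retrieval Augmented Generation technique this role calls for, the following framework-agnostic sketch shows only the retrieval step: rank a small corpus by cosine similarity and assemble a prompt from the top matches. The embed() placeholder and the sample corpus are assumptions; a production system would use a real embedding model and a vector store.

```python
# Minimal, framework-agnostic sketch of the retrieval step in a RAG pipeline.
# embed() is a stand-in for a real embedding model; the corpus is illustrative.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: a real system would call an embedding model here.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    return rng.normal(size=384)

def retrieve(query: str, corpus: list[str], top_k: int = 2) -> list[str]:
    # Rank documents by cosine similarity between query and document embeddings.
    q = embed(query)
    scores = []
    for doc in corpus:
        d = embed(doc)
        scores.append(float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))))
    ranked = sorted(zip(scores, corpus), reverse=True)
    return [doc for _, doc in ranked[:top_k]]

corpus = ["Claims processing guide", "Prior authorization policy", "Onboarding checklist"]
context = retrieve("How are claims processed?", corpus)
prompt = "Answer using this context:\n" + "\n".join(context) + "\n\nQuestion: How are claims processed?"
print(prompt)
```

The retrieved passages are prepended to the prompt so the LLM answers from the supplied context rather than from its parametric memory alone.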
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be part of the expanding Technology team at ShyftLabs, which is seeking an AI Engineer to develop and implement AI-powered applications at scale. As a pragmatic problem solver with strong backend engineering skills, you will apply AI concepts and frameworks to design, develop, and optimize solutions that address complex business challenges for clients.

ShyftLabs is a leading data and AI company specializing in data platforms, machine learning models, and AI-powered automation. It provides consulting, prototyping, solution delivery, and platform scaling services to Fortune 500 clients, helping them transform their data into actionable insights.

Your responsibilities will include researching, designing, and developing AI-powered applications; integrating AI models into scalable production environments; collaborating with cross-functional teams; optimizing AI implementations for performance and cost efficiency; ensuring AI systems meet quality standards; and staying current with the latest AI tools and frameworks in order to propose innovative solutions. Documenting technical processes and implementations will also be crucial for knowledge sharing within the team.

To qualify for this role, you should have a Bachelor's or Master's degree in Computer Science, Machine Learning, AI, or a related field, a strong software engineering background, and at least 3 years of experience building applications at scale. You must have proven experience implementing and deploying AI solutions in production environments, proficiency in programming languages such as Python, JavaScript, or Java, familiarity with modern AI frameworks and tools, an understanding of software design patterns, API development, and system architecture, knowledge of containerization technologies and cloud platforms for AI development, strong problem-solving skills, and the ability to translate business requirements into technical solutions.

ShyftLabs offers a competitive salary and a comprehensive insurance package, and is committed to employee growth, providing extensive learning and development resources to support professional development.
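One small example of the performance and cost optimization mentioned above is memoizing identical prompts so repeated requests do not trigger repeated inference calls. This is a minimal sketch under that assumption; call_model() is a hypothetical stand-in for whichever model API the project actually uses.

```python
# Minimal sketch: caching identical prompts to control inference cost and latency.
# call_model() is a hypothetical placeholder, not a real client from the posting.
import functools
import time

def call_model(prompt: str) -> str:
    # Placeholder for a real inference call (e.g., an internal model endpoint).
    time.sleep(0.5)  # simulate network + inference latency
    return f"response to: {prompt}"

@functools.lru_cache(maxsize=1024)
def cached_completion(prompt: str) -> str:
    # Repeated identical prompts are served from memory instead of re-invoking the model.
    return call_model(prompt)

start = time.time()
cached_completion("Summarize today's sales report")
cached_completion("Summarize today's sales report")  # cache hit, near-instant
print(f"Total time: {time.time() - start:.2f}s")
```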
Posted 1 week ago
7.0 - 11.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a DevOps Architect at our Coimbatore onsite location, with over 7 years of experience, you will play a crucial role in designing, implementing, and optimizing scalable and reliable DevOps processes for continuous integration, continuous deployment (CI/CD), and infrastructure as code (IaC). You will lead the architecture and implementation of cloud-based infrastructure solutions using AWS, Azure, or GCP, depending on project requirements, and collaborate with software development teams to ensure smooth integration of development, testing, and production environments.

Your role will involve implementing and managing automation, monitoring, and alerting tools across development and production environments, such as Jenkins, GitLab CI, Ansible, Terraform, Docker, and Kubernetes. You will also oversee version control, release pipelines, and deployment processes for various applications, and design and implement infrastructure monitoring solutions to maintain high availability and performance of systems.

A significant part of the role is fostering a culture of continuous improvement by working closely with development and operations teams to enhance automation, testing, and release pipelines. You will ensure that security best practices are followed throughout the development and deployment pipeline, including secret management and vulnerability scanning, lead efforts to address performance bottlenecks, scaling challenges, and infrastructure optimization, and mentor and guide junior engineers in the DevOps space.

To excel in this role, you need a Bachelor's degree in Computer Science, Information Technology, or a related field (or equivalent work experience), along with a minimum of 7 years of experience in DevOps, cloud infrastructure, and automation tools. Proficiency in cloud platforms such as AWS, Azure, and GCP; containerization technologies like Docker and Kubernetes; orchestration tools; automation tools like Jenkins, Ansible, Chef, Puppet, and Terraform; scripting languages (Bash, Python, Go); version control systems (Git, SVN); and monitoring and logging tools is essential. Strong troubleshooting skills, communication and leadership abilities, and an understanding of Agile and Scrum methodologies are also vital.

Preferred qualifications include certifications in DevOps tools, cloud technologies, or Kubernetes; experience with serverless architecture; familiarity with security best practices in a DevOps environment; and knowledge of database management and backup strategies. If you are passionate about your career and possess the required skills and experience, we invite you to join our rapidly growing team. Reach out to us at careers@hashagile.com to explore exciting opportunities with us.
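To make the monitoring and alerting responsibilities concrete, here is a minimal Python sketch (one of the scripting languages the posting lists) that polls health endpoints and flags failures. The endpoint URLs and the print-based alert are illustrative assumptions; a real setup would integrate with the team's alerting and paging tooling.

```python
# Minimal sketch of an endpoint health check that could feed an alerting pipeline.
# The endpoint list and the print-based alert are illustrative assumptions.
import urllib.request

ENDPOINTS = {
    "api": "https://example.internal/healthz",
    "web": "https://example.internal/status",
}

def check(name: str, url: str, timeout: float = 5.0) -> bool:
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            healthy = resp.status == 200
    except OSError:  # covers URLError, connection failures, and timeouts
        healthy = False
    if not healthy:
        # In a real setup this would page on-call or post to an alerting channel.
        print(f"ALERT: {name} at {url} is unhealthy")
    return healthy

if __name__ == "__main__":
    results = {name: check(name, url) for name, url in ENDPOINTS.items()}
    print(results)
```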
Posted 1 week ago
2.0 - 10.0 years
0 Lacs
karnataka
On-site
As an MLOps Engineer at our Bangalore location, you will play a pivotal role in designing, developing, and maintaining robust MLOps pipelines for generative AI models on AWS. With a Bachelor's or Master's degree in Computer Science, Data Science, or a related field, you should have at least 2 years of proven experience building and managing MLOps pipelines, preferably in a cloud environment like AWS.

Your responsibilities will include implementing CI/CD pipelines to automate model training, testing, and deployment workflows. You should have a strong grasp of containerization technologies such as Docker, container orchestration platforms, and AWS services including SageMaker, Bedrock, EC2, S3, Lambda, and CloudWatch. Practical knowledge of CI/CD principles and tools, along with experience working with large language models, will be essential for success in this role.

Additionally, your role will involve driving technical discussions, explaining options to both technical and non-technical audiences, and ensuring software product cost monitoring and optimization. Proficiency in Python and deep learning frameworks like PyTorch or TensorFlow, familiarity with generative AI models, and experience with infrastructure-as-code tools like Terraform or CloudFormation will be advantageous. Knowledge of model monitoring and explainability techniques, various data storage and processing technologies, and experience with other cloud platforms like GCP will further enhance your capabilities. Any contributions to open-source projects related to MLOps or machine learning will be a valuable asset.

At CGI, we believe in ownership, teamwork, respect, and belonging. Your work as an MLOps Engineer will focus on turning meaningful insights into action, with opportunities to develop innovative solutions, build relationships with teammates and clients, and access global capabilities to scale your ideas. Join us in shaping your career at one of the largest IT and business consulting services firms globally.
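As a hedged sketch of tying SageMaker and CloudWatch together, the snippet below invokes a deployed endpoint via boto3 and records its latency as a custom metric. The endpoint name and metric namespace are assumptions for illustration only, not details from the posting.

```python
# Minimal sketch: invoking a deployed SageMaker endpoint and recording a latency
# metric in CloudWatch. The endpoint name and namespace are illustrative assumptions.
import json
import time

import boto3

runtime = boto3.client("sagemaker-runtime")
cloudwatch = boto3.client("cloudwatch")

def invoke_and_monitor(payload: dict, endpoint_name: str = "genai-demo-endpoint") -> dict:
    start = time.time()
    response = runtime.invoke_endpoint(
        EndpointName=endpoint_name,
        ContentType="application/json",
        Body=json.dumps(payload),
    )
    latency_ms = (time.time() - start) * 1000
    # Emit a custom latency metric so dashboards and alarms can track the endpoint.
    cloudwatch.put_metric_data(
        Namespace="Custom/GenAIEndpoint",
        MetricData=[{"MetricName": "InvocationLatency", "Value": latency_ms, "Unit": "Milliseconds"}],
    )
    return json.loads(response["Body"].read())
```

A CloudWatch alarm on the custom InvocationLatency metric is one simple way to surface regressions after a new model version is deployed.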
Posted 1 week ago