Get alerts for new jobs matching your selected skills, preferred locations, and experience range.
6.0 - 10.0 years
0 Lacs
Chennai
On-site
6-10 years of hands-on experience
- In-depth knowledge and application of Java, J2EE, JDBC, Spring Framework, Struts Framework, EJB, and JavaScript
- Proficient understanding of web markup and protocols, including WebSocket, HTML5, and CSS3
- Experience developing AJAX interfaces with AJAX libraries and frameworks (e.g. jQuery, AngularJS)
- Understanding of web services technologies such as Spring Boot microservices, REST, SOAP, HTTP, and JSON
- Thorough understanding of fundamental Java concepts: exception handling, static blocks/variables/classes, OOP, collections, multithreading, and JDBC
- Exposure to an industry-standard database (Oracle or Sybase) on a UNIX platform, with awareness of database design, SQL scripting, and performance tuning
- Awareness of application servers/web servers (WebLogic, JBoss, and iPlanet)
- Work experience with testing frameworks such as JUnit and TestNG
- Work experience on transactional applications with low-latency, high-availability requirements is a plus
- Ability to work in a fast-paced, agile development environment and to learn new frameworks/stacks
- Swing/.NET/C# knowledge is a plus
- Linux commands, shell scripting, and design patterns
- Proactive in taking on new enhancements and able to guide junior developers
- Good knowledge and hands-on experience in developing and troubleshooting Java-based applications
- Building core interface components for enterprise applications
- Experience in analysing application flows and modules as per requirements
- Applying appropriate design patterns based on design discussions during development phases
- Able to deliver bug-free, quality code into INTG/SIT environments
- Able to write JUnit test cases covering 100% of the functionality introduced
- Able to write integration tests as required for testing modules independently
- Experienced in application debugging, and in analysing and fixing issues quickly

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Chennai
On-site
Senior Application Java Developer with strong technical knowledge and 6 to 10 years of experience in designing, developing, and supporting web-based applications using Java technologies. The candidate must have strong experience in Java, J2EE, JavaScript, APIs, microservices, building APIs, and SQL, along with excellent verbal and written communication skills, and should have good experience in developing APIs that deliver the expected output structure with high performance.
- Experienced in implementing APIs based on enterprise-level architecture frameworks and guidelines
- Writing well-designed, testable, efficient backend and middleware code, and building APIs using Java (e.g. Hibernate, Spring)
- Strong experience in designing and developing high-volume, low-latency REST APIs, especially backed by relational databases such as SQL Server
- Able to build an API from scratch on a traditional database and provide JSON output in the requested structure
- Develop technical designs for application development/web APIs
- Conduct software analysis, programming, testing, and debugging
- Design and implement relational schemas in Microsoft SQL Server and Oracle
- Debug application/system errors on development, QA, and production systems
Posted 1 week ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
P1-C3-STS JD
Experience: 12+ years overall IT, 5+ in architecture, 3+ in AI/ML

Overview
AI Solutions Architect role to drive AI innovation at a major aluminum manufacturing client, focused on helping shape strategy, evaluating GenAI and ML tools, and designing solutions that bring real business value. This role requires strong technical expertise, business acumen, and the ability to work with both senior executives and technical teams.

Key Responsibilities
- Architect AI/ML and GenAI solutions for manufacturing and supply chain use cases such as predictive maintenance, quality assurance, production optimization, and worker safety.
- Evaluate and recommend LLMs (open-source and commercial), tools for RAG (Retrieval-Augmented Generation), agentic AI, vector databases, and orchestration frameworks such as LangChain or Semantic Kernel.
- Stay current with trends such as fine-tuning, prompt engineering, model compression, low-rank adaptation (LoRA), and synthetic data generation.
- Understand security, privacy, and compliance concerns related to AI, including data protection laws (e.g. GDPR), model misuse, and global regulatory trends such as the EU AI Act.
- Lead assessments, POCs, and design reviews: ask the right questions, challenge assumptions, and drive actionable decisions.
- Align AI solutions with enterprise architecture principles: security, scalability, integration, and governance.
- Collaborate with data, application, and business teams to ensure robust pipelines, MLOps, and lifecycle management for deployed models.
- Understand and address manufacturing-specific concerns such as downtime, latency, uptime, process precision, and compliance.

Qualifications
- 12+ years of overall experience in IT
- 5+ years in solution architecture roles, with experience across both business-facing and technical domains
- 3+ years of hands-on experience with AI/ML, including GenAI use cases
- Deep familiarity with open-source LLMs (e.g. LLaMA), RAG architecture, and agent frameworks
- Experience with Azure OpenAI, Azure Machine Learning, and related tools in the Microsoft ecosystem
- Exposure to smart manufacturing or industrial AI use cases (e.g. computer vision in QA, anomaly detection, supply chain AI)
- Strong communication skills, able to work with senior management, business, product, and engineering teams
- Some background in enterprise architecture (e.g. TOGAF, the Zachman Framework) is a plus
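The retrieval step at the heart of RAG, mentioned above, can be illustrated with a minimal sketch. This is a toy: a bag-of-words cosine similarity stands in for a real embedding model and vector database, and the corpus, query, and function names are invented for illustration.

```python
import math
import re
from collections import Counter

def embed(text):
    """Toy bag-of-words 'embedding' (a stand-in for a real embedding model)."""
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, corpus, k=2):
    """Rank documents by similarity to the query; a vector database does this at scale."""
    q = embed(query)
    return sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query, corpus):
    """The 'augmented' step: ground the LLM prompt in retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

corpus = [
    "Furnace 7 requires maintenance every 90 days to avoid unplanned downtime.",
    "The rolling mill quality check uses computer vision to detect surface defects.",
    "Payroll is processed on the last business day of each month.",
]
prompt = build_prompt("When does furnace 7 need maintenance?", corpus)
```

Frameworks like LangChain or Semantic Kernel wrap exactly this pattern (embed, retrieve, assemble prompt, call the model) with production-grade components.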
Posted 1 week ago
0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Role and Responsibilities
- Design, develop, and maintain Java-based applications: create high-volume, low-latency applications for mission-critical systems, ensuring high availability and performance.
- Collaborate with cross-functional teams: work with other professionals such as software engineers and web developers to define, design, and ship new features.
- Write well-designed, testable, efficient code: ensure the best possible performance, quality, and responsiveness of applications.
- Analyze user requirements: define business objectives and envision system features and functionality.
- Troubleshoot and resolve technical issues: identify bottlenecks and bugs, and devise solutions to these problems.
- Develop documentation: create detailed design documentation and user guides.

Skills Required
- Proficiency in Java and related frameworks: strong understanding of Java, Java EE, Spring Boot, and associated technologies.
- Experience with the software development lifecycle (SDLC): knowledge of all phases from concept and design to testing and deployment.
- Problem-solving skills: ability to identify and resolve technical issues efficiently.
- Communication skills: ability to communicate effectively with team members and end-users to determine their needs.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Overview:
As a Senior Software Engineer, you will play a key role in designing, developing, and maintaining our software products. You will work closely with cross-functional teams to deliver high-quality solutions that meet customer needs and support our mission. Your expertise in software development and problem-solving will be essential in driving technological innovation and ensuring the success of our products.

Key Responsibilities:
- Software Development: Design, develop, test, and maintain high-quality software solutions using modern programming languages and frameworks.
- System Architecture: Contribute to the design and architecture of scalable, reliable, and secure software systems.
- Code Quality: Ensure code quality through code reviews, automated testing, and adherence to best practices and coding standards.
- Collaboration: Work closely with product management, design, and other engineering teams to understand requirements and deliver solutions that meet customer needs.
- Mentorship: Mentor and guide junior software engineers, fostering a culture of continuous learning and improvement.
- Continuous Improvement: Participate in continuous improvement processes, identifying opportunities to enhance software performance, scalability, and maintainability.
- Documentation: Create and maintain technical documentation, including design documents, API documentation, and user guides.
- Problem-Solving: Troubleshoot and resolve complex technical issues, providing timely and effective solutions.

Experience/Skills:
- Core Backend Technologies: Expertise in languages and frameworks such as Python, Node.js, Java, Spring, Go, Django, Flask, Iris, and Apache Flink.
- Complex System Development: Proven track record of developing and managing complex backend modules such as job managers, schedulers, and other distributed systems components.
- API Development: Deep experience in building scalable, low-latency RESTful APIs.
- Database Expertise: Strong knowledge of relational and NoSQL databases, including PostgreSQL, InfluxDB, and MongoDB, with skills in design and optimization.
- Solid Computer Science Fundamentals: Mastery of data structures, algorithms, and OOP concepts.
- Cloud and DevOps Proficiency: Extensive experience with cloud technologies (AWS, Azure, GCP), and proficiency with Docker, CI/CD pipelines, and cloud-based architecture.
- Testing and Quality Assurance: Skilled in writing comprehensive unit tests and ensuring code quality and reliability.
- Advanced Technology Knowledge: Familiarity with IoT, Big Data, and Machine Learning is a plus. Experience with message brokers (RabbitMQ) and task queues (Celery), and an understanding of frontend technologies, is advantageous.
- Operating System Knowledge: Comfortable working across Windows, UNIX, and macOS.
- Communication Skills: Excellent communication abilities, effective in team collaboration and in explaining complex technical concepts to non-technical stakeholders.

Technical Skills:
- Proficiency in programming languages such as Python, Java, JavaScript, or Go.
- Experience with backend technologies such as Node.js, Django, Spring Boot, or Flask.
- Strong understanding of database technologies, including PostgreSQL, MySQL, and NoSQL databases like MongoDB.
- Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with microservices architecture, containerization (Docker, Kubernetes), and CI/CD pipelines.
- Familiarity with frontend technologies and frameworks such as React, Angular, or Vue.js is a plus.
- Problem-Solving Skills: Strong analytical and problem-solving skills, with a focus on delivering high-quality solutions.
- Collaboration: Excellent verbal and written communication skills, with the ability to collaborate effectively with cross-functional teams.
- Agile Methodologies: Experience working in Agile/Scrum development environments.

Ideal Candidate:
- Innovative Thinker: Passionate about technology and innovation, with a track record of driving technological advancements.
- Detail-Oriented: Pays close attention to detail and ensures high-quality deliverables.
- Team Player: Works effectively with cross-functional teams and fosters a collaborative environment.
- Customer Focused: Committed to understanding and meeting the needs of customers.

Qualifications:
- Educational Background: Bachelor's degree in Computer Science, Engineering, or a related field. An advanced degree is a plus.
- Experience: 5+ years of experience in software development, with a strong focus on backend development.

Let's connect on LinkedIn: www.linkedin.com/in/aneeshkjain
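The "job managers and schedulers" named under Complex System Development can be sketched minimally as a priority queue that drains submitted jobs in priority order. This uses only the Python standard library; the class and job names are illustrative, not a reference to any specific product.

```python
import heapq
import itertools

class JobScheduler:
    """Minimal priority-based job scheduler (an illustrative sketch only).

    Jobs with a lower priority number run first; ties run in submission
    order, guaranteed by a monotonically increasing sequence number that
    also keeps the heap from ever comparing the callables themselves.
    """

    def __init__(self):
        self._queue = []               # heap of (priority, seq, name, fn)
        self._seq = itertools.count()  # tie-breaker for equal priorities

    def submit(self, name, fn, priority=10):
        heapq.heappush(self._queue, (priority, next(self._seq), name, fn))

    def run_all(self):
        """Drain the queue, returning results keyed by job name in run order."""
        results = {}
        while self._queue:
            _, _, name, fn = heapq.heappop(self._queue)
            results[name] = fn()
        return results

sched = JobScheduler()
sched.submit("cleanup", lambda: "cleaned", priority=20)
sched.submit("ingest", lambda: "ingested", priority=1)
sched.submit("report", lambda: "reported", priority=5)
order = list(sched.run_all())  # job names in execution order
# order == ["ingest", "report", "cleanup"]
```

A production job manager adds the concerns the posting implies: persistence, retries, distributed workers, and observability, but the core ordering logic is this heap.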
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Role Overview
We are seeking a highly skilled and motivated Senior Data Scientist with deep expertise in Generative AI, Machine Learning, Deep Learning, and advanced Data Analytics. The ideal candidate will have hands-on experience in building, deploying, and maintaining end-to-end ML solutions at scale, preferably within the Telecom domain. You will be part of our AI & Data Science team, working on high-impact projects ranging from customer analytics, network intelligence, and churn prediction to generative AI applications in telco automation and customer experience.

Key Responsibilities
- Design, develop, and deploy advanced machine learning and deep learning models for telco use cases such as:
  - Network optimization
  - Customer churn prediction
  - Usage pattern modeling
  - Fraud detection
  - GenAI applications (e.g. personalized recommendations, customer service automation)
- Lead the design and implementation of Generative AI solutions (LLMs, transformers, text-to-text/image models) using tools such as OpenAI, Hugging Face, and LangChain.
- Collaborate with cross-functional teams including network, marketing, IT, and business to define AI-driven solutions.
- Perform exploratory data analysis, feature engineering, model selection, and evaluation using real-world telecom datasets (structured and unstructured).
- Drive end-to-end ML solution deployment into production (CI/CD pipelines, model monitoring, scalability).
- Optimize model performance and latency in production, especially for real-time and edge applications.
- Evaluate and integrate new tools, platforms, and AI frameworks to advance Vi’s data science capabilities.
- Provide technical mentorship to junior data scientists and data engineers.

Required Qualifications & Skills
- 8+ years of industry experience in Machine Learning, Deep Learning, and Advanced Analytics.
- Strong hands-on experience with GenAI models and frameworks (e.g. GPT, BERT, Llama, LangChain, RAG pipelines).
- Proficiency in Python and libraries such as scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.
- Experience in end-to-end model lifecycle management, from data preprocessing to production deployment (MLOps).
- Familiarity with cloud platforms such as AWS, GCP, or Azure, and ML deployment tools (Docker, Kubernetes, MLflow, FastAPI, etc.).
- Strong understanding of SQL, big data tools (Spark, Hive), and data pipelines.
- Excellent problem-solving skills with a strong analytical mindset and business acumen.
- Prior experience working on telecom datasets or use cases is a strong plus.

Preferred Skills
- Experience with vector databases, embeddings, and retrieval-augmented generation (RAG) pipelines.
- Exposure to real-time ML inference and streaming data platforms (Kafka, Flink).
- Knowledge of network analytics, geo-spatial modeling, or customer behavior modeling in a telco environment.
- Experience mentoring teams or leading small AI/ML projects.
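The churn-prediction use case above reduces, at its simplest, to a binary classifier over customer features. A minimal sketch follows, using pure-Python logistic regression rather than scikit-learn so it stays self-contained; the features and data points are invented for illustration, not drawn from any real telecom dataset.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logreg(X, y, lr=0.5, epochs=500):
    """Fit logistic regression by batch gradient descent (pure Python for clarity)."""
    w = [0.0] * len(X[0])
    b = 0.0
    n = len(X)
    for _ in range(epochs):
        grad_w = [0.0] * len(w)
        grad_b = 0.0
        for xi, yi in zip(X, y):
            # Prediction error drives the gradient for each weight.
            err = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) - yi
            for j, xj in enumerate(xi):
                grad_w[j] += err * xj
            grad_b += err
        w = [wj - lr * g / n for wj, g in zip(w, grad_w)]
        b -= lr * grad_b / n
    return w, b

def predict(w, b, x):
    """Churn probability for one customer's feature vector."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x)) + b)

# Toy data: features are (dropped_calls_per_week, months_on_contract / 24).
X = [(0.1, 1.0), (0.2, 0.9), (0.9, 0.1), (0.8, 0.2), (0.3, 0.8), (0.7, 0.3)]
y = [0, 0, 1, 1, 0, 1]  # 1 = customer churned
w, b = train_logreg(X, y)
p_loyal = predict(w, b, (0.1, 0.95))  # long tenure, few dropped calls
p_risky = predict(w, b, (0.9, 0.05))  # short tenure, many dropped calls
```

In practice the same shape (features in, churn probability out) is built with scikit-learn or a deep model, wrapped in the MLOps lifecycle the posting describes: preprocessing, training, deployment, and monitoring.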
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview
We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights.
- Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy.
- Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency.
- Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments.
- Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise.
- Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms.
- Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement.
- Support Data & Analytics technology transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate.

Responsibilities
- Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting.
- Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring.
- Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics.
- Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues.
- Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models.
- Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances.
- Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow.
- Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation.
- Implement basic monitoring and explainability for ML models using the Azure Responsible AI Dashboard and InterpretML.
- Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals.
- Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making.
- Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency.
- Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience.
- Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI toolkits.
- Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies.
- Assist in developing risk management strategies for AI-driven operational automation in Azure environments.
- Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed.
- Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives.
- Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity.
- Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals.
- Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders.

Qualifications
- 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry.
- 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms.
- 5+ years of experience working within cross-functional IT or data operations teams.
- 2+ years of experience in a leadership or team coordination role within an operational or support environment.
- Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP.
- Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence.
- Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution.
- Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction.
- Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment.
- Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability.
- Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies.
- Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration.
- Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks.
- Strong understanding of data acquisition, data catalogs, data standards, and data management tools.
- Knowledge of master data management concepts, data governance, and analytics.
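The drift detection mentioned above alongside MLflow and Azure ML Pipelines is commonly measured with the Population Stability Index (PSI), which compares a feature's live distribution against its training-time baseline. The sketch below is pure Python and independent of any Azure service; the thresholds in the docstring are a common rule of thumb, not a formal standard, and the sample data is invented.

```python
import math

def psi(expected, actual, buckets=5):
    """Population Stability Index between a baseline and a live feature sample.

    Common rule of thumb (an assumption here, not a universal standard):
    PSI < 0.1 is stable, 0.1-0.25 is moderate drift, > 0.25 is major drift.
    """
    lo, hi = min(expected), max(expected)
    step = (hi - lo) / buckets
    edges = [lo + i * step for i in range(1, buckets)]

    def fractions(sample):
        counts = [0] * buckets
        for v in sample:
            counts[sum(v > e for e in edges)] += 1  # bucket index for v
        # Floor each fraction slightly above zero so the log is defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = fractions(expected), fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

baseline = [i / 100 for i in range(100)]        # training-time feature distribution
stable   = [i / 100 for i in range(100)]        # live sample that matches it
shifted  = [0.8 + i / 500 for i in range(100)]  # live sample piled up near the top

drift_ok  = psi(baseline, stable)   # ~0: no drift
drift_bad = psi(baseline, shifted)  # large: alert and retrain
```

In a pipeline, a scheduled job would compute PSI per feature and raise an alert (or trigger auto-remediation) when a threshold is crossed, which is the hook into the self-healing mechanisms described above.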
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights. Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency. Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments. Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise. Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms. Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement. Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate. Responsibilities Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting. 
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring. Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics. Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues. Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models. Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances. Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow. Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation. Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML. Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals. Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making. Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency. Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience. Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits. Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies. Assist in developing risk management strategies for AI-driven operational automation in Azure environments. 
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Overview We are seeking a skilled Associate Manager - AIOps & MLOps Operations to support and enhance the automation, scalability, and reliability of AI/ML operations across the enterprise. This role requires a solid understanding of AI-driven observability, machine learning pipeline automation, cloud-based AI/ML platforms, and operational excellence. The ideal candidate will assist in deploying AI/ML models, ensuring continuous monitoring, and implementing self-healing automation to improve system performance, minimize downtime, and enhance decision-making with real-time AI-driven insights. Support and maintain AIOps and MLOps programs, ensuring alignment with business objectives, data governance standards, and enterprise data strategy. Assist in implementing real-time data observability, monitoring, and automation frameworks to enhance data reliability, quality, and operational efficiency. Contribute to developing governance models and execution roadmaps to drive efficiency across data platforms, including Azure, AWS, GCP, and on-prem environments. Ensure seamless integration of CI/CD pipelines, data pipeline automation, and self-healing capabilities across the enterprise. Collaborate with cross-functional teams to support the development and enhancement of next-generation Data & Analytics (D&A) platforms. Assist in managing the people, processes, and technology involved in sustaining Data & Analytics platforms, driving operational excellence and continuous improvement. Support Data & Analytics Technology Transformations by ensuring proactive issue identification and the automation of self-healing capabilities across the PepsiCo Data Estate. Responsibilities Support the implementation of AIOps strategies for automating IT operations using Azure Monitor, Azure Log Analytics, and AI-driven alerting. 
Assist in deploying Azure-based observability solutions (Azure Monitor, Application Insights, Azure Synapse for log analytics, and Azure Data Explorer) to enhance real-time system performance monitoring. Enable AI-driven anomaly detection and root cause analysis (RCA) by collaborating with data science teams using Azure Machine Learning (Azure ML) and AI-powered log analytics. Contribute to developing self-healing and auto-remediation mechanisms using Azure Logic Apps, Azure Functions, and Power Automate to proactively resolve system issues. Support ML lifecycle automation using Azure ML, Azure DevOps, and Azure Pipelines for CI/CD of ML models. Assist in deploying scalable ML models with Azure Kubernetes Service (AKS), Azure Machine Learning Compute, and Azure Container Instances. Automate feature engineering, model versioning, and drift detection using Azure ML Pipelines and MLflow. Optimize ML workflows with Azure Data Factory, Azure Databricks, and Azure Synapse Analytics for data preparation and ETL/ELT automation. Implement basic monitoring and explainability for ML models using Azure Responsible AI Dashboard and InterpretML. Collaborate with Data Science, DevOps, CloudOps, and SRE teams to align AIOps/MLOps strategies with enterprise IT goals. Work closely with business stakeholders and IT leadership to implement AI-driven insights and automation to enhance operational decision-making. Track and report AI/ML operational KPIs, such as model accuracy, latency, and infrastructure efficiency. Assist in coordinating with cross-functional teams to maintain system performance and ensure operational resilience. Support the implementation of AI ethics, bias mitigation, and responsible AI practices using Azure Responsible AI Toolkits. Ensure adherence to Azure Information Protection (AIP), Role-Based Access Control (RBAC), and data security policies. Assist in developing risk management strategies for AI-driven operational automation in Azure environments. 
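Self-healing pipelines like those described above typically begin with statistical anomaly detection on operational metrics. As a simplified, stdlib-only sketch (the posting's actual stack uses Azure ML and AI-powered log analytics; the function name, window size, and threshold here are illustrative assumptions), a rolling z-score detector over a latency series might look like:

```python
from statistics import mean, stdev

def detect_anomalies(values, window=5, threshold=3.0):
    """Flag points whose z-score against the trailing window exceeds threshold."""
    anomalies = []
    for i in range(window, len(values)):
        trailing = values[i - window:i]
        mu, sigma = mean(trailing), stdev(trailing)
        # Only flag when the trailing window has spread; a constant series
        # (sigma == 0) produces no anomalies.
        if sigma > 0 and abs(values[i] - mu) / sigma > threshold:
            anomalies.append(i)
    return anomalies

# Simulated per-minute response latencies (ms): the spike at index 7 is flagged.
latencies = [100, 102, 98, 101, 99, 100, 103, 480, 101, 99]
print(detect_anomalies(latencies))  # [7]
```

In a production AIOps setup, the flagged indices would feed an auto-remediation hook (e.g., an Azure Function or Logic App) rather than a print statement.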
Prepare and present program updates, risk assessments, and AIOps/MLOps maturity progress to stakeholders as needed. Support efforts to attract and build a diverse, high-performing team to meet current and future business objectives. Help remove barriers to agility and enable the team to adapt quickly to shifting priorities without losing productivity. Contribute to developing the appropriate organizational structure, resource plans, and culture to support business goals. Leverage technical and operational expertise in cloud and high-performance computing to understand business requirements and earn trust with stakeholders. Qualifications 5+ years of technology work experience in a global organization, preferably in CPG or a similar industry. 5+ years of experience in the Data & Analytics field, with exposure to AI/ML operations and cloud-based platforms. 5+ years of experience working within cross-functional IT or data operations teams. 2+ years of experience in a leadership or team coordination role within an operational or support environment. Experience in AI/ML pipeline operations, observability, and automation across platforms such as Azure, AWS, and GCP. Excellent Communication: Ability to convey technical concepts to diverse audiences and empathize with stakeholders while maintaining confidence. Customer-Centric Approach: Strong focus on delivering the right customer experience by advocating for customer needs and ensuring issue resolution. Problem Ownership & Accountability: Proactive mindset to take ownership, drive outcomes, and ensure customer satisfaction. Growth Mindset: Willingness and ability to adapt and learn new technologies and methodologies in a fast-paced, evolving environment. Operational Excellence: Experience in managing and improving large-scale operational services with a focus on scalability and reliability. Site Reliability & Automation: Understanding of SRE principles, automated remediation, and operational efficiencies. 
Cross-Functional Collaboration: Ability to build strong relationships with internal and external stakeholders through trust and collaboration. Familiarity with CI/CD processes, data pipeline management, and self-healing automation frameworks. Strong understanding of data acquisition, data catalogs, data standards, and data management tools. Knowledge of master data management concepts, data governance, and analytics.
Posted 1 week ago
12.0 years
0 Lacs
India
Remote
What You Can Expect Zoom is looking for an enthusiastic and experienced Senior Java Software Engineer with an acumen for the operational side of the business as well. The DTO Cloud Native Engineering Team builds channel self-service platforms (portal and open APIs) for distributors and resellers. This position will be dynamic and challenging. The ideal candidate will share our passion for building great software in an agile, nimble, customer-focused enterprise application organization. About The Team This engineering position plays a pivotal role in architecting, designing, building, and supporting the full-stack cloud-native solutions that address the channel business enablement targets. This includes the self-service experience supporting quoting and ordering for Zoom’s partner ecosystem. These range from software development and machine learning to quality assurance teams that work to create and maintain Zoom's user-friendly interfaces and robust infrastructure. If you are excited about the potential of leading Zoom’s continued evolution into a customer-obsessed enterprise application organization, then this role is for you! What We’re Looking For Senior technical lead experience: delivering design and code, and leading projects and development efforts. 12+ years of expertise in designing and developing high-quality, scalable, and secure server-side functionality using Java, Spring MVC, and Spring Boot with SQL and NoSQL database technologies, hosted in AWS. 10 years of experience with relational and NoSQL databases such as MySQL, MongoDB, and AWS DynamoDB. 6 years of experience with AWS or other cloud platforms. Experience building RESTful APIs and microservices, with an understanding of the nuances of exposing APIs for public consumption and of integrating them with frontend applications. Experience building low-latency APIs.
Responsibilities Acting as a technical lead with the ability to drive end-to-end technical delivery of cloud-native software projects. Architecting and implementing efficient, modular, and reusable software components and systems. Developing high-quality, scalable, and secure functionality using Java, Spring MVC, and Spring Boot with SQL and NoSQL. Building customer enterprise applications. Identifying, implementing, and managing code libraries that minimize repetitive code and improve application design. Using code optimisation techniques to improve the robustness and performance of software solutions. Building APIs that will be integrated with partner systems. Troubleshooting customer issues and providing technical support to resolve issues in production environments. #India #Remote Ways of Working Our structured hybrid approach is centered around our offices and remote work environments. The work style of each role, Hybrid, Remote, or In-Person, is indicated in the job description/posting. Benefits As part of our award-winning workplace culture and commitment to delivering happiness, our benefits program offers a variety of perks, benefits, and options to help employees maintain their physical, mental, emotional, and financial health; support work-life balance; and contribute to their community in meaningful ways. About Us Zoomies help people stay connected so they can get more done together. We set out to build the best collaboration platform for the enterprise, and today help people communicate better with products like Zoom Contact Center, Zoom Phone, Zoom Events, Zoom Apps, Zoom Rooms, and Zoom Webinars. We’re problem-solvers, working at a fast pace to design solutions with our customers and users in mind. Find room to grow with opportunities to stretch your skills and advance your career in a collaborative, growth-focused environment. Our Commitment At Zoom, we believe great work happens when people feel supported and empowered.
We’re committed to fair hiring practices that ensure every candidate is evaluated based on skills, experience, and potential. If you require an accommodation during the hiring process, let us know—we’re here to support you at every step. If you need assistance navigating the interview process due to a medical disability, please submit an Accommodations Request Form and someone from our team will reach out soon. This form is solely for applicants who require an accommodation due to a qualifying medical disability. Non-accommodation-related requests, such as application follow-ups or technical issues, will not be addressed.
Posted 1 week ago
5.0 years
0 Lacs
New Delhi, Delhi, India
On-site
About Us Astra Capital is a pioneering quantitative trading firm that transforms data and sophisticated algorithms into market-beating strategies. Our team embraces the rapid pace of today’s financial landscape, constantly refining our systems for maximum speed, reliability, and edge. We’re looking for a talented Software Developer proficient in C++/Rust to architect and optimize the ultra–low-latency infrastructure that powers our trading operations. Position Overview As a Senior Software Developer, you will play a pivotal role in designing, developing, and optimizing trading systems and infrastructure. You will work directly with quantitative researchers, traders, and other engineers to build and maintain technology solutions that drive our trading strategies. This is a unique opportunity to work at the intersection of technology and finance. Key Responsibilities System Design & Implementation: Create and enhance high-frequency trading engines in C++/Rust that span multiple venues. Performance Tuning: Conduct algorithmic research, profiling, and benchmarking to drive continuous latency improvements. Tooling & Observability: Build dashboards, alerting, and administrative interfaces for real-time system visibility. Full-Lifecycle Ownership: Lead requirements gathering, coding, testing, deployment, and maintenance of core modules. Cross-Functional Collaboration: Work closely with quants, traders, and fellow engineers to align technology with strategy. Innovation & Growth: Stay current on language features, libraries, and best practices in systems programming and fintech. Qualifications Must Have: Academic Background: BS/MS in Computer Science, Engineering, or related discipline Professional Experience: 3–5 years in C++/Rust development, object-oriented design. 
Systems Knowledge: Data structures & algorithms, distributed/low-latency systems, multi-threading, parallel computing Coding Excellence: Clean, efficient, maintainable code with comprehensive documentation Operational Skills: Linux proficiency, networking fundamentals, production debugging Soft Skills: Analytical mindset, meticulous attention to detail, strong teamwork and communication Good To Have: Financial technology or HFT industry background Insight into market microstructure, trading protocols, TCP/IP, and other network technologies Python for scripting; experience with other languages (Go, Java, R, Bash) Experience with big data technologies such as Hadoop/Spark. Why Join Astra Capital? Exceptional opportunity to work with a dedicated, talented, and driven team. Opportunity to be a foundational member of a growing quantitative fund. Attractive base salary and benefits package, including performance-based bonuses. Outstanding company culture: relaxed, highly professional, innovative, and entrepreneurial. Leverage the growth of the Indian market to generate exponential value for yourself. Astra Capital is where serious tech meets epic fun. Help us build lightning-fast trading systems, enjoy competitive pay and killer bonuses, and high-five your team as we conquer India’s hot markets together. Ready to make waves?
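The performance-tuning work described above usually revolves around tail-latency percentiles (p99, p99.9) rather than averages, since a single slow path can cost a trade. A minimal nearest-rank percentile sketch (Python here for brevity, though the production hot path at a firm like this would be C++/Rust; the sample values are invented):

```python
def percentile(samples, pct):
    """Nearest-rank percentile: the value at rank ceil(pct/100 * n), 1-based."""
    ranked = sorted(samples)
    rank = -(-len(ranked) * pct // 100)  # ceiling division via negated floor
    return ranked[max(rank - 1, 0)]

# Simulated tick-to-trade latencies in microseconds: one outlier dominates p99.
samples = [12, 15, 11, 14, 13, 95, 12, 14, 13, 12]
print(percentile(samples, 50), percentile(samples, 99))  # 13 95
```

Note how p50 (13 µs) hides the 95 µs outlier that p99 exposes; this is why benchmarking for HFT tracks the tail.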
Posted 1 week ago
10.0 years
0 Lacs
Gurugram, Haryana, India
Remote
Key Responsibilities Lead OT architecture design for mining operations, including drilling, hauling, crushing, stockpiling, processing, and transportation. Design hybrid (on-prem + edge + cloud) solutions that handle poor connectivity, latency, and harsh physical environments. Architect integration between OT and enterprise systems (e.g., ERP, fleet management, historian, CMMS). Work closely with business analysts to ensure technical solutions align with operational workflows and business goals. Select and implement appropriate technologies (e.g., MQTT, OPC-UA, edge gateways, cloud connectors). Design OT architectures that enable deployment of AI/ML models at the edge, on-premises, and in the cloud for applications such as predictive maintenance, equipment health monitoring, and operational optimization. Collaborate with data scientists and AI/ML engineers to ensure appropriate data capture, preprocessing, and feature engineering from OT systems (SCADA, PLCs, condition monitoring sensors, etc.). Integrate AI-driven analytics and insights into operator dashboards and IROC visualizations, ensuring actionable intelligence for mining operations. Support model lifecycle considerations in OT environments—edge inferencing, model updates, offline inference, and fail-safe design. Ensure designs are scalable, secure, fault-tolerant, and compliant with industrial security standards (IEC 62443, NIST CSF). Guide the development and deployment of monitoring systems for IROC-based centralized operations. Required Skills And Qualifications 8–10 years of experience in architecting OT solutions for mining operations, particularly in open-pit mining. Deep expertise in OT components: SCADA, PLCs, DCS, telemetry, real-time data acquisition, condition monitoring. Knowledge of industrial communication protocols (OPC UA, Modbus, PROFINET, MQTT). Experience with edge computing platforms (e.g., Azure IoT Edge, AWS IoT Greengrass) and industrial gateways.
Proven ability to design solutions tolerant of intermittent connectivity, high latency, and remote geographies. Working knowledge of AI/ML use cases in industrial and mining operations, such as predictive maintenance, equipment utilization analysis, energy optimization, and safety anomaly detection. Familiarity with deployment of AI models on edge platforms (e.g., NVIDIA Jetson, Azure IoT Edge, AWS Greengrass) and their integration with OT systems. Experience designing OT data pipelines to support real-time and historical data needs for AI analytics. Strong understanding of IT/OT convergence principles and industrial cybersecurity. Minimum 3 years of direct experience working with or for a mining organization, with a solid understanding of mining operational challenges. Education Bachelor’s or Master’s in Computer Science, Control Systems Engineering, or relevant fields. Experience 8–11 Years Skills Primary Skill: AI/ML Development Sub Skill(s): AI/ML Development Additional Skill(s): AI/ML Architecture, TensorFlow, PyTorch About The Company Infogain is a human-centered digital platform and software engineering company based out of Silicon Valley. We engineer business outcomes for Fortune 500 companies and digital natives in the technology, healthcare, insurance, travel, telecom, and retail & CPG industries using technologies such as cloud, microservices, automation, IoT, and artificial intelligence. We accelerate experience-led transformation in the delivery of digital platforms. Infogain is also a Microsoft (NASDAQ: MSFT) Gold Partner and Azure Expert Managed Services Provider (MSP). Infogain, an Apax Funds portfolio company, has offices in California, Washington, Texas, the UK, the UAE, and Singapore, with delivery centers in Seattle, Houston, Austin, Kraków, Noida, Gurgaon, Mumbai, Pune, and Bengaluru.
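Designing for the intermittent connectivity called out above commonly means store-and-forward buffering at the edge gateway: telemetry is queued locally while the uplink is down and flushed oldest-first on reconnect, with the oldest readings dropped if the buffer fills. A minimal sketch (the class, field names, and capacity are illustrative, not from any named edge product):

```python
from collections import deque

class StoreAndForward:
    """Buffer telemetry while the uplink is down; flush oldest-first on reconnect."""
    def __init__(self, capacity=1000):
        self.buffer = deque(maxlen=capacity)  # maxlen silently drops oldest when full
        self.sent = []                        # stand-in for the cloud publish call

    def publish(self, reading, link_up):
        self.buffer.append(reading)
        if link_up:
            while self.buffer:
                self.sent.append(self.buffer.popleft())

gw = StoreAndForward(capacity=3)
gw.publish({"truck": 7, "payload_t": 212}, link_up=False)  # buffered
gw.publish({"truck": 7, "payload_t": 215}, link_up=False)  # buffered
gw.publish({"truck": 7, "payload_t": 218}, link_up=True)   # link restored: all flush
print(len(gw.sent))  # 3
```

Real gateways add persistence to disk and QoS semantics (as MQTT does), but the buffering principle is the same.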
Posted 1 week ago
0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description We are searching for world-class technologists to join our team building solutions for the JPMorgan FX e-trading platform. Candidates must be highly motivated with a track record of success. The ideal candidates will have a deep understanding of algorithms, data structures, system design, threading implications, and strategies for optimizing performance-sensitive code. In this role, a candidate will be building the components that underpin the global macro FX eTrading business, working within a team of 20 developers globally and working closely with our trading partners. These components must be highly available, highly scalable, and operate with the lowest possible latency. The systems handle high volumes of real-time data and require careful tuning for performance. Candidates would be accountable for the overall success of the systems, including design, development, deployment, optimization, and day-to-day operation. Job Responsibilities Design, develop, and maintain electronic trading components. Use low-level programming techniques to produce highly optimized, low-latency trading software. Provide second-line support, backing up the first-line operate team. Required Qualifications Advanced professional Java experience required. Relevant markets experience; FX preferred but not essential. Scripting skills; Python will be an advantage. Strong Linux/Unix skills and knowledge of networking topologies, TCP and UDP. Low-latency middleware, for example Informatica Ultra Messaging. Ability to work collaboratively in a global team on long-term technical problems. Bachelor's degree in Computer Science, Engineering, Maths, Stats, Physics, or similar experience. Great attention to detail. Ability to analyse and fix problems quickly. Capable of working independently as well as part of a team. Able to learn quickly and apply new skills effectively. About JPMorgan Chase & Co. J.P. Morgan serves one of the largest client franchises in the world.
Our clients include corporations, institutional investors, hedge funds, governments and affluent individuals in more than 100 countries. J.P. Morgan is part of JPMorgan Chase & Co. (NYSE: JPM), a leading global financial services firm with assets of $2.2 trillion. The firm is a leader in investment banking, financial services for consumers, small business and commercial banking, financial transaction processing, asset management, and private equity. A component of the Dow Jones Industrial Average, JPMorgan Chase serves millions of clients and consumers under its J.P. Morgan and Chase, and WaMu brands. J.P. Morgan offers an exceptional benefits program and a highly competitive compensation package. J.P. Morgan is an Equal Opportunity Employer.
Posted 1 week ago
1.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Location - Gurgaon The Principal Architect will be responsible for leading the architectural design and planning process for a variety of projects, ensuring that designs meet client specifications, regulatory requirements, and sustainability standards. This role requires strong experience with algorithm-heavy, computationally intensive systems. Responsibilities Improve, re-architect, and/or write new algorithms for functional performance. Drive product innovation and the technology roadmap, and provide long-term vision to module teams. Author system-level, high-level design documents for cross-functional microservices. Work on simulations for breakthrough functional and technical performance. Innovate and identify patentable solutions to product/technical requirements. Incorporate proper certification/compliance requirements into the product designs. Be a focal point of technology for product, engineering, and other teams critical to the product. Participate in strategic planning for the product vision and roadmap. Be involved and pivotal in the company's transformation to a complete SaaS/PaaS product. Lead PoCs for new technologies to continuously improve the technical platform and developer experience. Must Have Experience in algorithm-heavy and computationally intensive systems is a must. Proficiency in Java / C++. Strong knowledge of distributed systems. Skilled in low-latency queuing systems. Experience with major architecture patterns. Well-versed in performance & scalability. Can write clean design documents. Qualifications Education: Bachelor’s or Master’s degree in Computer Science, Software Engineering, or a related field from a premier institute. Specializations or certifications in Computer Science are an added bonus. Over 12 years of experience in the software industry, preferably 1+ years as a senior architect. Technical Skills: Proficiency in one or more programming languages such as Java, C++, Python, C#. Experience with frameworks and libraries relevant to the technology stack.
Problem-Solving: Strong analytical and troubleshooting skills. Ability to diagnose and resolve complex problems. Good to Have Proficiency in Erlang / Elixir / Scala. Strong mathematical background. Exposure to analytics/machine learning. Ability to lead and mentor people. Background in microservices architecture. Skilled at thorough REST API design.
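The low-latency queuing systems mentioned in the requirements above are typically built on fixed-size, preallocated ring buffers rather than linked queues, avoiding per-message allocation and keeping memory access predictable (the idea behind designs like the LMAX Disruptor). A toy single-threaded sketch of the data structure (Python for illustration only; a real implementation would live in Java or C++ with lock-free sequencing):

```python
class RingBuffer:
    """Fixed-capacity ring buffer: O(1) push/pop over a preallocated slot array."""
    def __init__(self, capacity):
        self.slots = [None] * capacity
        self.head = 0   # next slot to read
        self.tail = 0   # next slot to write
        self.size = 0

    def push(self, item):
        if self.size == len(self.slots):
            return False  # full: caller decides whether to drop, spin, or backpressure
        self.slots[self.tail] = item
        self.tail = (self.tail + 1) % len(self.slots)
        self.size += 1
        return True

    def pop(self):
        if self.size == 0:
            return None
        item = self.slots[self.head]
        self.head = (self.head + 1) % len(self.slots)
        self.size -= 1
        return item

rb = RingBuffer(2)
assert rb.push("a") and rb.push("b") and not rb.push("c")  # third push rejected
print(rb.pop(), rb.pop())  # a b
```

Returning `False` on a full buffer (instead of growing) is the deliberate design choice: bounded memory and explicit backpressure are what make the structure predictable under load.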
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Sanas is revolutionizing the way we communicate with the world’s first real-time algorithm, designed to modulate accents, eliminate background noises, and magnify speech clarity. Pioneered by seasoned startup founders with a proven track record of creating and steering multiple unicorn companies, our groundbreaking GDP-shifting technology sets a gold standard. Sanas is a 200-strong team, established in 2020. In this short span, we’ve successfully secured over $100 million in funding. Our innovations have been supported by the industry’s leading investors, including Insight Partners, Google Ventures, Quadrille Capital, General Catalyst, Quiet Capital, and other influential investors. Our reputation is further solidified by collaborations with numerous Fortune 100 companies. With Sanas, you’re not just adopting a product; you’re investing in the future of communication. We’re looking for a sharp, hands-on Data Engineer to help us build and scale the data infrastructure that powers cutting-edge audio and speech AI products. You’ll be responsible for designing robust pipelines, managing high-volume audio data, and enabling machine learning teams to access the right data — fast. As one of the first dedicated data engineers on the team, you'll play a foundational role in shaping how we handle data end-to-end, from ingestion to training-ready features. You'll work closely with ML engineers, research scientists, and product teams to ensure data is clean, accessible, and structured for experimentation and production.
Key Responsibilities: Build scalable, fault-tolerant pipelines for ingesting, processing, and transforming large volumes of audio and metadata Design and maintain ETL workflows for training and evaluating ML models, using tools like Airflow or custom pipelines Collaborate with ML research scientists to make raw and derived audio features (e.g., spectrograms, MFCCs) efficiently available for training and inference Manage and organize datasets, including labeling workflows, versioning, annotation pipelines, and compliance with privacy policies Implement data quality, observability, and validation checks across critical data pipelines Help optimize data storage and compute strategies for large-scale training Qualifications: 2–5 years of experience as a Data Engineer, Software Engineer, or similar role with a focus on data infrastructure Proficient in Python, SQL, and working with distributed data processing tools (e.g., Spark, Dask, Beam) Experience with cloud data infrastructure (AWS/GCP), object storage (e.g., S3), and data orchestration tools Familiarity with audio data and its unique challenges (large file sizes, time-series features, metadata handling) is a strong plus Comfortable working in a fast-paced, iterative startup environment where systems are constantly evolving Strong communication skills and a collaborative mindset — you’ll be working cross-functionally with ML, infra, and product teams Nice to Have: Experience with data for speech models like ASR, TTS, or speaker verification Knowledge of real-time data processing (e.g., Kafka, WebSockets, or low-latency APIs) Background in MLOps, feature engineering, or supporting model lifecycle workflows Experience with labeling tools, audio annotation platforms, or human-in-the-loop systems Joining us means contributing to the world’s first real-time speech understanding platform revolutionizing Contact Centers and Enterprises alike.
Our technology empowers agents, transforms customer experiences, and drives measurable growth. But this is just the beginning. You'll be part of a team exploring the vast potential of an increasingly sonic future.
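The audio feature work described in the role above (spectrograms, MFCCs) starts the same way: slicing a waveform into short overlapping frames and computing a per-frame statistic. A simplified stdlib-only sketch computing per-frame RMS energy, as a stand-in for the fuller spectral features a real pipeline would extract (the frame length and hop size are illustrative choices, not from the posting):

```python
import math

def frame_rms(samples, frame_len, hop):
    """Slice a waveform into overlapping frames and compute per-frame RMS energy,
    the framing step that precedes spectrogram or MFCC extraction."""
    feats = []
    for start in range(0, len(samples) - frame_len + 1, hop):
        frame = samples[start:start + frame_len]
        feats.append(math.sqrt(sum(s * s for s in frame) / frame_len))
    return feats

# 1 kHz sine sampled at 8 kHz, 32 samples (~4 ms of audio).
wave = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(32)]
feats = frame_rms(wave, frame_len=16, hop=8)
print(len(feats))  # 3 overlapping frames
```

Each 16-sample frame spans exactly two cycles of the tone, so every frame's RMS is the sine's theoretical 1/√2 ≈ 0.707.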
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka
On-site
BENGALURU, KARNATAKA, INDIA FULL-TIME HARDWARE ENGINEERING 3506 Waymo is an autonomous driving technology company with the mission to be the most trusted driver. Since its start as the Google Self-Driving Car Project in 2009, Waymo has focused on building the Waymo Driver—The World's Most Experienced Driver™—to improve access to mobility while saving thousands of lives now lost to traffic crashes. The Waymo Driver powers Waymo One, a fully autonomous ride-hailing service, and can also be applied to a range of vehicle platforms and product use cases. The Waymo Driver has provided over one million rider-only trips, enabled by its experience autonomously driving tens of millions of miles on public roads and tens of billions of miles in simulation across 13+ U.S. states. Waymo's Compute Team is tasked with a critical and exciting mission: We deliver the compute platform responsible for running the fully autonomous vehicle's software stack. To achieve our mission, we architect and create high-performance custom silicon; we develop system-level compute architectures that push the boundaries of performance, power, and latency; and we collaborate closely with many other teammates to ensure we design and optimize hardware and software for maximum performance. We are a multidisciplinary team seeking curious and talented teammates to work on one of the world's highest-performance automotive compute platforms. In this hybrid role, you will report to an ASIC Design Manager. You will: Manage a new team of engineers developing advanced silicon for our self-driving cars Grow the team by hiring top talent at our new site in Bangalore Provide hands-on technical leadership and contributions to the architecture, design, and verification of IP blocks Work and coordinate cross-functionally with our U.S.
and Taiwan silicon and partner teams Develop methodologies and best practices to ensure on-time, high-performance, and high-quality silicon You have: 6+ years of experience managing ASIC or SoC development teams Strong technical experience with the full digital design and verification cycle, from spec through bring-up 5+ years of industry experience with high-performance digital design in Verilog/SystemVerilog Experience prioritizing resources across multiple projects on tight timelines We prefer: Industry experience with constrained random verification and UVM Fluency in at least one high-level programming language such as Python or C++ Experience with performance and power validation, and formal verification Experience with prototyping systems on FPGA platforms or emulators Experience with automotive silicon and standards The expected base salary range for this full-time position is listed below. Actual starting pay will be based on job-related factors, including exact work location, experience, relevant training and education, and skill level. Waymo employees are also eligible to participate in Waymo’s discretionary annual bonus program, equity incentive plan, and generous Company benefits program, subject to eligibility requirements. Salary Range ₹8,400,000—₹10,200,000 INR
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic, a rapidly growing Forbes Cloud 100 company, is the global leader in enablement, helping organizations engage customers, enable teams, and ignite revenue growth. The Seismic Enablement Cloud™ provides continuous guidance to improve behavior, content, and skills to win more deals and deliver better experiences. More than 2,200 organizations around the globe including IBM and American Express have made Seismic their enablement platform of choice. Seismic integrates with business-critical platforms including Microsoft, Salesforce, Google and Adobe. Seismic is headquartered in San Diego, with offices across North America, Europe, Australia and China. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. Overview Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today’s digital era. Leveraging the latest advancements in technology, including Generative AI, we are committed to driving innovation and transforming the way businesses operate. As we embark on an exciting journey of growth and expansion, we are seeking top engineering talent to join our AI team in Hyderabad, India. AI is one of the fastest growing product areas in Seismic. We believe that AI, particularly Generative AI, will empower and transform how Enterprise sales and marketing organizations operate and interact with customers. Seismic Aura, our leading AI engine, is powering this change in the sales enablement space and is being infused across the Seismic enablement cloud.
Our focus is to leverage AI across the Seismic platform to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes. Who You Are If you are a passionate technologist with a strong track record of building AI products, and you thrive in a fast-paced, innovative environment, we want to hear from you! Opportunity to be a key technical leader in a rapidly growing company and drive innovation in the SaaS industry. Work with cutting-edge technologies and be at the forefront of AI advancements. Competitive compensation package, including salary, bonus, and equity options. A supportive, inclusive work culture. Professional development opportunities and career growth potential in a dynamic and collaborative environment. What You’ll Be Doing Distributed Systems Development: Design, develop, and maintain backend systems and services for AI, information extraction or information retrieval functionality, ensuring high performance, scalability, and reliability. Algorithm Optimization: Implement and optimize AI-driven semantic algorithms, indexing, and information retrieval techniques to enhance search accuracy and efficiency. Integration: Collaborate with data scientists, AI engineers, and product teams to integrate AI-driven capabilities across the Seismic platform. Performance Tuning: Monitor and optimize service performance, addressing bottlenecks and ensuring low-latency query responses. Technical Leadership: Provide technical guidance and mentorship to junior engineers, promoting best practices in backend development. Collaboration: Work closely with cross-functional and geographically distributed teams, including product managers, frontend engineers, and UX designers, to deliver seamless and intuitive experiences. Continuous Improvement: Stay updated with the latest trends and advancements in software and technologies, conducting research and experimentation to drive innovation. 
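The indexing and information-retrieval work described above can be sketched in miniature. The snippet below is an illustrative, self-contained TF-IDF ranker, not Seismic's actual stack (which would use learned embeddings and a real search engine); the documents and query are made up, but the scoring idea is the standard one:

```python
# Minimal document ranking with TF-IDF weights and cosine similarity.
# Purely illustrative: real semantic search would use embeddings and an
# engine such as Elasticsearch, as the posting itself suggests.
import math
from collections import Counter

def tf_idf_vectors(docs):
    """Build a TF-IDF vector (term -> weight dict) for each document."""
    tokenized = [doc.lower().split() for doc in docs]
    n = len(tokenized)
    df = Counter(term for doc in tokenized for term in set(doc))
    vectors = []
    for doc in tokenized:
        tf = Counter(doc)
        vectors.append({t: (count / len(doc)) * math.log(n / df[t])
                        for t, count in tf.items()})
    return vectors

def cosine(a, b):
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank(query, docs):
    """Return document indices ordered by similarity to the query."""
    vectors = tf_idf_vectors(docs + [query])  # query shares the vocabulary
    qvec = vectors[-1]
    scores = [(cosine(qvec, v), i) for i, v in enumerate(vectors[:-1])]
    return [i for score, i in sorted(scores, reverse=True)]

docs = [
    "sales enablement content for enterprise teams",
    "quarterly revenue forecast spreadsheet",
    "enablement training guide for new sales hires",
]
print(rank("sales enablement", docs))  # the two enablement docs rank first
```

Swapping the TF-IDF vectors for model-produced embeddings, while keeping the cosine-similarity ranking, is the usual path from this sketch to semantic search.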
What You Bring To The Team Experience: 7+ years of experience in software engineering and a proven track record of building and scaling microservices and working with data retrieval systems. Technical Expertise: Experience with C# and .NET, unit testing, object-oriented programming, and relational databases. Experience with Infrastructure as Code (Terraform, Pulumi, etc.), event-driven architectures with tools like Kafka, and feature management (LaunchDarkly) is good to have. Front-end/full-stack experience a plus. Cloud Expertise: Experience with cloud platforms like AWS, Google Cloud Platform (GCP), or Microsoft Azure. Knowledge of cloud-native services for AI/ML, data storage, and processing. Experience deploying containerized applications into Kubernetes is a plus. AI: Proficiency in building and deploying Generative AI use cases is a plus. Experience with Natural Language Processing (NLP) and semantic search with platforms like Elasticsearch is a plus. SaaS Knowledge: Extensive experience in SaaS application development and cloud technologies, with a deep understanding of modern distributed systems and cloud operational infrastructure. Product Development: Experience in collaborating with product management and design, with the ability to translate business requirements into technical solutions that drive successful delivery. Proven record of driving feature development from concept to launch. Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field. Fast-paced Environment: Experience working in a fast-paced, dynamic environment, preferably in a SaaS or technology-driven company. What We Have For You At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here.
Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement, backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. Duties, responsibilities and activities may change at any time with or without notice.
Posted 1 week ago
25.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Tower Research Capital is a leading quantitative trading firm founded in 1998. Tower has built its business on a high-performance platform and independent trading teams. We have a 25+ year track record of innovation and a reputation for discovering unique market opportunities. Tower is home to some of the world’s best systematic trading and engineering talent. We empower portfolio managers to build their teams and strategies independently while providing the economies of scale that come from a large, global organization. Engineers thrive at Tower while developing electronic trading infrastructure at a world class level. Our engineers solve challenging problems in the realms of low-latency programming, FPGA technology, hardware acceleration and machine learning. Our ongoing investment in top engineering talent and technology ensures our platform remains unmatched in terms of functionality, scalability and performance. At Tower, every employee plays a role in our success. Our Business Support teams are essential to building and maintaining the platform that powers everything we do — combining market access, data, compute, and research infrastructure with risk management, compliance, and a full suite of business services. Our Business Support teams enable our trading and engineering teams to perform at their best. At Tower, employees will find a stimulating, results-oriented environment where highly intelligent and motivated colleagues inspire each other to reach their greatest potential. 
Responsibilities: Trading and process monitoring, as well as PNL reporting; ad-hoc exception handling and query management Affirmation of trades, minimizing operational risk and maximizing straight-through processing Reconciling positions, trades, cash, NAV, and fees Supporting clearing and settlement processes across multiple asset classes: Equity, Fixed Income, Derivatives, FX, and Commodities Investigating breaks raised by trading desks, other departments, and counterparties Assisting in automation efforts and process improvements Liaising with Prime Brokers and service vendors for troubleshooting and improvements Owning and resolving specific inquiries and ensuring timely resolution Participating in local & global operations, audit, and valuation projects Qualifications: A degree in Finance, Economics, Computer Science, or other related fields Mastery of financial concepts with direct experience working with global markets 1 to 5 years of experience in trade support or post-trade operations at a Brokerage, Asset Manager, Investment Bank, or Proprietary Trading Firm DevOps experience is beneficial Aptitude and desire to learn new things and quickly adapt Great analytical and communication skills High-level MS Excel knowledge is a must: Comfortable with Look-ups, Pivot Tables, and Conditional Statements Interest in post-trade infrastructure Ability to work with large-scale data reconciliations is a must Python or SQL experience a plus A master's in finance or a CFA/Certification in Finance would be an added advantage Benefits: Tower’s headquarters are in the historic Equitable Building, right in the heart of NYC’s Financial District and our impact is global, with over a dozen offices around the world. At Tower, we believe work should be both challenging and enjoyable. That is why we foster a culture where smart, driven people thrive – without the egos.
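The reconciliation responsibility above has a simple computational core: compare two views of the same book and surface the breaks. A minimal sketch, where the symbols, quantities, and record layout are hypothetical rather than any firm's actual schema:

```python
# Illustrative position reconciliation: diff internal books against a
# prime broker's report and report every mismatch as a "break".
# All data below is made up for the example.

def reconcile(internal, broker):
    """Return {symbol: (internal_qty, broker_qty)} for every mismatch."""
    breaks = {}
    for symbol in set(internal) | set(broker):
        ours, theirs = internal.get(symbol, 0), broker.get(symbol, 0)
        if ours != theirs:
            breaks[symbol] = (ours, theirs)
    return breaks

internal_positions = {"AAPL": 1_000, "ESZ5": -50, "EURUSD": 2_000_000}
broker_positions   = {"AAPL": 1_000, "ESZ5": -45, "6EZ5": 10}

print(reconcile(internal_positions, broker_positions))
# One quantity break (ESZ5), one position missing at the broker (EURUSD),
# and one present only at the broker (6EZ5); AAPL matches and is omitted.
```

Real reconciliations add trade-level matching, tolerances for fees and FX rounding, and ageing of unresolved breaks, but the diff-and-investigate loop is the same.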
Our open-concept workplace, casual dress code, and well-stocked kitchens reflect the value we place on a friendly, collaborative environment where everyone is respected, and great ideas win. Our benefits include: Generous paid time off policies Savings plans and other financial wellness tools available in each region Hybrid working opportunities Free breakfast, lunch and snacks daily In-office wellness experiences and reimbursement for select wellness expenses (e.g., gym, personal training and more) Volunteer opportunities and charitable giving Social events, happy hours, treats and celebrations throughout the year Workshops and continuous learning opportunities At Tower, you’ll find a collaborative and welcoming culture, a diverse team and a workplace that values both performance and enjoyment. No unnecessary hierarchy. No ego. Just great people doing great work – together. Tower Research Capital is an equal opportunity employer.
Posted 1 week ago
25.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Tower Research Capital is a leading quantitative trading firm founded in 1998. Tower has built its business on a high-performance platform and independent trading teams. We have a 25+ year track record of innovation and a reputation for discovering unique market opportunities. Tower is home to some of the world’s best systematic trading and engineering talent. We empower portfolio managers to build their teams and strategies independently while providing the economies of scale that come from a large, global organization. Engineers thrive at Tower while developing electronic trading infrastructure at a world class level. Our engineers solve challenging problems in the realms of low-latency programming, FPGA technology, hardware acceleration and machine learning. Our ongoing investment in top engineering talent and technology ensures our platform remains unmatched in terms of functionality, scalability and performance. At Tower, every employee plays a role in our success. Our Business Support teams are essential to building and maintaining the platform that powers everything we do — combining market access, data, compute, and research infrastructure with risk management, compliance, and a full suite of business services. Our Business Support teams enable our trading and engineering teams to perform at their best. At Tower, employees will find a stimulating, results-oriented environment where highly intelligent and motivated colleagues inspire each other to reach their greatest potential. 
Responsibilities: Trading and process monitoring, as well as PNL reporting; ad-hoc exception handling and query management Affirmation of trades, minimizing operational risk and maximizing straight-through processing Reconciling positions, trades, cash, NAV, and fees Supporting clearing and settlement processes across multiple asset classes: Equity, Fixed Income, Derivatives, FX, and Commodities Investigating breaks raised by trading desks, other departments, and counterparties Assisting in automation efforts and process improvements Liaising with Prime Brokers and service vendors for troubleshooting and improvements Owning and resolving specific inquiries and ensuring timely resolution Participating in local & global operations, audit, and valuation projects Qualifications: A degree in Finance, Economics, Computer Science, or other related fields Mastery of financial concepts with direct experience working with global markets 5+ years of experience in trade support or post-trade operations at a Brokerage, Asset Manager, Investment Bank, or Proprietary Trading Firm DevOps experience is beneficial Aptitude and desire to learn new things and quickly adapt Great analytical and communication skills High-level MS Excel knowledge is a must: Comfortable with Look-ups, Pivot Tables, and Conditional Statements Interest in post-trade infrastructure Ability to work with large-scale data reconciliations is a must Python or SQL experience a plus A master's in finance or a CFA/Certification in Finance would be an added advantage Benefits: Tower’s headquarters are in the historic Equitable Building, right in the heart of NYC’s Financial District and our impact is global, with over a dozen offices around the world. At Tower, we believe work should be both challenging and enjoyable. That is why we foster a culture where smart, driven people thrive – without the egos.
Our open-concept workplace, casual dress code, and well-stocked kitchens reflect the value we place on a friendly, collaborative environment where everyone is respected, and great ideas win. Our benefits include: Generous paid time off policies Savings plans and other financial wellness tools available in each region Hybrid working opportunities Free breakfast, lunch and snacks daily In-office wellness experiences and reimbursement for select wellness expenses (e.g., gym, personal training and more) Volunteer opportunities and charitable giving Social events, happy hours, treats and celebrations throughout the year Workshops and continuous learning opportunities At Tower, you’ll find a collaborative and welcoming culture, a diverse team and a workplace that values both performance and enjoyment. No unnecessary hierarchy. No ego. Just great people doing great work – together. Tower Research Capital is an equal opportunity employer.
Posted 1 week ago
10.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Job Description Job Summary: As a Principal Software Engineer at JPMorgan Chase within the Technology Department, you provide expertise and engineering excellence as an integral part of an agile team to enhance, build, and deliver trusted market-leading technology products in a secure, stable, and scalable way. Leverage your advanced technical capabilities and collaborate with colleagues across the organization to promote best-in-class outcomes across various technologies to support one or more of the firm’s portfolios. Job Responsibilities Creates complex and scalable coding frameworks using appropriate software design frameworks Develops secure and high-quality production code, and reviews and debugs code written by others Advises cross-functional teams on technological matters within your domain of expertise Serves as the function’s go-to subject matter expert Contributes to the development of technical methods in specialized fields in line with the latest product development methodologies Creates durable, reusable software frameworks that are leveraged across teams and functions Influences leaders and senior stakeholders across business, product, and technology teams Champions the firm’s culture of diversity, equity, inclusion, and respect Required Qualifications, Capabilities, And Skills Formal training or certification on Enterprise Java systems concepts and 10+ years applied experience 12+ years of experience in Enterprise Java systems. 
Proficiency in microservices, eventing, SRE concepts, Agile Methodology, AI-powered development assistance tools, cloud computing, AWS, CI/CD pipelines, security & authentication Hands-on practical experience delivering system design, application development, testing, and operational stability Expert in one or more programming languages, including Java Experience applying expertise and new methods to determine solutions for complex technology problems in one or more technical disciplines Experience leading a product as a Lead Engineer and working with product and design. Ability to present and effectively communicate with Senior Leaders and Executives Understanding of the financial domain. Experience in supporting and maintaining low-latency systems. About Us JPMorganChase, one of the oldest financial institutions, offers innovative financial solutions to millions of consumers, small businesses and many of the world’s most prominent corporate, institutional and government clients under the J.P. Morgan and Chase brands. Our history spans over 200 years and today we are a leader in investment banking, consumer and small business banking, commercial banking, financial transaction processing and asset management. We recognize that our people are our strength and the diverse talents they bring to our global workforce are directly linked to our success. We are an equal opportunity employer and place a high value on diversity and inclusion at our company. We do not discriminate on the basis of any protected attribute, including race, religion, color, national origin, gender, sexual orientation, gender identity, gender expression, age, marital or veteran status, pregnancy or disability, or any other basis protected under applicable law. We also make reasonable accommodations for applicants’ and employees’ religious practices and beliefs, as well as mental health or physical disability needs. Visit our FAQs for more information about requesting an accommodation. About The Team J.P.
Morgan Asset & Wealth Management delivers industry-leading investment management and private banking solutions. Asset Management provides individuals, advisors and institutions with strategies and expertise that span the full spectrum of asset classes through our global network of investment professionals. Wealth Management helps individuals, families and foundations take a more intentional approach to their wealth or finances to better define, focus and realize their goals.
Posted 1 week ago
0 years
0 Lacs
Kolkata, West Bengal, India
On-site
Duration: 1–3 months (extendable or full-time job offer) Stipend/Salary: ₹25,000 – ₹50,000 INR/month (depending upon performance, complexity of role, time committed, and event scale) Roles/Responsibilities: Event Setup & Technical Operations Support setting up LAN hardware: PCs, routers, monitors, and network switches Coordinate with IT teams to provide low-latency, secure, and uninterrupted network environments Test and handle audio-visual equipment (cameras, mixers, displays) Live Event Support Work with casters, streamers, and observers to ensure uninterrupted broadcast continuity Keep track of real-time production applications (e.g., OBS, vMix, Stream Deck) Control overlays, in-game camera positioning, and transitions Logistics & Coordination Coordinate with tournament admins, player managers, and venue staff to align schedules Ensure real-time communication between all production units Troubleshoot and escalate technical problems during the event Content & Post-Production Support Record gameplay and BTS footage for reels and highlight edits Assist in coordinating digital assets and production files Best Qualifications (Fresher-Friendly): Esports enthusiasm and good knowledge of gaming culture (FPS, MOBA, Battle Royale, etc.) Working knowledge of PC hardware, LAN configurations, and internet protocols Experience with streaming software (OBS, vMix, XSplit – basics are fine) Good organizational skills, multitasking, and composure under pressure Flexibility to work odd hours, including weekends during tournaments Strong communication and teamwork skills ✅ Bonus Skills (Not Required): Experience working with video editing software (Adobe Premiere Pro, DaVinci Resolve) Practical experience with game titles such as BGMI, Valorant, CS2, Dota 2, etc.
Familiarity with Discord moderation, Trello, or Airtable Knowledge of fundamental audio-mixing or camera operations What You'll Gain: Real-world experience in live esports event production Networking with industry professionals Letter of recommendation and portfolio-worthy experience Offer for full-time production or community manager position
Posted 1 week ago
3.0 - 5.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
MindInventory is a leading digital transformation company in Ahmedabad that helps businesses of all sizes adopt cutting-edge technologies to achieve their digital ambitions. We are committed to providing practical tech solutions that drive growth with measurable results. Experience: 3-5 Years Location: Ahmedabad, Gujarat Type: Full time, Work From Office Roles And Responsibilities Write reusable, testable, and efficient code. Design and implement low-latency, high-availability, and performant applications. Integration of user-facing elements developed by front-end developers with server-side logic. Implementation of security and data protection. Integration of data storage solutions. Performance tuning, improvement, balancing, usability, automation. Work collaboratively with the design team to understand end-user requirements, provide technical solutions, and implement new software features. Good understanding of Python, Django and Flask. Good exposure to Python scientific libraries (NumPy, Pandas, TensorFlow). Strong knowledge of data structures and designing for performance, scalability and availability. Knowledge of MongoDB and web services. Experience in microservices and big data technologies will be a plus. Good grasp of algorithms, memory management and multithreaded programming. Good to have: MySQL, Redis, Elasticsearch. Able to fit in well within an informal startup environment and to provide hands-on management. High energy level and untiring commitment to drive oneself and the team towards goals. Expert in Python, with knowledge of at least one Python web framework (such as Django, Flask, etc.). Should have done development on Django and other UI web frameworks. Basic understanding of HTML, CSS, JavaScript, jQuery, and JS libraries like AngularJS. Implementing SOAP-based and RESTful services. UNIX/Linux experience is an added advantage. Should have experience in databases and SQL.
Good understanding of server-side templating languages such as Jinja2, Mako, etc. Strong unit test and debugging skills. Proficient understanding of code versioning tools (such as Git, Mercurial or SVN). Skills:- Python, FastAPI and Microservices
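The server-side templating the requirements mention (Jinja2, Mako) boils down to substituting context values into markup. A stdlib-only sketch of the idea, using `string.Template` as a stand-in for a real template engine; the page content and function names are made up for illustration:

```python
# Server-side templating in miniature: Jinja2 and Mako work on the same
# substitution principle shown here with the stdlib's string.Template.
# A real Django/Flask view would render a full template file instead.
from string import Template

page = Template("<h1>Hello, $name!</h1><p>You have $count new messages.</p>")

def render_inbox(name, count):
    """Fill the template with per-request context, returning HTML."""
    return page.substitute(name=name, count=count)

print(render_inbox("Priya", 3))
# <h1>Hello, Priya!</h1><p>You have 3 new messages.</p>
```

Real engines add what `string.Template` lacks: loops, conditionals, template inheritance, and automatic HTML escaping of untrusted values.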
Posted 1 week ago
12.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Site Reliability Engineer (DBRE) Job Type: Full-time Level: IC3 About the Role At Freshworks, uptime is sacred. As a Lead Site Reliability Engineer (SRE), you'll be the engineer behind the curtain—designing for resilience, automating recovery, and ensuring our systems stay fast, stable, and observable at scale. You’ll partner closely with engineering, platform, and product teams to shift reliability left and set the standard for performance and availability. If you live for clean telemetry, root cause resolution, and engineering chaos into confidence, this is your playground. Job Description Key Responsibilities Design and implement tools to improve availability, latency, scalability, and system health. Define SLIs/SLOs, manage error budgets, and drive performance engineering efforts. Build and maintain automated monitoring, alerting, and remediation pipelines. Collaborate with engineering teams to improve reliability by design. Lead incident response, root cause analysis, and blameless postmortems. Champion observability across services—logs, metrics, traces. Contribute to infrastructure architecture, automation, and reliability roadmaps. Advocate for SRE best practices across teams and functions. Requirements 7–12 years of experience in SRE, DevOps, or Production Engineering roles. Coding Proficiency : Develop clear, efficient, and well-structured code. Linux Expertise : In-depth knowledge of Linux for system administration and advanced troubleshooting. Containerization & Orchestration : Practical experience with Docker and Kubernetes for application deployment and management. CI/CD Management : Design, implement, and maintain Continuous Integration and Continuous Delivery pipelines. Security & Compliance : Understand security best practices and compliance in infrastructure. High Availability & Scalability : Design and implement highly available, scalable, and resilient distributed systems. 
Infrastructure as Code (IaC) & Automation: Proficient in IaC tools and automating infrastructure provisioning and management. Disaster Recovery (DR) & High Availability (HA): Deep knowledge and practical experience with various DR and HA strategies. Observability: Implement and utilize monitoring, logging, and tracing tools for system health. System Design (Distributed Systems): Design complex distributed systems with a focus on reliability and operations. Problem-Solving & Troubleshooting: Excellent analytical and diagnostic skills for resolving complex system issues. Qualifications Technical Skills & Experience 7–12 years of extensive hands-on experience with relational databases (e.g., MySQL, PostgreSQL, SQL Server) and distributed NoSQL systems (e.g., Cassandra, MongoDB, DynamoDB). Proven track record of designing and operating databases in large-scale cloud-native environments (AWS, GCP, Azure). Strong programming skills in Python, Go, or Bash for building infrastructure tooling and automation frameworks. Expertise with Infrastructure as Code (Terraform, Helm, Ansible) and Kubernetes for managing production database systems. Deep knowledge of database replication, clustering, backup/restore, and failover techniques. Advanced experience with observability tooling (Prometheus, Grafana, Datadog, New Relic) for monitoring distributed databases. Strong communication skills and ability to influence across teams and levels. Degree in Computer Science, Engineering, or related field. Experience building and scaling services in production with high uptime targets (99.99%+). Clear track record of reducing incident frequency and improving response metrics (MTTD/MTTR). Strong communicator who thrives in high-pressure environments. Passionate about automation, chaos engineering, and making things just work.
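The SLO and error-budget work listed in the responsibilities rests on simple arithmetic: an availability SLO of 99.9% over a 30-day window leaves 0.1% of that window (about 43.2 minutes) as error budget. A small sketch of the calculation, with illustrative function names:

```python
# Error-budget arithmetic behind SLO management: the budget is the slice
# of the measurement window that the SLO permits to be "bad".

def error_budget_minutes(slo_percent, window_days=30):
    """Minutes of allowed unavailability for an SLO over the window."""
    window_minutes = window_days * 24 * 60
    return window_minutes * (1 - slo_percent / 100)

def budget_remaining(slo_percent, downtime_minutes, window_days=30):
    """Fraction of the error budget still unspent (negative = blown)."""
    budget = error_budget_minutes(slo_percent, window_days)
    return 1 - downtime_minutes / budget

print(error_budget_minutes(99.9))              # about 43.2 minutes / 30 days
print(round(budget_remaining(99.9, 10.8), 2))  # a quarter of the budget spent
```

Teams typically alert on the *burn rate* of this budget rather than raw downtime, so a fast-burning incident pages immediately while slow erosion surfaces in review.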
Additional Information At Freshworks, we are creating a global workplace that enables everyone to find their true potential, purpose, and passion irrespective of their background, gender, race, sexual orientation, religion and ethnicity. We are committed to providing equal opportunity for all and believe that diversity in the workplace creates a more vibrant, richer work environment that advances the goals of our employees, communities and the business.
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
By clicking the “Apply” button, I understand that my employment application process with Takeda will commence and that the information I provide in my application will be processed in line with Takeda’s Privacy Notice and Terms of Use. I further attest that all information I submit in my employment application is true to the best of my knowledge. Job Description: The Future Begins Here: At Takeda, we are leading digital evolution and global transformation. By building innovative solutions and future-ready capabilities, we are meeting the need of patients, our people, and the planet. Bengaluru, the city, which is India’s epicenter of Innovation, has been selected to be home to Takeda’s recently launched Innovation Capability Center. We invite you to join our digital transformation journey. In this role, you will have the opportunity to boost your skills and become the heart of an innovative engine that is contributing to global impact and improvement. At Takeda’s ICC we Unite in Diversity : Takeda is committed to creating an inclusive and collaborative workplace, where individuals are recognized for their backgrounds and abilities they bring to our company. We are continuously improving our collaborators journey in Takeda, and we welcome applications from all qualified candidates. Here, you will feel welcomed, respected, and valued as an important contributor to our diverse team. About The Role We are seeking an innovative and skilled Principal AI/ML Engineer with a strong focus on designing and deploying scalable machine learning solutions. This role requires a strategic thinker who can architect production-ready solutions, collaborate closely with cross-functional teams, and ensure adherence to Takeda’s technical standards through participation in the Architecture Council. The ideal candidate has extensive experience in operationalizing ML models, MLOps workflows, and building systems aligned with healthcare standards. 
By leveraging cutting-edge machine learning and engineering principles, this role supports Takeda’s global mission of delivering transformative therapies to patients worldwide. How You Will Contribute Architect scalable and secure machine learning systems that integrate with Takeda’s enterprise platforms, including R&D, manufacturing, and clinical trial operations. Design and implement pipelines for model deployment, monitoring, and retraining using advanced MLOps tools such as MLflow, Airflow, and Databricks. Operationalize AI/ML models for production environments, ensuring efficient CI/CD workflows and reproducibility. Collaborate with Takeda’s Architecture Council to propose and refine AI/ML system designs, balancing technical excellence with strategic alignment. Implement monitoring systems to track model performance (accuracy, latency, drift) in a production setting, using tools such as Prometheus or Grafana. Ensure compliance with industry regulations (e.g., GxP, GDPR) and Takeda’s ethical AI standards in system deployment. Identify use cases where machine learning can deliver business value, and propose enterprise-level solutions aligned to strategic goals. Work with Databricks tools and platforms for model management and data workflows, optimizing solutions for scalability. Manage and document the lifecycle of deployed ML systems, including versioning, updates, and data flow architecture. Drive adoption of standardized architecture and MLOps frameworks across disparate teams within Takeda. Skills And Qualifications Education Bachelor’s, Master’s, or Ph.D. in Computer Science, Software Engineering, Data Science, or related field. Experience At least 6–8 years of experience in machine learning system architecture, deployment, and MLOps, with a significant focus on operationalizing ML at scale. Proven track record in designing and advocating ML/AI solutions within enterprise architecture frameworks and council-level decision-making.
Technical Skills Proficiency in deploying and managing machine learning pipelines using MLOps tools like MLflow, Airflow, Databricks, or ClearML. Strong programming skills in Python and experience with machine learning libraries such as Scikit-learn, XGBoost, LightGBM, and TensorFlow. Deep understanding of CI/CD pipelines and tools (e.g., Jenkins, GitHub Actions) for automated model deployment. Familiarity with Databricks tools and services for scalable data workflows and model management. Expertise in building robust observability and monitoring systems to track ML systems in production. Hands-on experience with classical machine learning techniques, such as random forests, decision trees, SVMs, and clustering methods. Knowledge of infrastructure-as-code tools like Terraform or CloudFormation to enable automated deployments. Experience in handling regulatory considerations and compliance in healthcare AI/ML implementations (e.g., GxP, GDPR). Soft Skills Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills for influencing technical and non-technical stakeholders. Leadership ability to mentor teams and drive architecture-standardization initiatives. Ability to manage projects independently and advocate for AI/ML adoption across Takeda. Preferred Qualifications Real-world experience operationalizing machine learning for pharmaceutical domains, including drug discovery, patient stratification, and manufacturing process optimization. Familiarity with ethical AI principles and frameworks, aligned with FAIR data standards in healthcare. Publications or contributions to AI research or MLOps tooling communities. WHAT TAKEDA ICC INDIA CAN OFFER YOU: Takeda is certified as a Top Employer, not only in India, but also globally. No investment we make pays greater dividends than taking good care of our people. At Takeda, you take the lead on building and shaping your own career.
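The model-performance monitoring the role describes (accuracy, latency, drift) can be illustrated with a toy drift check: compare a live feature's mean against its training-time baseline in units of the baseline's standard deviation. The data and threshold below are purely illustrative, and a production setup would export such metrics to Prometheus or Grafana rather than print them:

```python
# Minimal input-drift check: flag a feature whose live mean has shifted
# by more than `threshold` baseline standard deviations.
from statistics import mean, stdev

def drift_score(baseline, live):
    """Standardized shift of the live mean vs. the training baseline."""
    return abs(mean(live) - mean(baseline)) / stdev(baseline)

def check_drift(baseline, live, threshold=3.0):
    return drift_score(baseline, live) > threshold

baseline = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]  # feature values at training time
stable   = [1.0, 0.98, 1.02, 1.01]           # live window, no shift
shifted  = [2.1, 2.0, 2.2, 1.9]              # live window, clear shift

print(check_drift(baseline, stable))   # False
print(check_drift(baseline, shifted))  # True
```

Production systems usually prefer distribution-level tests (e.g., population stability index or Kolmogorov-Smirnov) over a mean shift, but the retrain-when-drifted feedback loop is the same.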
Joining the ICC in Bengaluru will give you access to high-end technology, continuous training, and a diverse and inclusive network of colleagues who will support your career growth.

BENEFITS:
It is our priority to provide competitive compensation and a benefits package that bridges your personal life with your professional career. Amongst our benefits are:
- Competitive salary + performance annual bonus
- Flexible work environment, including hybrid working
- Comprehensive healthcare insurance plans for self, spouse, and children
- Group Term Life Insurance and Group Accident Insurance programs
- Health & wellness programs, including annual health screening and weekly health sessions for employees
- Employee Assistance Program
- 5 days of leave every year for voluntary service, in addition to humanitarian leaves
- Broad variety of learning platforms
- Diversity, Equity, and Inclusion programs
- No Meeting Days
- Reimbursements – home internet & mobile phone
- Employee Referral Program
- Leaves – paternity leave (4 weeks), maternity leave (up to 26 weeks), bereavement leave (5 days)

ABOUT ICC IN TAKEDA:
Takeda is leading a digital revolution. We’re not just transforming our company; we’re improving the lives of millions of patients who rely on our medicines every day. As an organization, we are committed to our cloud-driven business transformation and believe the ICCs are the catalysts of change for our global organization.

Locations: IND - Bengaluru
Worker Type: Employee
Worker Sub-Type: Regular
Time Type: Full time
Posted 1 week ago
The job market for latency professionals in India is growing rapidly as industries increasingly rely on real-time data processing and low-latency systems. Latency jobs involve optimizing systems to reduce response times and improve overall performance, making them crucial in sectors such as finance, e-commerce, and telecommunications.
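"Reducing response time" in this kind of work starts with measuring it correctly: low-latency engineering tracks tail percentiles (p95/p99), not averages, because rare slow requests dominate user experience. A minimal, standard-library-only sketch (the benchmarked function and run count are arbitrary placeholders):

```python
import time
import statistics

def measure_latency_ms(fn, runs=200):
    """Time repeated calls to fn and report percentile latencies in
    milliseconds. time.perf_counter() is used because it is a
    monotonic, high-resolution clock suited to interval timing."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return {
        "p50": statistics.median(samples),
        "p95": samples[int(0.95 * (runs - 1))],
        "p99": samples[int(0.99 * (runs - 1))],
    }

# Example: profile a trivial placeholder workload.
stats = measure_latency_ms(lambda: sum(range(1000)))
```

A wide gap between p50 and p99 is usually the first signal that GC pauses, lock contention, or network jitter need investigation.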
Entry-level professionals in latency roles can expect to earn around INR 6-8 lakhs per annum, while experienced professionals with several years of experience can earn upwards of INR 20 lakhs per annum.
In the field of latency, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually moving on to roles like Architect or Chief Technology Officer.
In addition to expertise in latency optimization, professionals in this field are often expected to have strong skills in programming languages like Java, C++, or Python, as well as knowledge of networking protocols and systems architecture.
As you venture into the field of latency jobs in India, remember to hone your skills, stay updated with industry trends, and approach interviews with confidence. With the right preparation and determination, you can excel in this dynamic and rewarding career path. Good luck!