
9074 Tuning Jobs - Page 49

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

3.0 years

0 Lacs

Greater Chennai Area

On-site


Redefine the future of customer experiences, one conversation at a time. We're changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place, powered by AI and built by amazing humans. Our culture is forward-thinking, customer-obsessed, and built on an unwavering belief that connection fuels business and life: connections to our customers with our signature Amazing Service®, to our products and services, and, most importantly, to each other. Since 2008, 100,000+ companies and 1M+ users have relied on Nextiva for customer and team communication. If you're ready to collaborate and create with amazing people, let your personality shine, and be on the frontlines of helping businesses deliver amazing experiences, you're in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing.

We are seeking a talented and experienced AI/NLP Engineer to join our dynamic team. The ideal candidate will have a strong background in Artificial Intelligence (AI), Natural Language Processing (NLP), and Machine Learning (ML), with proven expertise in fine-tuning, deploying, and optimizing state-of-the-art NLP models. This role demands a hands-on approach to designing, developing, and integrating NLP solutions tailored to complex business challenges. If you are passionate about cutting-edge AI, have a knack for deploying scalable models in production environments, and enjoy collaborating with cross-functional teams, we'd love to hear from you!

Key Responsibilities:
- Model Development and Fine-tuning: Customize, fine-tune, and optimize open-source NLP models (e.g., LSTM/RNN-based, Transformer-based, Conformer-based, and VITS architectures) using transfer learning; develop and train models from scratch when needed to address specific business challenges.
- Generative AI & Prompt Engineering: Design effective prompts for generative AI tasks to meet diverse user and business requirements; build similarity models (e.g., symmetric/asymmetric: gte, bge, MiniLM) using BERT, XLNet, and cross-encoders for LLM-based Retrieval-Augmented Generation (RAG) use cases.
- Model Deployment & Optimization: Deploy models at scale using Docker, Kubernetes, and cloud platforms like AWS; optimize performance using techniques such as vLLM (PagedAttention), quantization, and adapter-based fine-tuning with LoRA and QLoRA (a hedged LoRA sketch follows this posting).
- Performance Monitoring and Enhancement: Continuously evaluate and enhance model performance to ensure accuracy, scalability, and minimal latency; build pipelines for data preprocessing, model serving, and monitoring using tools like Airflow and AWS Batch.
- Collaboration: Work closely with data scientists, software engineers, and product teams to integrate NLP models seamlessly into business applications.

Required Skills:
- NLP & Machine Learning: 3+ years of experience in NLP, with expertise in fine-tuning, transfer learning, and developing custom models; hands-on experience in ASR, TTS, or custom language model architectures (e.g., auto-regressive transformers, RWKV).
- Generative AI & Prompt Engineering: Proficiency in designing and crafting prompts for generative AI models aligned with business use cases.
- Programming Skills: Strong proficiency in Python, with experience implementing NLP models and building pipelines.
- Model Deployment: Hands-on experience with Docker, Kubernetes, and cloud-based deployment on platforms such as AWS.
- Deep Learning Frameworks: Proficiency with TensorFlow, PyTorch, and libraries like Hugging Face Transformers, NLTK, spaCy, and Gensim.

Bonus Skills:
- MLOps & Cloud Services: Experience with AWS (Lambda, Step Functions, Batch), GCP, Azure, or NeMo for model deployment and pipeline management.
- Large-Scale Deployments: Expertise in scaling AI models and managing SLAs for real-world applications.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Proven track record of working with large datasets and creating robust pipelines for data ingestion and preprocessing.
- Strong understanding of neural networks and advanced deep learning architectures for NLP.

Total Rewards:
Our Total Rewards offerings are designed to let employees take care of themselves and their families so they can be their best, in and out of the office. Compensation packages are tailored to each role and candidate's qualifications; we consider skills, experience, training, and certifications, and depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses.
- Medical 🩺: Insurance coverage for employees, their spouse, and up to two dependent children with a limit of INR 500,000, plus parents or in-laws up to INR 300,000.
- Group Term & Group Personal Accident Insurance 💼: Coverage against death or injury sustained during the policy period due to an accident caused by violent, visible, and external means. Coverage type: employee only; sum insured: 3x annual CTC with a minimum cap of INR 10,00,000; free cover limit: INR 1.5 crore.
- Work-Life Balance ⚖️: 15 days of privilege leave, 6 days of paid sick leave, and 6 days of casual leave per calendar year; 26 weeks of paid maternity leave, 1 week of paternity leave, a day off on your birthday, and paid holidays.
- Financial Security 💰: Provident Fund and Gratuity.
- Wellness 🤸: Employee Assistance Program and comprehensive wellness initiatives.
- Growth 🌱: Access to ongoing learning and development opportunities and career advancement.

At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To see what's going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
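Editor's note: for readers unfamiliar with the adapter-based fine-tuning (LoRA/QLoRA) this posting names, here is a minimal sketch using the Hugging Face transformers and peft libraries. The base model name and hyperparameters are illustrative assumptions, not requirements from the listing; the training loop itself (e.g., via transformers' Trainer) is omitted.

```python
# Minimal LoRA fine-tuning setup (model name and hyperparameters are
# illustrative assumptions, not taken from the posting).
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "facebook/opt-350m"  # hypothetical small base model for illustration
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# Attach low-rank adapters; only the adapter weights are trained,
# which is what makes LoRA far cheaper than full fine-tuning.
config = LoraConfig(
    r=8,                                   # rank of the low-rank update
    lora_alpha=16,                         # scaling applied to the update
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically well under 1% of all weights
```

QLoRA follows the same pattern but loads the base model in 4-bit precision first, which is what makes it fit on smaller GPUs.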

Posted 4 days ago

Apply

6.0 years

0 Lacs

Greater Chennai Area

On-site


Redefine the future of customer experiences, one conversation at a time. We're changing the game with a first-of-its-kind, conversation-centric platform that unifies team collaboration and customer experience in one place, powered by AI and built by amazing humans. Our culture is forward-thinking, customer-obsessed, and built on an unwavering belief that connection fuels business and life: connections to our customers with our signature Amazing Service®, to our products and services, and, most importantly, to each other. Since 2008, 100,000+ companies and 1M+ users have relied on Nextiva for customer and team communication. If you're ready to collaborate and create with amazing people, let your personality shine, and be on the frontlines of helping businesses deliver amazing experiences, you're in the right place. Build Amazing - Deliver Amazing - Live Amazing - Be Amazing.

We are seeking a talented and experienced AI/NLP Engineer to join our dynamic team. The ideal candidate will have a strong background in Artificial Intelligence (AI), Natural Language Processing (NLP), and Machine Learning (ML), with proven expertise in fine-tuning, deploying, and optimizing state-of-the-art NLP models. This role demands a hands-on approach to designing, developing, and integrating NLP solutions tailored to complex business challenges. If you are passionate about cutting-edge AI, have a knack for deploying scalable models in production environments, and enjoy collaborating with cross-functional teams, we'd love to hear from you!

Key Responsibilities:
- Model Development and Fine-tuning: Customize, fine-tune, and optimize open-source NLP models (e.g., Transformer-based, Conformer-based, and VITS architectures) using transfer learning; develop and train models from scratch when needed to address specific business challenges.
- Generative AI & Prompt Engineering: Design effective prompts for generative AI tasks to meet diverse user and business requirements; build similarity models (e.g., symmetric/asymmetric) using BERT, XLNet, and cross-encoders for LLM-based Retrieval-Augmented Generation (RAG) use cases (a hedged retrieve-and-rerank sketch follows this posting).
- Model Deployment & Optimization: Deploy models at scale using Docker, Kubernetes, and cloud platforms like AWS; optimize performance using techniques such as vLLM (PagedAttention), quantization, and adapter-based fine-tuning with LoRA and QLoRA.
- Performance Monitoring and Enhancement: Continuously evaluate and enhance model performance to ensure accuracy, scalability, and minimal latency; build pipelines for data preprocessing, model serving, and monitoring using tools like Airflow and AWS Batch.
- Collaboration: Work closely with data scientists, software engineers, and product teams to integrate NLP models seamlessly into business applications.

Required Skills:
- NLP & Machine Learning: 6+ years of experience in NLP, with expertise in fine-tuning, transfer learning, and developing custom models; hands-on experience in ASR, TTS, or custom language model architectures (e.g., auto-regressive transformers, RWKV).
- Generative AI & Prompt Engineering: Proficiency in designing and crafting prompts for generative AI models aligned with business use cases.
- Programming Skills: Strong proficiency in Python, with experience implementing NLP models and building pipelines.
- Model Deployment: Hands-on experience with Docker, Kubernetes, and cloud-based deployment on platforms such as AWS.
- Deep Learning Frameworks: Proficiency with TensorFlow, PyTorch, and libraries like Hugging Face Transformers, NLTK, spaCy, and Gensim.

Bonus Skills:
- MLOps & Cloud Services: Experience with AWS (Lambda, Step Functions, Batch), GCP, Azure, or NeMo for model deployment and pipeline management.
- Large-Scale Deployments: Expertise in scaling AI models and managing SLAs for real-world applications.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- Proven track record of working with large datasets and creating robust pipelines for data ingestion and preprocessing.
- Strong understanding of neural networks and advanced deep learning architectures for NLP.

Total Rewards:
Our Total Rewards offerings are designed to let employees take care of themselves and their families so they can be their best, in and out of the office. Compensation packages are tailored to each role and candidate's qualifications; we consider skills, experience, training, and certifications, and depending on the position, compensation may include base salary and/or hourly wages, incentives, or bonuses.
- Medical 🩺: Insurance coverage for employees, their spouse, and up to two dependent children with a limit of INR 500,000, plus parents or in-laws up to INR 300,000.
- Group Term & Group Personal Accident Insurance 💼: Coverage against death or injury sustained during the policy period due to an accident caused by violent, visible, and external means. Coverage type: employee only; sum insured: 3x annual CTC with a minimum cap of INR 10,00,000; free cover limit: INR 1.5 crore.
- Work-Life Balance ⚖️: 15 days of privilege leave, 6 days of paid sick leave, and 6 days of casual leave per calendar year; 26 weeks of paid maternity leave, 1 week of paternity leave, a day off on your birthday, and paid holidays.
- Financial Security 💰: Provident Fund and Gratuity.
- Wellness 🤸: Employee Assistance Program and comprehensive wellness initiatives.
- Growth 🌱: Access to ongoing learning and development opportunities and career advancement.

At Nextiva, we're committed to supporting our employees' health, well-being, and professional growth. Established in 2008 and headquartered in Scottsdale, Arizona, Nextiva secured $200M from Goldman Sachs in late 2021, valuing the company at $2.7B. To see what's going on at Nextiva, check us out on Instagram, Instagram (MX), YouTube, LinkedIn, and the Nextiva blog.
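Editor's note: as a companion to the similarity-model requirement above, here is a minimal retrieve-then-rerank sketch with the sentence-transformers library, pairing a symmetric bi-encoder with a cross-encoder. The model names are public checkpoints and the documents are toy data, chosen purely for illustration.

```python
# Retrieve-then-rerank sketch for RAG (model names are public
# sentence-transformers checkpoints; documents are toy data).
from sentence_transformers import SentenceTransformer, CrossEncoder, util

docs = [
    "Reset a voicemail PIN from the admin portal.",
    "Port an existing number to a new provider.",
    "Configure call routing rules for business hours.",
]
query = "How do I change my voicemail password?"

# Stage 1: symmetric bi-encoder for fast candidate retrieval.
bi_encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = bi_encoder.encode(docs, convert_to_tensor=True)
query_emb = bi_encoder.encode(query, convert_to_tensor=True)
hits = util.semantic_search(query_emb, doc_emb, top_k=2)[0]

# Stage 2: a cross-encoder reranks the shortlist with joint attention
# over each (query, document) pair, trading speed for accuracy.
reranker = CrossEncoder("cross-encoder/ms-marco-MiniLM-L-6-v2")
pairs = [(query, docs[h["corpus_id"]]) for h in hits]
scores = reranker.predict(pairs)
best = pairs[scores.argmax()][1]
print(best)  # top passage to feed into the LLM prompt
```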

Posted 4 days ago

Apply

2.0 years

0 Lacs

Dholera, Gujarat, India

On-site


About The Business:
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India's first AI-enabled state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs), and high-performance computing logic, addressing growing demand in markets such as automotive, computing and data storage, wireless communications, and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission "To improve the quality of life of the communities we serve globally, through long-term stakeholder value creation based on leadership with Trust."

Job Responsibilities:
- Architect and implement scalable offline data pipelines for manufacturing systems including AMHS, MES, SCADA, PLCs, vision systems, and sensor data.
- Design and optimize ETL/ELT workflows using Python, Spark, SQL, and orchestration tools (e.g., Airflow) to transform raw data into actionable insights (a hedged Airflow sketch follows this posting).
- Lead database design and performance tuning across SQL and NoSQL systems, optimizing schema design, queries, and indexing strategies for manufacturing data.
- Enforce robust data governance by implementing data quality checks, lineage tracking, access controls, security measures, and retention policies.
- Optimize storage and processing efficiency through strategic use of formats (Parquet, ORC), compression, partitioning, and indexing for high-performance analytics.
- Implement streaming data solutions (using Kafka/RabbitMQ) to handle real-time data flows and ensure synchronization across control systems.
- Build dashboards using analytics tools like Grafana.
- Develop standardized data models and APIs to ensure consistency across manufacturing systems and enable data consumption by downstream applications.
- Collaborate cross-functionally with platform engineers, data scientists, automation teams, IT operations, manufacturing, and quality departments.
- Mentor junior engineers, establish best practices and documentation standards, and foster a data-driven culture throughout the organization.

Essential Attributes:
- Expertise in Python programming for building robust ETL/ELT pipelines and automating data workflows.
- Good understanding of the Hadoop ecosystem.
- Hands-on experience with Apache Spark (PySpark) for distributed data processing and large-scale transformations.
- Strong proficiency in SQL for data extraction, transformation, and performance tuning across structured datasets.
- Proficiency in using Apache Airflow to orchestrate and monitor complex data workflows reliably.
- Skill in real-time data streaming using Kafka or RabbitMQ to handle data from manufacturing control systems.
- Experience with both SQL and NoSQL databases, including PostgreSQL, TimescaleDB, and MongoDB, for managing diverse data types.
- In-depth knowledge of data lake architectures and efficient file formats like Parquet and ORC for high-performance analytics.
- Proficiency in containerization and CI/CD practices using Docker and Jenkins or GitHub Actions for production-grade deployments.
- Strong understanding of data governance principles, including data quality, lineage tracking, and access control.
- Ability to design and expose RESTful APIs using FastAPI or Flask to enable standardized and scalable data consumption.

Qualifications:
- BE/ME degree in Computer Science, Electronics, or Electrical Engineering.

Desired Experience Level:
- Master's + 2 years of relevant experience, or Bachelor's + 4 years of relevant experience.
- Experience in the semiconductor industry is a plus.
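Editor's note: to make the Airflow-orchestrated ETL workflow mentioned above concrete, here is a minimal DAG sketch (assuming Airflow 2.x). The DAG id, callables, and schedule are illustrative placeholders, not details from the posting.

```python
# Minimal daily ETL DAG sketch (DAG id, paths, and transform logic
# are hypothetical placeholders).
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    # e.g., pull raw sensor/MES records from a staging location
    ...

def transform():
    # e.g., clean and conform records with pandas or PySpark
    ...

def load():
    # e.g., write Parquet partitions into the analytics store
    ...

with DAG(
    dag_id="manufacturing_etl",      # hypothetical name
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3  # linear dependency chain: extract, then transform, then load
```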

Posted 4 days ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Technical Skills:
- 8+ years of hands-on experience in SQL development, query optimization, and performance tuning.
- Expertise in ETL tools (SSIS, Azure ADF, Databricks, Snowflake, or similar) and relational databases (SQL Server, PostgreSQL, MySQL, Oracle).
- Strong understanding of data warehousing concepts, data modeling, indexing strategies, and query execution plans.
- Proficiency in writing efficient stored procedures, views, triggers, and functions for large datasets.
- Experience working with structured and semi-structured data (CSV, JSON, XML, Parquet).
- Hands-on experience in data validation, cleansing, and reconciliation to maintain high data quality (a hedged reconciliation sketch follows this posting).
- Exposure to real-time and batch data processing techniques.

Nice-to-Have:
- Experience with Azure or other data engineering stacks (ADF, Azure SQL, Synapse, Databricks, Snowflake), Python, Spark, NoSQL databases, and reporting tools like Power BI or Tableau.
- Strong problem-solving skills and the ability to troubleshoot ETL failures and performance issues.
- Ability to collaborate with business and analytics teams to understand and implement data requirements.
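Editor's note: as an illustration of the source-to-target validation and reconciliation work described above, here is a minimal pandas sketch. The file names, key columns, and tolerance are hypothetical placeholders.

```python
# Source-vs-target reconciliation sketch with pandas (file names and
# column names are hypothetical placeholders).
import pandas as pd

source = pd.read_csv("source_extract.csv")   # e.g., OLTP export
target = pd.read_csv("warehouse_load.csv")   # e.g., post-ETL snapshot

key = ["order_id"]

# Row-count check: a cheap first signal that a load dropped records.
print(f"source={len(source)} target={len(target)}")

# Key-level check: which business keys are missing on either side?
merged = source[key].merge(target[key], on=key, how="outer", indicator=True)
missing_in_target = merged[merged["_merge"] == "left_only"]
unexpected_in_target = merged[merged["_merge"] == "right_only"]
print(len(missing_in_target), "keys missing;",
      len(unexpected_in_target), "keys unexpected")

# Value-level check on a measure column, tolerant of float noise.
totals = (source["amount"].sum(), target["amount"].sum())
assert abs(totals[0] - totals[1]) < 0.01, f"amount drift: {totals}"
```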

Posted 4 days ago

Apply

4.0 years

0 Lacs

Dholera, Gujarat, India

On-site


About The Business:
Tata Electronics Private Limited (TEPL) is a greenfield venture of the Tata Group with expertise in manufacturing precision components. Tata Electronics (a wholly owned subsidiary of Tata Sons Pvt. Ltd.) is building India's first AI-enabled state-of-the-art semiconductor foundry. This facility will produce chips for applications such as power management ICs, display drivers, microcontrollers (MCUs), and high-performance computing logic, addressing growing demand in markets such as automotive, computing and data storage, wireless communications, and artificial intelligence. The Tata Group operates in more than 100 countries across six continents, with the mission "To improve the quality of life of the communities we serve globally, through long-term stakeholder value creation based on leadership with Trust."

Job Responsibilities:
- Architect and implement a scalable, offline Data Lake for structured, semi-structured, and unstructured data in an on-premises, air-gapped environment.
- Collaborate with data engineers, factory IT, and edge device teams to enable seamless data ingestion and retrieval across the platform.
- Integrate with upstream systems like MES, SCADA, and process tools to capture high-frequency manufacturing data efficiently.
- Monitor and maintain system health, including compute resources, storage arrays, disk I/O, memory usage, and network throughput.
- Optimize Data Lake performance via partitioning, deduplication, compression (Parquet/ORC), and effective indexing strategies (a hedged PySpark compaction sketch follows this posting).
- Select, integrate, and maintain tools like Apache Hadoop, Spark, Hive, and HBase, plus custom ETL pipelines suitable for offline deployment.
- Build custom ETL workflows for bulk and incremental data ingestion using Python, Spark, and shell scripting.
- Implement data governance policies covering access control, retention periods, and archival procedures with security and compliance in mind.
- Establish and test backup, failover, and disaster recovery protocols specifically designed for offline environments.
- Document architecture designs, optimization routines, job schedules, and standard operating procedures (SOPs) for platform maintenance.
- Conduct root cause analysis for hardware failures, system outages, or data integrity issues.
- Drive system scalability planning for multi-fab or multi-site future expansions.

Essential Attributes (Tech Stack):
- Hands-on experience designing and maintaining offline or air-gapped Data Lake environments.
- Deep understanding of Hadoop ecosystem tools: HDFS, Hive, MapReduce, HBase, YARN, ZooKeeper, and Spark.
- Expertise in custom ETL design and large-scale batch and stream data ingestion.
- Strong scripting and automation capabilities using Bash and Python.
- Familiarity with data compression formats (ORC, Parquet) and ingestion frameworks (e.g., Flume).
- Working knowledge of message queues such as Kafka or RabbitMQ, with a focus on integration logic.
- Proven experience in system performance tuning, storage efficiency, and resource optimization.

Qualifications:
- BE/ME in Computer Science, Machine Learning, Electronics Engineering, Applied Mathematics, or Statistics.

Desired Experience Level:
- 4 years of relevant experience post Bachelor's, or 2 years post Master's.
- Experience in the semiconductor industry is a plus.
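Editor's note: to make the partitioning, deduplication, and compression responsibilities above concrete, here is a minimal PySpark compaction sketch. The paths, column names, and partition keys are hypothetical placeholders.

```python
# Compaction sketch: deduplicate and rewrite as partitioned, compressed
# Parquet (paths, columns, and partition keys are hypothetical).
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("lake-compaction").getOrCreate()

raw = spark.read.json("/lake/raw/sensor_events/")  # hypothetical path

curated = (
    raw
    .dropDuplicates(["tool_id", "event_ts"])   # makes re-ingestion idempotent
    .withColumn("event_date", F.to_date("event_ts"))
)

# Partitioning by date keeps scans pruned; snappy balances CPU vs size.
(curated
    .repartition("event_date")                 # one writer per partition
    .write
    .mode("overwrite")
    .partitionBy("event_date")
    .option("compression", "snappy")
    .parquet("/lake/curated/sensor_events/"))
```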

Posted 4 days ago

Apply

10.0 years

0 Lacs

India

On-site


Company Description:
👋🏼 We're Nagarro, a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale across all devices and digital mediums, and our people exist everywhere in the world (18,000+ experts across 38 countries, to be exact). Our work culture is dynamic and non-hierarchical. We are looking for great new colleagues; that is where you come in!

Requirements:
- Total experience of 10+ years.
- Hands-on working experience in Oracle Cloud/Fusion applications.
- Strong expertise in Oracle Fusion Data Intelligence (FDI), Oracle Analytics Cloud (OAC), Oracle Autonomous Data Warehouse (ADW), and Oracle Data Integrator (ODI).
- In-depth understanding of Oracle Cloud ERP data structures and subject areas.
- Proficiency in SQL, PL/SQL, and Oracle reporting frameworks.
- Experience with data modelling, ETL processes, and performance tuning.
- Ability to design and develop dashboards, KPIs, and reports across Finance, Supply Chain, and HCM.
- Ability to troubleshoot performance issues and ensure data accuracy and consistency.
- Currency with Oracle Cloud innovations and data integration strategies.
- Exposure to Oracle Cloud Infrastructure (OCI), object storage, and data lakes.
- Strong communication and coordination skills to work with cross-functional and globally distributed teams.

Responsibilities:
- Understand the project's functional and non-functional requirements and the business context of the application being developed.
- Understand and document requirements validated by the SMEs.
- Interact with clients to identify the scope of testing, expectations, acceptance criteria, and the availability of test data and environments.
- Work closely with the product owner in defining and refining acceptance criteria.
- Prepare the test plan/strategy; estimate test effort and prepare schedules for testing activities, assigning tasks and identifying constraints and dependencies.
- Manage risk: identify, mitigate, and resolve business and technical risks; determine the potential causes of problems and analyse multiple alternatives.
- Design and develop a framework for automated testing following the project's design and coding guidelines, and set up best practices for test automation.
- Prepare test reports summarizing the outcome of the testing phase and recommend whether the application is in a shippable state.
- Communicate measurable quality metrics, highlighting problem areas and suggesting solutions.
- Participate in retrospective meetings, helping identify the root cause of quality issues and ways to continuously improve the testing process.
- Conduct demos of the application for internal and external stakeholders.
- Work with the team and stakeholders to triage and prioritize defects for resolution.
- Give constructive feedback to team members and set clear expectations.

Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 4 days ago

Apply

12.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Title: Oracle DBA
Experience: 8-12 years
Location: Chennai, Bangalore
Employment Type: Full-Time

Job Summary:
We are seeking an experienced and highly skilled Oracle Database Administrator (DBA) with strong Python programming skills to join our technology team. The ideal candidate will be responsible for managing complex Oracle database environments, ensuring their performance, availability, and security, while also building automation scripts and tools in Python to streamline database operations.

Key Responsibilities:
- Design, install, configure, and maintain Oracle databases (versions 11g/12c/19c) in production and non-production environments.
- Perform proactive monitoring, tuning, and capacity planning to ensure high performance and availability.
- Implement and manage backup, recovery, and disaster recovery strategies.
- Apply patches and upgrades, and perform security hardening per compliance and audit requirements.
- Write and maintain Python scripts to automate database tasks, performance reports, monitoring tools, and health checks (a hedged health-check sketch follows this posting).
- Troubleshoot and resolve database issues, including slow queries, locking, and replication failures.
- Work closely with development teams to support application releases, schema design, and data migration.
- Design and implement data archiving, partitioning, and retention strategies.
- Support cloud and on-prem database environments, including migration to cloud platforms (AWS/Azure/OCI).
- Document database procedures, configurations, and best practices.

Required Skills & Experience:
- 8-12 years of experience as an Oracle DBA managing large-scale, enterprise-grade databases.
- Proficiency in Python scripting for automation, orchestration, and operational support.
- Solid understanding of database security, auditing, and compliance requirements.
- Familiarity with DevOps tools and CI/CD integration is a plus.
- Experience with Oracle Enterprise Manager (OEM) and monitoring tools.
- Exposure to cloud database services (OCI, AWS RDS, or Azure DB) is preferred.
- Excellent analytical, problem-solving, and communication skills.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Oracle Certified Professional (OCP) or other relevant certifications.
- Experience working in agile environments and with cross-functional teams.
- Knowledge of other databases (PostgreSQL, MySQL) is a plus.

Why Join Us:
- Work on high-impact projects in a modern data infrastructure environment.
- Opportunity to lead automation and innovation initiatives.
- Flexible work culture and learning opportunities.
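Editor's note: to illustrate the Python automation this posting emphasizes, here is a minimal health-check sketch using the python-oracledb driver. The DSN, credentials, and alert threshold are hypothetical placeholders.

```python
# Oracle health-check sketch using python-oracledb (DSN, credentials,
# and the 85% threshold are hypothetical placeholders).
import oracledb

conn = oracledb.connect(user="monitor", password="***",
                        dsn="dbhost:1521/ORCLPDB1")  # hypothetical DSN

with conn.cursor() as cur:
    # Tablespace usage from a standard data-dictionary view.
    cur.execute("""
        SELECT tablespace_name, ROUND(used_percent, 1)
        FROM dba_tablespace_usage_metrics
        ORDER BY used_percent DESC
    """)
    for name, pct in cur.fetchall():
        flag = "ALERT" if pct > 85 else "ok"   # hypothetical threshold
        print(f"{flag:5} {name:<30} {pct}%")

    # Count of currently blocked sessions as a quick locking signal.
    cur.execute(
        "SELECT COUNT(*) FROM v$session WHERE blocking_session IS NOT NULL"
    )
    print("blocked sessions:", cur.fetchone()[0])

conn.close()
```

In practice a script like this would run on a schedule (cron, OEM job, or Airflow) and push alerts to a monitoring channel rather than print to stdout.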

Posted 4 days ago

Apply

10.0 years

0 Lacs

India

On-site


Maximize Your Impact with TP:
Welcome to TP, a global hub of innovation and empowerment, where we redefine the future. With a remarkable €10 billion annual revenue and a global team of 500,000 employees serving 170 countries in over 300 languages, we lead in intelligent, digital-first solutions. As a globally certified Great Place to Work in 72 countries, our culture thrives on diversity, equity, and inclusion. We value your unique perspective and believe that your talent is the missing piece that completes our vision for a brighter, digitally driven tomorrow.

The Opportunity:
Teleperformance's AIML/GenAI ecosystem is a fast-paced and dynamic environment. As the Global AI Strategy & Solutions Lead, you will collaborate across TP.ai to drive the delivery of high-impact AI solutions aligned with business objectives. In this role, you will lead the development and execution of a strategic research and delivery agenda for our unique AI solution offering, ensuring the implementation of cutting-edge capabilities for our clients and markets. You will oversee the end-to-end lifecycle of AIML/GenAI solutions, from gathering initial requirements to selection, design, and scalable deployment, while ensuring cross-functional alignment between operations and engineering teams. Additionally, you will play a key role in understanding client priorities, aligning AI operations to business needs, and optimizing resource capacity planning. Maintaining strong customer relationships and ensuring seamless execution of AI initiatives will be critical to success in this role.

Responsibilities & Duties:
- Define, structure, and execute strategic AI/ML initiatives focused on production operations, ensuring effective governance of AIML platforms and the development of an AI operating and engagement model.
- Identify, diagnose, and resolve solution and performance challenges by researching and applying the latest AI advancements to enhance core business products.
- Develop AI acceleration strategies and collaborate across teams to integrate AIML/GenAI solutions into internal and external platforms.
- Design and apply data analysis methodologies, including data mining, statistics, machine learning, NLP, sentiment analysis, and text mining, to drive insights.
- Lead initiatives to enhance product and process quality, improving efficiency and scalability.
- Build and sustain a high-performing AI Solutions team by recruiting top talent, providing ongoing training, and mentoring staff.
- Unify, enrich, and analyze customer data to generate actionable insights and new business opportunities.
- Leverage existing in-house data platforms and recommend or build new solutions to exceed business requirements.
- Clearly communicate findings, recommendations, and optimization strategies for data systems and AI solutions.
- Partner with the business development team to design and execute AIML/GenAI solutions based on client needs.
- Demonstrate a deep understanding of AI concepts, tools, and methodologies, and effectively mentor teams on best practices.
- Apply data-driven approaches to align AI/ML solutions with specific business outcomes.
- Collaborate, influence, and build consensus across teams through strong relationships and active listening.

Qualifications:
- BA/BS or Master's in Computer Science, Data Science, Artificial Intelligence, Engineering, Mathematics, or a related technical field.
- 10+ years of experience in AI/ML, Generative AI, data science, or related fields, with a focus on strategy, solution design, and implementation.
- Proven experience leading global AI/ML initiatives, driving AI transformation, and scaling AI/ML solutions across multiple industries.
- Strong knowledge of machine learning, deep learning, natural language processing (NLP), computer vision, and generative AI frameworks (e.g., TensorFlow, Hugging Face, OpenAI).
- Experience with data services such as data labeling, annotation, synthetic data generation, and data pipeline management.
- Expertise in MLOps, AI model lifecycle management, and AI/ML infrastructure optimization.
- Familiarity with AI model training, fine-tuning, and deployment in cloud environments (e.g., AWS, Google Cloud, Azure).
- Understanding of AI ethics, bias mitigation, and Trust & Safety principles related to AI/ML and GenAI applications.
- Deep understanding of AI security, model robustness, and adversarial attacks; familiarity with federated learning, privacy-preserving AI, and regulatory AI frameworks.
- Proven ability to define and execute AI/ML strategies, aligning solutions with business goals and market trends.
- Experience leading cross-functional teams, including data scientists, AI engineers, product managers, and business stakeholders.
- Strong background in AI governance, risk management, and regulatory compliance (GDPR, CCPA, AI Act, etc.).
- Experience designing and implementing an AI Center of Excellence (CoE) and AI operating models.
- Ability to translate complex AI concepts into actionable business strategies and clearly communicate with executives and non-technical stakeholders.
- Strong experience in AI-driven product development, GTM strategy, and AI monetization models.
- Expertise in managing AI partnerships, vendor selection, and external collaborations.
- Exceptional problem-solving, analytical thinking, and decision-making skills, with a collaborative, team-oriented approach when working with large teams.

Preferred Qualifications:
- Ph.D. in Artificial Intelligence, Machine Learning, Data Science, Computer Science, Engineering, or a related technical field.
- 15+ years of experience in AI/ML, GenAI, or related fields, with extensive leadership in AI strategy, innovation, and implementation at a global scale.
- Proven experience driving AI adoption across diverse industries such as technology, finance, healthcare, retail, and media.
- Strong background in AI research and development (R&D) and productization of AI/ML models.
- Hands-on experience with cutting-edge AI technologies, including LLMs (Large Language Models), multimodal AI, reinforcement learning, and edge AI.
- Experience building and scaling AI Centers of Excellence (CoE) within large enterprises.
- Strong track record in AI-driven business transformation and innovation strategy.
- Ability to influence C-suite executives and board-level discussions on AI investments and roadmaps.
- Proven expertise in AI ethics, responsible AI development, and regulatory compliance (GDPR, CCPA, EU AI Act, etc.).
- Experience launching AI-based products and services, including pricing and monetization strategies.
- Established thought leadership in AI/ML, with publications, patents, or keynote speaking engagements at industry conferences (e.g., NeurIPS, AI Summit, TrustCon, Data+AI Summit).
- Strong network and relationships within AI research communities, academia, startups, and industry consortia.
- Experience securing AI-related funding, partnerships, and alliances with BigTech, startups, and regulatory bodies.

Pre-Employment Screenings:
By TP policy, employment in this position will be contingent on your successful completion and passage of a comprehensive background check, including global sanctions and watch-list screening.

Policy on Unsolicited Third-Party Candidate Submissions:
TP does not accept candidate submissions from unsolicited third parties, including recruiters or headhunters. Applications will not be considered, and no contractual association will be established through such submissions.

Diversity, Equity & Inclusion:
At TP, we are committed to fostering a diverse, equitable, and inclusive workplace. We welcome individuals from all backgrounds and lifestyles and do not discriminate based on gender identity or expression, sexual orientation, race, religion, age, national origin, citizenship, disability, pregnancy status, veteran status, or other differences.

Posted 4 days ago

Apply

0 years

0 Lacs

Bhubaneswar, Odisha, India

On-site


Role: Technical Lead
Location: Bhubaneswar, Kolkata
Experience: 4-12 years

Must-Have Competencies (Technical/Behavioral):
1. Good knowledge of PL/SQL programming (procedures, functions, triggers) and script development using a batch framework (PL/SQL and Unix).
2. Creation of database tables and other schema objects.
3. Good knowledge of SQL joins and cursors.
4. Good knowledge of the SQL Developer tool.
5. Creation of low-level technical design documents.
6. Knowledge of the SQL*Plus tool for loading flat files into the database.

Good-to-Have:
1. PL/SQL performance tuning knowledge.
2. Knowledge of basic Unix commands.

Responsibilities / Expectations from the Role:
1. Script development using the batch framework (PL/SQL and Unix).
2. Assigning technical tasks to the team on a daily basis.
3. Guiding the team to build end-to-end interface jobs and transformation scripts using the batch framework.
4. Providing technical solutions to team members; reviewing unit test results and code.
5. Preparing the technical design document and UDY enrollment.
6. Control-M template preparation and working with the BY team to set up the jobs.
7. BY ACT setup and configuration; BY worksheet, workbenches, FE pages, searches, and option-set configuration.
8. SCPO configuration: node pools, access, node pool memory, etc.
9. SRE node pool and server restart scripts; BY SRE process development, testing, and output validation.
10. Production deployment coordination with the BY team.
11. Technical KT sessions for TSL IT and BY teams before or after production migration.

Posted 4 days ago

Apply

10.0 years

0 Lacs

India

On-site


Job Description

Requirements:
- Total experience of 10+ years.
- Strong working experience in Dynamics 365 CRM and Power Platform.
- Expertise in end-to-end DCRM Sales and DCRM Service implementation.
- Strong customization experience, including custom entities, workflows, plugins, web resources, and custom API integrations.
- Hands-on experience with Power Automate, custom business logic, and webhooks.
- Experience integrating Azure Functions, Logic Apps, and third-party APIs with DCRM.
- Experience with Power BI dashboards for CRM analytics.
- Expertise in performance tuning, security model design, and governance for DCRM solutions.
- Experience in mobile-optimized CRM customization for D365 Sales & Service.
- Hands-on experience with Power Pages (Power Portals).
- Ability to work independently while participating in the design, development, and implementation of application systems.
- Actively seeks and shares knowledge, communicating and presenting information in a clear and organized manner.
- Ability to resolve challenges efficiently, working closely with project managers and executive management when needed.
- Strong verbal and written communication skills to ensure that ideas, strategies, and successes resonate within the team and with clients.

Responsibilities:
- Writing and reviewing great-quality code.
- Understanding functional requirements thoroughly and analysing the client's needs in the context of the project.
- Envisioning the overall solution for defined functional and non-functional requirements, and defining the technologies, patterns, and frameworks to realize it.
- Determining and implementing design methodologies and tool sets.
- Enabling application development by coordinating requirements, schedules, and activities.
- Leading or supporting UAT and production rollouts.
- Creating, understanding, and validating the WBS and estimated effort for a given module or task, and being able to justify it.
- Addressing issues promptly and responding positively to setbacks and challenges with a mindset of continuous improvement.
- Giving constructive feedback to team members and setting clear expectations.
- Helping the team troubleshoot and resolve complex bugs.
- Proposing solutions to issues raised during code/design review and being able to justify the decisions taken.
- Carrying out POCs to make sure that suggested designs and technologies meet the requirements.

Qualifications:
- Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Posted 4 days ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site


The purpose of this role is to develop required software features, achieving timely delivery in compliance with the performance and quality standards of the company.

Job Description:
Key skills required: JavaScript, React.js (front-end), and Python (back-end); full-stack app development covering both front-end and back-end, including API and service development; Azure cloud-native app development with Azure Functions, serverless, and Azure App Service.

Role Purpose:
- Build and enhance digital products and platforms using the Azure cloud-native app dev stack and Azure PaaS solutions.
- Build and extend full-stack web app solutions across front-end and back-end, using React.js + JavaScript on the front-end and Python + serverless on the back-end.
- Refactor, optimize, and improve the existing codebase for maintainability and scale.
- Interface with clients and internal teams to gather requirements and develop software solutions.
- Effectively convey task progress, evaluations, suggestions, and schedules, along with technical and process issues.
- Document the development process, architecture, and standard components.
- Coordinate with co-developers, keep the project manager and technical architect well informed of the status of the development effort, and serve as liaison between development staff.

Must-Have Skills:
- Expert developer in full-stack web app development.
- Front-end web app development using React.js, Styled Components, Context API, and the React/Redux ecosystem; good hands-on experience building scalable web applications.
- Working experience with the Tailwind and Material UI frameworks; knowledge of testing frameworks like Mocha and Jest.
- Expertise in HTML, CSS, and other front-end technologies; able to build and deploy NPM-based web apps and to work with and leverage existing component libraries.
- Expert in back-end service development using Python code deployed inside serverless functions: good hands-on experience building serverless functions in the cloud, building Python back-end APIs and services, and using various Python libraries.
- Expert in Azure cloud-native app development using Azure PaaS services: building and deploying web apps to Azure App Service, building and deploying serverless functions to Azure Functions (a hedged sketch follows this posting), and using Logic Apps to build end-to-end automation workflows; some experience with Azure IAM and security concepts.
- Working experience with REST APIs and custom component renders using data; experience writing and utilizing RESTful API services and performance-tuning large-scale apps.
- Hands-on experience working with Enterprise GitHub and Git flow; deep knowledge of object-oriented and functional programming.
- Good communication skills; able to interact with client teams.

Nice-to-Have Capabilities:
- Hands-on with web frameworks like Next.js, using them to build highly performant web apps.
- Hands-on with Node.js and related server-side JavaScript frameworks, including Express.js and NestJS, for server-side app development and building domain-driven microservices.
- Knowledge of GraphQL and of integrating GraphQL with Express.js or NestJS.
- Knowledge of API management and API gateways.
- Hands-on experience deploying full-stack MERN apps to a cloud platform such as Azure or AWS.
- Experience working with container apps and containerized environments.
- Experience on enterprise integration projects is a plus; experience with Gen-AI and LLM technology is a plus.

Location: DGS India - Pune - Kharadi EON Free Zone
Brand: Dentsu Creative
Time Type: Full time
Contract Type: Consultant
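Editor's note: to make the Python-on-Azure-Functions requirement concrete, here is a minimal HTTP-triggered function sketch using the Azure Functions Python v2 programming model. The route name and response shape are illustrative assumptions.

```python
# Minimal HTTP-triggered Azure Function (Python v2 programming model);
# the route and payload are hypothetical placeholders.
import json
import azure.functions as func

app = func.FunctionApp(http_auth_level=func.AuthLevel.ANONYMOUS)

@app.route(route="status")
def status(req: func.HttpRequest) -> func.HttpResponse:
    # Query parameters arrive on req.params; JSON bodies via req.get_json().
    name = req.params.get("name", "world")
    payload = {"message": f"hello {name}", "service": "status"}
    return func.HttpResponse(
        json.dumps(payload),
        mimetype="application/json",
        status_code=200,
    )
```

Deployed to an Azure Functions app, this would answer GET /api/status?name=team with a small JSON payload; a React front-end on Azure App Service would call it like any other REST endpoint.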

Posted 4 days ago

Apply

2.0 years

0 Lacs

Sahibzada Ajit Singh Nagar, Punjab, India

On-site


Key Responsibilities:
● Develop and maintain web applications using the Django and Flask frameworks.
● Design and implement RESTful APIs using Django Rest Framework (DRF); a hedged sketch follows this posting.
● Deploy, manage, and optimize applications on AWS services, including EC2, S3, RDS, Lambda, and CloudFormation.
● Build and integrate APIs for AI/ML models into existing systems.
● Create scalable machine learning models using frameworks like PyTorch, TensorFlow, and scikit-learn.
● Implement transformer architectures (e.g., BERT, GPT) for NLP and other advanced AI use cases.
● Optimize machine learning models through advanced techniques such as hyperparameter tuning, pruning, and quantization.
● Deploy and manage machine learning models in production environments using tools like TensorFlow Serving, TorchServe, and AWS SageMaker.
● Ensure the scalability, performance, and reliability of applications and deployed models.
● Collaborate with cross-functional teams to analyze requirements and deliver effective technical solutions.
● Write clean, maintainable, and efficient code following best practices.
● Conduct code reviews and provide constructive feedback to peers.
● Stay up to date with the latest industry trends and technologies, particularly in AI/ML.

Required Skills and Qualifications:
● Master's degree in Computer Science, Engineering, or a related field.
● 2+ years of professional experience as a Python developer.
● Proficiency in Python with a strong understanding of its ecosystem.
● Extensive experience with the Django and Flask frameworks.
● Hands-on experience with AWS services for application deployment and management.
● Strong knowledge of Django Rest Framework (DRF) for building APIs.
● Expertise in machine learning frameworks such as PyTorch, TensorFlow, and scikit-learn.
● Experience with transformer architectures for NLP and advanced AI solutions.
● Solid understanding of SQL and NoSQL databases (e.g., PostgreSQL, MongoDB).
● Familiarity with MLOps practices for managing the machine learning lifecycle.
● Basic knowledge of front-end technologies (e.g., JavaScript, HTML, CSS) is a plus.
● Excellent problem-solving skills and the ability to work independently and as part of a team.
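Editor's note: as an illustration of the DRF API work listed above, here is a minimal model/serializer/viewset sketch. The Prediction model and its fields are hypothetical, and the code assumes it lives inside a configured Django app.

```python
# Minimal Django Rest Framework endpoint sketch (model and field names
# are hypothetical placeholders; assumes a configured Django app).
from django.db import models
from rest_framework import serializers, viewsets

class Prediction(models.Model):            # hypothetical model (models.py)
    text = models.TextField()
    label = models.CharField(max_length=64)
    score = models.FloatField()
    created = models.DateTimeField(auto_now_add=True)

class PredictionSerializer(serializers.ModelSerializer):
    class Meta:
        model = Prediction
        fields = ["id", "text", "label", "score", "created"]

class PredictionViewSet(viewsets.ModelViewSet):
    # One class gives full CRUD: list/retrieve/create/update/destroy.
    queryset = Prediction.objects.all()
    serializer_class = PredictionSerializer

# urls.py would register this with a DefaultRouter:
#   router.register(r"predictions", PredictionViewSet)
```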

Posted 4 days ago

Apply

4.0 years

0 Lacs

India

Remote


Hi everyone,

Role: Data Scientist - Gen AI, AI/ML
Experience: 4+ years
Shift: General IST, 8-hour shift
Position Type: Remote, contractual

About the Role:
We are seeking a passionate and results-driven Data Scientist with deep expertise in Artificial Intelligence, Machine Learning, and Generative AI (GenAI). In this role, you will work at the intersection of data science and cutting-edge AI technologies to build intelligent systems, drive automation, and unlock business value from data.

Key Responsibilities:
- Design, develop, and deploy AI/ML models and GenAI-based applications to solve real-world business problems.
- Conduct data wrangling, preprocessing, feature engineering, and advanced analytics.
- Build NLP, computer vision, recommendation, or predictive models using traditional and deep learning methods.
- Collaborate with cross-functional teams including product managers, data engineers, and software developers.
- Stay current with advancements in AI/ML and apply best practices to improve existing systems.
- Design and evaluate LLM-based pipelines, prompt engineering strategies, and fine-tuning approaches (a hedged RAG sketch follows this posting).
- Present insights and results to stakeholders in a clear and actionable manner.
- Work on cloud-based ML pipelines (AWS/GCP/Azure) and MLOps frameworks for deployment and monitoring.

Key Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 4+ years of experience in data science and machine learning.
- Strong hands-on skills in Python, SQL, and ML libraries like scikit-learn, TensorFlow, PyTorch, and Hugging Face Transformers.
- Proficiency in GenAI use cases (text generation, summarization, image generation, etc.) and LLMs (OpenAI GPT, BERT, etc.).
- Experience with prompt engineering, RAG pipelines, LangChain, or vector databases is highly desirable.
- Strong understanding of data structures, algorithms, and model evaluation techniques.
- Exposure to cloud platforms (AWS/GCP/Azure) and containerization tools like Docker and Kubernetes is a plus.
- Excellent communication and problem-solving skills.
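Editor's note: to ground the RAG-pipeline skill mentioned above, here is a minimal retrieve-and-prompt sketch using sentence-transformers. The documents are toy data, and the final LLM call is deliberately omitted so any client can be swapped in.

```python
# Minimal retrieval-augmented generation sketch: embed, retrieve, then
# build a grounded prompt (documents are toy data; LLM call omitted).
from sentence_transformers import SentenceTransformer, util

docs = [
    "Invoices are payable within 30 days of issue.",
    "Refunds are processed to the original payment method.",
    "Support hours are 9am-6pm IST on weekdays.",
]
question = "When do invoices have to be paid?"

encoder = SentenceTransformer("all-MiniLM-L6-v2")
doc_emb = encoder.encode(docs, convert_to_tensor=True)
q_emb = encoder.encode(question, convert_to_tensor=True)
top = util.semantic_search(q_emb, doc_emb, top_k=2)[0]
context = "\n".join(docs[hit["corpus_id"]] for hit in top)

# Grounded prompt; pass it to your LLM of choice for the final answer.
prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
)
print(prompt)
```

Production RAG systems replace the in-memory search with a vector database and add chunking, reranking, and answer evaluation, but the embed-retrieve-prompt skeleton stays the same.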

Posted 4 days ago

Apply

5.0 years

0 Lacs

Mumbai Metropolitan Region

On-site


At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content, wherever and whenever it's consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Are you excited by the challenge of pushing the boundaries with the latest advancements in computer vision and multi-modal Large Language Models? Does the idea of working on the edge of AI research and applying it to create industry-defining software solutions resonate with you? At Nielsen Sports, we provide the most comprehensive and trusted data and analytics for the global sports ecosystem, helping clients understand media value, fan behavior, and sponsorship effectiveness. This role places you at the forefront of that mission, architecting and implementing sophisticated AI systems that unlock novel insights from complex multimedia sports data. We are looking for Principal / Senior Principal Engineers to join us.

Key Responsibilities:
- Technical Leadership & Architecture: Lead the design and architecture of scalable and robust AI/ML systems, focusing on computer vision and LLM applications for sports media analysis.
- Model Development & Training: Spearhead the development, training, and fine-tuning of sophisticated deep learning models (e.g., object detectors like RT-DETR, custom classifiers, generative models) on large-scale, domain-specific datasets such as sports imagery and video.
- Generalized Object Detection: Develop and implement advanced computer vision models capable of identifying a wide array of visual elements (e.g., logos, brand assets, on-screen graphics) in diverse and challenging sports content, including elements not seen during training (a hedged fine-tuning sketch follows this posting).
- LLM & GenAI Integration: Explore and implement solutions leveraging LLMs and generative AI for tasks such as content summarization, insight generation, data augmentation, and model validation (e.g., using vision models to verify detections).
- System Implementation & Deployment: Build and deploy production-ready AI/ML pipelines, ensuring efficiency, scalability, and maintainability; develop APIs and integrate models into broader Nielsen Sports platforms.
- UI/UX for AI Tools: Guide or contribute to the development of internal tools and simple user interfaces (using frameworks like Streamlit, Gradio, or web stacks) to showcase model capabilities, facilitate data annotation, and allow human-in-the-loop validation.
- Research & Innovation: Stay at the forefront of advancements in computer vision, LLMs, and related AI fields; evaluate and prototype new technologies and methodologies to drive innovation within Nielsen Sports.
- Mentorship & Collaboration: Mentor junior engineers, share knowledge, and collaborate effectively with cross-functional teams including product managers, data scientists, and operations.
- Performance Optimization: Optimize model performance for speed and accuracy, and ensure efficient use of computational resources, including cloud platforms like AWS, GCP, or Azure.
- Data Strategy: Contribute to data acquisition, preprocessing, and augmentation strategies to enhance model performance and generalization.

Required Qualifications:
- Bachelor's, Master's, or Ph.D. in Computer Science, Artificial Intelligence, Machine Learning, or a related quantitative field.
- 5+ years (Principal / MTS-4) or 8+ years (Senior Principal / MTS-5) of hands-on experience developing and deploying AI/ML models, with a strong focus on computer vision.
- Proven experience training deep learning models for object detection (e.g., YOLO, Faster R-CNN, DETR variants like RT-DETR) on custom datasets.
- Experience fine-tuning LLMs like Llama 2/3, Mistral, or open-source models available on Hugging Face, using libraries such as Hugging Face Transformers, PEFT, or specialized frameworks like Axolotl/Unsloth.
- Proficiency in Python and deep learning frameworks such as PyTorch (preferred) or TensorFlow/Keras.
- Demonstrable experience with multi-modal Large Language Models (LLMs) and their application, including familiarity with transformer architectures and fine-tuning techniques.
- Experience developing simple UIs for model interaction or data annotation (e.g., using Streamlit, Gradio, Flask/Django).
- Solid understanding of MLOps principles and experience with tools for model deployment, monitoring, and lifecycle management (e.g., Docker, Kubernetes, Kubeflow, MLflow).
- Strong software engineering fundamentals, including code versioning (Git), testing, and CI/CD practices.
- Excellent problem-solving skills and the ability to work with complex, large-scale datasets.
- Strong communication and collaboration skills, with the ability to convey complex technical concepts to diverse audiences.
- Full-stack development experience in any one stack.

Preferred Qualifications / Bonus Skills:
- Experience with generative AI vision models for tasks like image analysis, description, or validation.
- Track record of publications in top-tier AI/ML/CV conferences or journals.
- Experience working with sports data (broadcast feeds, social media imagery, sponsorship analytics).
- Proficiency in cloud computing platforms (AWS, GCP, Azure) and their AI/ML services.
- Experience with video processing and analysis techniques.
- Familiarity with data pipeline and distributed computing tools (e.g., Apache Spark, Kafka).
- Demonstrated ability to lead technical projects and mentor team members.

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law. Show more Show less
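The LoRA/PEFT fine-tuning named in the qualifications can be set up concisely. Below is a minimal sketch assuming the Hugging Face transformers and peft packages; the base model name is illustrative, and the target modules depend on the architecture being adapted.

```python
# Minimal LoRA adapter setup for fine-tuning a causal LM (illustrative sketch).
# Assumes: pip install transformers peft
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

base = "mistralai/Mistral-7B-v0.1"  # illustrative; any causal LM on the Hub
tokenizer = AutoTokenizer.from_pretrained(base)
model = AutoModelForCausalLM.from_pretrained(base)

# LoRA trains small low-rank adapter matrices instead of the full weights.
config = LoraConfig(
    r=8,                 # adapter rank
    lora_alpha=16,       # scaling factor
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # typically <1% of the base model's weights

# From here, the wrapped model trains with the usual Trainer / training loop.
```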

Posted 4 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

Remote


When you join Verizon

You want more out of a career. A place to share your ideas freely - even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love - driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together - lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the #VTeamLife.

What you'll be doing...

We're seeking a skilled Lead Senior Data Engineering Analyst to join our high-performing team and propel our telecom business forward. You'll contribute to building cutting-edge data products and assets for our wireless and wireline operations, spanning areas like consumer analytics, network performance, and service assurance. In this role, you will develop deep expertise in various telecom domains. As part of the Data Architecture & Strategy team, you'll collaborate closely with IT and business stakeholders to design and implement user-friendly, robust data product solutions. This includes defining data quality and incorporating data classification and governance principles.

Your responsibilities encompass:
Collaborating with stakeholders to understand data requirements and translate them into efficient data models.
Defining the scope and purpose of data product solutions, collaborating with stakeholders to finalize project blueprints, and overseeing the design process through all phases of the release lifecycle.
Designing, developing, and implementing data architecture solutions on GCP and Teradata to support our telecom business.
Designing data ingestion for both real-time and batch processing, ensuring efficient and scalable data acquisition for creating an effective data warehouse.
Formulating end-to-end data solutions (authoritative data source, data protection, taxonomy alignment).
Maintaining meticulous documentation, including data design specifications, functional test cases, data lineage, and other relevant artifacts for all data product solution assets.
Defining data architecture strategy (enterprise and domain level) and enterprise data model standards and ownership.
Proactively identifying opportunities for automation and performance optimization within your scope of work.
Collaborating effectively within a product-oriented organization, providing data expertise and solutions across multiple business units.
Cultivating strong cross-functional relationships and establishing yourself as a subject matter expert in data and analytics within the organization.
Acting as a mentor to junior team members.

What we're looking for...

You're curious about new technologies and the game-changing possibilities they create. You like to stay up-to-date with the latest trends and apply your technical expertise to solve business problems. You thrive in a fast-paced, innovative environment, working as a phenomenal teammate to drive the best results and business outcomes.

You'll need to have:
Bachelor's degree or four or more years of work experience.
Four or more years of relevant work experience in data architecture, data warehousing, or a related role.
Strong grasp of data architecture principles, best practices, and methodologies.
Expertise in SQL for data analysis, data discovery, data profiling, and solution design.
Experience defining data standards and data quality, and implementing industry best practices for scalable and maintainable data models using data modeling tools like Erwin.
Proven experience with ETL, data warehousing concepts, and the data management lifecycle.
Skill in creating technical documentation, including source-to-target mappings and SLAs.
Experience with shell scripting and the Python programming language.
Understanding of Git version control and basic Git commands.
Hands-on experience with cloud services relevant to data engineering and architecture (e.g., BigQuery, Dataflow, Dataproc, Cloud Storage).

Even better if you have one or more of the following:
Master's degree in Computer Science.
Experience in the telecommunications industry, with knowledge of wireless and wireline business domains.
Experience with stream-processing systems, APIs, events, etc.
Certification as a GCP Data Engineer/Architect.
Accuracy and attention to detail.
Good problem-solving, analytical, and research capabilities.
Good verbal and written communication.
Experience presenting to and influencing stakeholders.
Experience with large clusters, databases, BI tools, data quality, and performance tuning.
Experience driving one or more smaller teams toward technical delivery.

If Verizon and this role sound like a fit for you, we encourage you to apply even if you don't meet every "even better" qualification listed above.

#AI&D

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.

Scheduled Weekly Hours: 40

Equal Employment Opportunity: Verizon is an equal opportunity employer. We evaluate qualified applicants without regard to race, gender, disability or any other legally protected characteristics.
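As an illustration of the kind of GCP data work this role describes, here is a minimal sketch of running a data-quality check against a BigQuery table with the google-cloud-bigquery client; the project, dataset, table, and column names are hypothetical, and credentials are assumed to be configured in the environment.

```python
# Minimal BigQuery data-quality probe (illustrative sketch).
# Assumes: pip install google-cloud-bigquery, and application-default credentials.
from google.cloud import bigquery

client = bigquery.Client(project="my-telecom-project")  # hypothetical project

# Count NULL subscriber IDs as a simple completeness check.
sql = """
    SELECT COUNTIF(subscriber_id IS NULL) AS null_ids,
           COUNT(*) AS total_rows
    FROM `my-telecom-project.analytics.usage_events`
"""

row = next(iter(client.query(sql).result()))  # run the job, read the one row
null_ratio = row.null_ids / row.total_rows if row.total_rows else 0.0
print(f"null subscriber_id ratio: {null_ratio:.4%}")

# A production pipeline would alert or fail the load instead of asserting.
assert null_ratio < 0.01, "data quality SLA breached"
```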

Posted 4 days ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Greetings from TCS!

TCS is hiring a PySpark Developer.

Desired Experience Range: 5 to 9 years
Job Location: Chennai
Required Skills: PySpark, Hadoop, Big Data

Responsibilities of / Expectations from the Role:
• Minimum of 5 years of hands-on experience in designing, building, and optimizing data pipelines, data models, and Spark-based applications in Big Data environments.
• Extensive experience and deep expertise in data modeling and data model concepts, particularly with large datasets, ensuring the design and implementation of efficient, scalable, and high-performing data models.
• Strong software engineer who takes pride in what they develop, with a strong testing ethos.
• Strong proficiency in Python programming, with a focus on data processing and analysis.
• Proven experience working with PySpark for large-scale data processing and analysis.
• Extensive experience in designing, building, and optimizing Big Data pipelines and architectures, with a strong focus on supporting both batch and real-time data workflows.
• In-depth knowledge of Spark, including experience with Spark performance tuning techniques to achieve optimal processing efficiency.
• Strong SQL skills for querying and manipulating large datasets, with experience in optimizing complex queries for performance.

Regards,
Monisha
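To ground the Spark performance-tuning point above, here is a minimal PySpark sketch showing a broadcast join, one of the standard techniques for joining a large fact table to a small dimension table without a shuffle; the file paths and column names are illustrative.

```python
# Minimal PySpark broadcast-join sketch (illustrative).
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col

spark = (
    SparkSession.builder
    .appName("broadcast-join-demo")
    .config("spark.sql.shuffle.partitions", "200")  # tune to data volume
    .getOrCreate()
)

# Hypothetical inputs: a large fact table and a small dimension table.
events = spark.read.parquet("/data/events")        # large
countries = spark.read.parquet("/data/countries")  # small lookup

# broadcast() ships the small table to every executor, so the large side
# is never shuffled -- a common Spark performance-tuning technique.
joined = events.join(broadcast(countries), on="country_code", how="left")

daily = (
    joined.groupBy("event_date", "country_name")
          .count()
          .orderBy(col("count").desc())
)
daily.write.mode("overwrite").parquet("/data/daily_counts")
```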

Posted 4 days ago

Apply

0 years

0 - 0 Lacs

Panaji

On-site

Education: Bachelor's or Master's in Computer Science, Software Engineering, or a related field (or equivalent practical experience).
Hands-On ML/AI Experience: Proven record of deploying, fine-tuning, or integrating large-scale NLP models or other advanced ML solutions.
Programming & Frameworks: Strong proficiency in Python (PyTorch or TensorFlow) and familiarity with MLOps tools (e.g., Airflow, MLflow, Docker).
Security & Compliance: Understanding of data privacy frameworks, encryption, and secure data handling practices, especially for sensitive internal documents.
DevOps Knowledge: Comfortable setting up continuous integration/continuous delivery (CI/CD) pipelines, container orchestration (Kubernetes), and version control (Git).
Collaborative Mindset: Experience working cross-functionally with technical and non-technical teams; ability to clearly communicate complex AI concepts.

Role Overview:
Collaborate with cross-functional teams to build AI-driven applications for improved productivity and reporting.
Lead integrations with hosted AI solutions (ChatGPT, Claude, Grok) for immediate functionality without transmitting sensitive data, while laying the groundwork for a robust in-house AI infrastructure.
Develop and maintain on-premises large language model (LLM) solutions (e.g., Llama) to ensure data privacy and secure intellectual property.

Key Responsibilities:
LLM Pipeline Ownership: Set up, fine-tune, and deploy on-prem LLMs; manage data ingestion, cleaning, and maintenance for domain-specific knowledge bases.
Data Governance & Security: Assist our IT department in implementing role-based access controls, encryption protocols, and best practices to protect sensitive engineering data.
Infrastructure & Tooling: Oversee hardware/server configurations (or cloud alternatives) for AI workloads; evaluate resource usage and optimize model performance.
Software Development: Build and maintain internal AI-driven applications and services (e.g., automated report generation, advanced analytics, RAG interfaces, as well as custom desktop applications).
Integration & Automation: Collaborate with project managers and domain experts to automate routine deliverables (reports, proposals, calculations) and speed up existing workflows.
Best Practices & Documentation: Define coding standards, maintain technical documentation, and champion CI/CD and DevOps practices for AI software.
Team Support & Training: Provide guidance to data analysts and junior developers on AI tool usage, ensuring alignment with internal policies and limiting model "hallucinations."
Performance Monitoring: Track AI system metrics (speed, accuracy, utilization) and implement updates or retraining as necessary.

Job Types: Full-time, Permanent
Pay: ₹80,000.00 - ₹90,000.00 per month
Benefits: Health insurance, Provident Fund
Schedule: Day shift, Monday to Friday
Supplemental Pay: Yearly bonus
Work Location: In person
Application Deadline: 30/06/2025
Expected Start Date: 30/06/2025
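For context on the on-prem LLM work described above, here is a minimal sketch of loading and querying a locally cached open-weights model with the Hugging Face transformers pipeline, so no data leaves the machine at inference time; the model name is illustrative, and a production deployment would add batching, an API layer, and access controls.

```python
# Minimal on-prem text-generation sketch (illustrative).
# Assumes: pip install transformers torch accelerate, with the model
# already downloaded to the local Hugging Face cache.
from transformers import pipeline

generator = pipeline(
    "text-generation",
    model="meta-llama/Llama-3.2-1B",  # illustrative open-weights model
    device_map="auto",                # use a GPU if one is available
)

prompt = "Summarize the key risks in the following inspection report:\n..."
out = generator(prompt, max_new_tokens=200, do_sample=False)
print(out[0]["generated_text"])
```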

Posted 4 days ago

Apply

8.0 years

5 - 7 Lacs

Puducherry

On-site

Job Description for Associate Database Engineer (MongoDB)

Job Title: Associate Database Engineer (MongoDB)
Location: Pondicherry

About us: As a seasoned industry leader for 8 years in open-source database management, we specialize in providing unparalleled solutions and services for MySQL, MariaDB, MongoDB, PostgreSQL, TiDB, Cassandra, and more. At Mydbops, we are committed to providing exceptional service and building lasting relationships with our customers. Mydbops takes pride in being a PCI DSS-certified and ISO-certified company, reflecting our unwavering commitment to maintaining the highest security and operational excellence standards.

Responsibilities:
Monitor MongoDB databases, handle alarms, and identify root causes for performance and scalability issues.
Optimize MongoDB queries and configurations for better performance.
Manage and resolve support tickets within SLA guidelines.
Communicate effectively with clients via messaging platforms (e.g., Slack, Skype) for database-related activities and issues.
Participate in client calls regarding performance tuning and operational challenges.
Manage escalations and ensure timely internal communication for resolution.
Create runbooks and technical documentation to enhance team efficiency.
Maintain client operations documentation for database-related activities and processes.

Requirements:
Strong verbal and written communication skills in English.
Good understanding of MongoDB database systems and architecture.
Familiarity with Linux operating systems and cloud infrastructure.
Knowledge of database performance tuning and query optimization.
Ability to work effectively in a fast-paced, operational environment.
Strong teamwork and problem-solving abilities.

Preferred Qualifications:
B.Tech/M.Tech or any equivalent degree.
Knowledge of SQL and related database technologies.
Experience with database monitoring and management tools.
Certifications in MongoDB, Linux, or cloud platforms.
Prior experience in customer support or technical operations roles.

What We Offer:
Competitive salary and benefits package.
Opportunity to work with a dynamic and innovative team.
Professional growth and development opportunities.
Collaborative and inclusive work environment.

Job Details:
Job Type: Full-time
Work Time: Rotational shift
Mode of Employment: Work from office
Experience Required: 1-3 years

Pay: ₹500,000.00 - ₹700,000.00 per year
Benefits: Provident Fund
Schedule: Rotational shift
Supplemental Pay: Performance bonus
Ability to commute/relocate: Pondicherry, Puducherry: Reliably commute or planning to relocate before starting work (Required)
Education: Bachelor's (Required)
Experience: MongoDB Administrator: 1 year (Required)
Location: Pondicherry, Puducherry (Required)
Shift availability: Day Shift (Required), Night Shift (Required), Overnight Shift (Required)
Work Location: In person
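A typical query-optimization task in this role looks like the sketch below: inspect a slow query's plan with explain() and add a supporting index using pymongo. The connection string, collection, and field names are hypothetical.

```python
# Minimal MongoDB query-tuning sketch with pymongo (illustrative).
# Assumes: pip install pymongo, and a reachable MongoDB instance.
from pymongo import MongoClient, ASCENDING

client = MongoClient("mongodb://localhost:27017")  # hypothetical deployment
orders = client["shop"]["orders"]

query = {"customer_id": 42, "status": "pending"}

# 1. Inspect the plan: a COLLSCAN stage means every document is scanned.
plan = orders.find(query).explain()
print(plan["queryPlanner"]["winningPlan"])

# 2. Add a compound index matching the query's equality predicates.
orders.create_index([("customer_id", ASCENDING), ("status", ASCENDING)])

# 3. Re-run explain(): the winning plan should now use an IXSCAN stage.
plan = orders.find(query).explain()
print(plan["queryPlanner"]["winningPlan"])
```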

Posted 4 days ago

Apply

8.0 years

7 - 10 Lacs

Hyderābād

On-site

Job Description Summary

As an employee at Thomson Reuters, you will play a role in shaping and leading the global knowledge economy. Our technology drives global markets and helps professionals around the world make decisions that matter. As the world's leading provider of intelligent information, we want your unique perspective to create the solutions that advance our business - and your career.

About the Role

As a Senior DevOps Engineer, you will be responsible for building and supporting the AWS infrastructure used to host a platform offering audit solutions. This engineer constantly looks to optimize systems and services for security, automation, and performance/availability, while ensuring that the solutions developed adhere and align to architecture standards. This individual is responsible for ensuring that technology systems and related procedures adhere to organizational values, and will also assist developers with technical issues in the initiation, planning, and execution phases of projects. These activities include: the definition of needs, benefits, and technical strategy; research and development within the project life cycle; technical analysis and design; and support of operations staff in executing, testing, and rolling out the solutions.

This role will be responsible for:
Planning, deploying, and maintaining critical business applications in prod/non-prod AWS environments.
Designing and implementing appropriate environments for those applications, engineering suitable release management procedures, and providing production support.
Influencing broader technology groups in adopting cloud technologies, processes, and best practices.
Driving improvements to processes and designing enhancements to automation to continuously improve production environments.
Maintaining and contributing to our knowledge base and documentation.
Providing leadership, technical support, user support, technical orientation, and technical education activities to project teams and staff.
Managing change requests between development, staging, and production environments.
Provisioning and configuring hardware, peripherals, services, settings, directories, storage, etc. in accordance with standards and project/operational requirements.
Performing daily system monitoring: verifying the integrity and availability of all hardware, server resources, systems, and key processes; reviewing system and application logs; and verifying completion of automated processes.
Performing ongoing performance tuning, infrastructure upgrades, and resource optimization as required.
Providing Tier II support for incidents and requests from various constituencies.
Investigating and troubleshooting issues.
Researching, developing, and implementing innovative and, where possible, automated approaches to system administration tasks.

About You

You are a fit for the Senior DevOps Engineer role if your background includes:

Required:
8+ years at a senior DevOps level.
Knowledge of the Azure / AWS cloud platforms: S3, CloudFront, CloudFormation, RDS, OpenSearch, ActiveMQ.
Knowledge of CI/CD, preferably with AWS developer tools.
Scripting knowledge, preferably in Python, Bash, or PowerShell.
Contributions as a DevOps engineer responsible for planning, building, and deploying cloud-based solutions.
Knowledge of building and deploying containers / Kubernetes (exposure to AWS EKS is preferable).
Knowledge of Infrastructure as Code tools such as Bicep, Terraform, or Ansible.
Knowledge of GitHub Actions, PowerShell, and GitOps.

Nice to have:
Experience building and deploying .NET Core / Java-based solutions.
Strong understanding of an API-first strategy.
Knowledge of, and some experience implementing, a testing strategy in a continuous-deployment environment.
Ownership and operation of continuous delivery / deployment.
Experience setting up monitoring tools and disaster recovery plans to ensure business continuity.

#LI-AM1

What's in it For You?

Hybrid Work Model: We've adopted a flexible hybrid working environment (2-3 days a week in the office depending on the role) for our office-based roles while delivering a seamless experience that is digitally and physically connected.

Flexibility & Work-Life Balance: Flex My Way is a set of supportive workplace policies designed to help manage personal and professional responsibilities, whether caring for family, giving back to the community, or finding time to refresh and reset. This builds upon our flexible work arrangements, including work from anywhere for up to 8 weeks per year, empowering employees to achieve a better work-life balance.

Career Development and Growth: By fostering a culture of continuous learning and skill development, we prepare our talent to tackle tomorrow's challenges and deliver real-world solutions. Our Grow My Way programming and skills-first approach ensures you have the tools and knowledge to grow, lead, and thrive in an AI-enabled future.

Industry Competitive Benefits: We offer comprehensive benefit plans to include flexible vacation, two company-wide Mental Health Days off, access to the Headspace app, retirement savings, tuition reimbursement, employee incentive programs, and resources for mental, physical, and financial wellbeing.

Culture: Globally recognized, award-winning reputation for inclusion and belonging, flexibility, work-life balance, and more. We live by our values: Obsess over our Customers, Compete to Win, Challenge (Y)our Thinking, Act Fast / Learn Fast, and Stronger Together.

Social Impact: Make an impact in your community with our Social Impact Institute. We offer employees two paid volunteer days off annually and opportunities to get involved with pro-bono consulting projects and Environmental, Social, and Governance (ESG) initiatives.

Making a Real-World Impact: We are one of the few companies globally that helps its customers pursue justice, truth, and transparency. Together, with the professionals and institutions we serve, we help uphold the rule of law, turn the wheels of commerce, catch bad actors, report the facts, and provide trusted, unbiased information to people all over the world.

About Us

Thomson Reuters informs the way forward by bringing together the trusted content and technology that people and organizations need to make the right decisions. We serve professionals across legal, tax, accounting, compliance, government, and media. Our products combine highly specialized software and insights to empower professionals with the data, intelligence, and solutions needed to make informed decisions, and to help institutions in their pursuit of justice, truth, and transparency. Reuters, part of Thomson Reuters, is a world leading provider of trusted journalism and news. We are powered by the talents of 26,000 employees across more than 70 countries, where everyone has a chance to contribute and grow professionally in flexible work environments.
At a time when objectivity, accuracy, fairness, and transparency are under attack, we consider it our duty to pursue them. Sound exciting? Join us and help shape the industries that move society forward. As a global business, we rely on the unique backgrounds, perspectives, and experiences of all employees to deliver on our business goals. To ensure we can do that, we seek talented, qualified employees in all our operations around the world regardless of race, color, sex/gender, including pregnancy, gender identity and expression, national origin, religion, sexual orientation, disability, age, marital status, citizen status, veteran status, or any other protected classification under applicable law. Thomson Reuters is proud to be an Equal Employment Opportunity Employer providing a drug-free workplace. We also make reasonable accommodations for qualified individuals with disabilities and for sincerely held religious beliefs in accordance with applicable law. More information on requesting an accommodation here . Learn more on how to protect yourself from fraudulent job postings here . More information about Thomson Reuters can be found on thomsonreuters.com.
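As a flavor of the daily-monitoring duties this role lists, here is a minimal boto3 sketch that checks EC2 instance health in an AWS account; the region is illustrative, and credentials are assumed to come from the environment.

```python
# Minimal EC2 health-check sketch with boto3 (illustrative).
# Assumes: pip install boto3, and AWS credentials in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # illustrative region

# IncludeAllInstances=True also reports stopped/impaired instances.
resp = ec2.describe_instance_status(IncludeAllInstances=True)

problems = []
for st in resp["InstanceStatuses"]:
    state = st["InstanceState"]["Name"]
    sys_ok = st["SystemStatus"]["Status"]
    inst_ok = st["InstanceStatus"]["Status"]
    if state != "running" or sys_ok != "ok" or inst_ok != "ok":
        problems.append((st["InstanceId"], state, sys_ok, inst_ok))

if problems:
    # A real setup would page on-call or open an incident ticket here.
    for inst_id, state, sys_ok, inst_ok in problems:
        print(f"ATTENTION {inst_id}: state={state} system={sys_ok} instance={inst_ok}")
else:
    print("all instances healthy")
```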

Posted 4 days ago

Apply

5.0 years

8 - 10 Lacs

Hyderābād

On-site

About FactSet

FactSet creates flexible, open data and software solutions for over 200,000 investment professionals worldwide, providing instant access to financial data and analytics that investors use to make crucial decisions. At FactSet, our values are the foundation of everything we do. They express how we act and operate, serve as a compass in our decision-making, and play a big role in how we treat each other, our clients, and our communities. We believe that the best ideas can come from anyone, anywhere, at any time, and that curiosity is the key to anticipating our clients' needs and exceeding their expectations.

Your Team's Impact

FactSet is seeking an experienced software development engineer with proven proficiency in deploying software adhering to best practices, and with fluency in the development environment and related tools, code libraries, and systems. You will be responsible for the entire development process and will collaborate to create a theoretical design. We expect a demonstrated ability to critique code and production for improvement, as well as to receive and apply feedback effectively, and a proven ability to maintain expected levels of productivity while becoming increasingly independent as a software developer, requiring less direct engagement and oversight from one's manager on a day-to-day basis. The focus is on developing applications, testing and maintaining software, and the implementation details of development; increasing the volume of work accomplished (with consistent quality, stability, and adherence to best practices); gaining mastery of the products to which one contributes; and beginning to participate in forward design discussions for how to improve based on one's observations of the code, systems, and production involved. Software developers provide project leadership and technical guidance along every stage of the software development life cycle.

What You'll Do
Work on the Data Lake/DAM platform handling millions of documents annually.
Focus on developing new features while supporting and maintaining existing systems, ensuring the platform's continuous improvement.
Participate in weekly on-call support to address urgent queries and issues in common communication channels, ensuring operational reliability and user satisfaction.
Create comprehensive design documents for major architectural changes and facilitate peer reviews to ensure quality and alignment with best practices.
Collaborate with product managers and key stakeholders to thoroughly understand requirements and propose strategic solutions, leveraging cross-functional insights.
Actively participate in technical discussions with principal engineers and architects to support proposed design solutions, fostering a collaborative engineering environment.
Work effectively as part of a geographically diverse team, coordinating with other departments and offices for seamless project progression.

What We're Looking For
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
5+ years of experience in software development, with a focus on database systems handling and operations, including:
Writing and optimizing complex SQL queries, stored procedures, views, and triggers.
Developing and maintaining database schemas and structures.
Creating ETL pipelines for data ingestion and transformation.
Troubleshooting data issues and performance bottlenecks.
Mentoring junior developers.
Proven experience working with APIs, ensuring robust connectivity and integration across the system.
Working experience with AWS services such as Lambda, EC2, S3, and AWS Glue is beneficial for cloud-based operations and deployments.
Strong analytical and problem-solving skills, critical for developing innovative solutions and optimizing existing platform components.
Excellent collaborative and communication skills, enabling effective interaction with geographically diverse teams and key stakeholders.
Capability to address system queries and provide weekly on-call support, ensuring system reliability and user satisfaction.
Ability to prioritize and manage work effectively in a fast-paced environment, demonstrating self-direction and resourcefulness.

Desired Skills:
Deep RDBMS knowledge (e.g., SQL Server, Oracle, PostgreSQL).
Strong T-SQL/PL/SQL scripting.
Query tuning and performance optimization, including execution plan analysis.
Indexing strategies, partitioning, and table optimization.
Stored procedures, functions, views, and triggers.
Data modeling and DWH concepts, including logical and physical data modeling and normalization/denormalization.

What's In It for You

At FactSet, our people are our greatest asset, and our culture is our biggest competitive advantage. Being a FactSetter means:
The opportunity to join an S&P 500 company with over 45 years of sustainable growth powered by the entrepreneurial spirit of a start-up.
Support for your total well-being. This includes health, life, and disability insurance, as well as retirement savings plans and a discounted employee stock purchase program, plus paid time off for holidays, family leave, and company-wide wellness days.
Flexible work accommodations. We value work/life harmony and offer our employees a range of accommodations to help them achieve success both at work and in their personal lives.
A global community dedicated to volunteerism and sustainability, where collaboration is always encouraged, and individuality drives solutions.
Career progression planning with dedicated time each month for learning and development.
Business Resource Groups open to all employees that serve as a catalyst for connection, growth, and belonging.

Learn more about our benefits here. Salary is just one component of our compensation package and is based on several factors including but not limited to education, work experience, and certifications.

Company Overview: FactSet (NYSE:FDS | NASDAQ:FDS) helps the financial community to see more, think bigger, and work better. Our digital platform and enterprise solutions deliver financial data, analytics, and open technology to more than 8,200 global clients, including over 200,000 individual users. Clients across the buy-side and sell-side, as well as wealth managers, private equity firms, and corporations, achieve more every day with our comprehensive and connected content, flexible next-generation workflow solutions, and client-centric specialized support. As a member of the S&P 500, we are committed to sustainable growth and have been recognized among the Best Places to Work in 2023 by Glassdoor as a Glassdoor Employees' Choice Award winner. Learn more at www.factset.com and follow us on X and LinkedIn.

Diversity: At FactSet, we celebrate diversity of thought, experience, and perspective. We are committed to disrupting bias and a transparent hiring process. All qualified applicants will be considered for employment regardless of race, color, ancestry, ethnicity, religion, sex, national origin, gender expression, sexual orientation, age, citizenship, marital status, disability, gender identity, family status or veteran status. FactSet participates in E-Verify.

Return to Work: Returning from a break? We are here to support you! If you have taken time out of the workforce and are looking to return, we encourage you to apply and chat with our recruiters about our available support to help you relaunch your career.
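The query tuning and execution-plan analysis listed in the desired skills can be demonstrated even with SQLite from the Python standard library. This minimal sketch compares a query plan before and after adding an index; the table and column names are made up for the example.

```python
# Minimal query-tuning sketch: execution plans before/after indexing.
# Uses only the Python standard library (sqlite3); schema is illustrative.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER PRIMARY KEY, symbol TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO trades (symbol, qty) VALUES (?, ?)",
    [("AAPL" if i % 3 else "MSFT", i) for i in range(10_000)],
)

query = "SELECT COUNT(*), SUM(qty) FROM trades WHERE symbol = ?"

def show_plan(label: str) -> None:
    plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("AAPL",)).fetchall()
    print(label, [row[-1] for row in plan])  # last column holds the plan detail

show_plan("before:")  # expect a full-table SCAN
conn.execute("CREATE INDEX idx_trades_symbol ON trades(symbol)")
show_plan("after:")   # expect SEARCH ... USING INDEX idx_trades_symbol

print(conn.execute(query, ("AAPL",)).fetchone())
```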

Posted 4 days ago

Apply

4.0 years

5 - 8 Lacs

Hyderābād

On-site

Job Description

Manager, SAP Basis

The Opportunity

Based in Hyderabad, join a global healthcare biopharma company and be part of a 130-year legacy of success backed by ethical integrity, forward momentum, and an inspiring mission to achieve new milestones in global healthcare. Be part of an organisation driven by digital technology and data-backed approaches that support a diversified portfolio of prescription medicines, vaccines, and animal health products. Drive innovation and execution excellence. Be a part of a team with passion for using data, analytics, and insights to drive decision-making, and which creates custom software, allowing us to tackle some of the world's greatest health threats.

Our Technology Centers focus on creating a space where teams can come together to deliver business solutions that save and improve lives. An integral part of our company's IT operating model, Tech Centers are globally distributed locations where each IT division has employees to enable our digital transformation journey and drive business outcomes. These locations, in addition to the other sites, are essential to supporting our business and strategy. A focused group of leaders in each Tech Center helps to ensure we can manage and improve each location, from investing in the growth, success, and well-being of our people, to making sure colleagues from each IT division feel a sense of belonging, to managing critical emergencies. And together, we must leverage the strength of our team to collaborate globally to optimize connections and share best practices across the Tech Centers.

Role Overview

As a SAP Basis resource, the candidate will work with a globally diverse set of teams that includes Systems, Applications, and Products (SAP) Basis, Security, Advanced Business Application Programming (ABAP), and SAP functional team members, the infrastructure team, and other IT process partners providing support for existing and new initiatives. The candidate will work closely with and advise the SAP Technical Architect on architectural topics and new applications/technologies to be introduced. The candidate will lead some cross-functional projects, be relied upon to answer complex questions, and assist with program-wide initiatives; it is expected that the candidate can lead technical initiatives and be hands-on.

What will you do in this role:
Work closely with and manage the planning, execution, and delivery of activities to be performed by our service provider.
Establish SAP-specific technical standards, technical strategy, and best practices to support implementation teams by working closely with them.
Support product-mapping exercises based on detailed analysis of requirements, and produce SAP-specific solution specifications.
Provide direction to Enterprise Resource Planning (ERP) Center of Excellence teams for solution realization; the candidate should be able to invite themselves to discussions that are relevant from a technical point of view.
Lead project teams by working in partnership with Project Managers.
Review infrastructure requirements, system landscape design, and security integrations for required SAP modules.
Evaluate SAP products offered by SAP and/or 3rd-party providers as part of ongoing and future product strategy.
Work within, and lead, technical teams on issue resolution and performance tuning.
Install, manage, patch, and upgrade SAP applications; perform SAP client and system refreshes.
Apply an understanding of DB administration, along with DB replication and HA/DR processes.
Perform SAP system monitoring and schedule jobs.
Provide SAP product-specific guidance in integration and data architecture areas.

What should you have:
Bachelor's degree in Information Technology, Computer Science, or any technology stream.
4+ years of experience implementing SAP projects, with hands-on Basis experience covering: SAP S/4HANA or ECC, BW, PI, Portals, Solution Manager, system upgrades, system/client refreshes, Disaster Recovery, and High Availability.
Experience reviewing infrastructure requirements, system landscape design, and security integrations for required SAP modules.
Experience evaluating SAP products offered by SAP and/or 3rd-party providers as part of ongoing and future product strategy.
Ability to work within and lead technical teams on issue resolution and performance tuning.
Experience installing, managing, patching, and upgrading SAP applications, and performing SAP client and system refreshes.
Understanding of DB administration, along with DB replication and HA/DR processes.
Ability to work closely with SAP, vendors, and peer groups to ensure we are staying current with ERP strategy and technology.
Experience performing SAP system monitoring and scheduling jobs.
SAP Basis experience working on SAP S/4HANA deployments on cloud platforms (for example: AWS, GCP, or Azure).

Our technology teams operate as business partners, proposing ideas and innovative solutions that enable new organizational capabilities. We collaborate internationally to deliver services and solutions that help everyone be more productive and enable innovation.

Who we are: We are known as Merck & Co., Inc., Rahway, New Jersey, USA in the United States and Canada and MSD everywhere else. For more than a century, we have been inventing for life, bringing forward medicines and vaccines for many of the world's most challenging diseases. Today, our company continues to be at the forefront of research to deliver innovative health solutions and advance the prevention and treatment of diseases that threaten people and animals around the world.

What we look for: Imagine getting up in the morning for a job as important as helping to save and improve lives around the world. Here, you have that opportunity. You can put your empathy, creativity, digital mastery, or scientific genius to work in collaboration with a diverse group of colleagues who pursue and bring hope to countless people who are battling some of the most challenging diseases of our time. Our team is constantly evolving, so if you are among the intellectually curious, join us - and start making your impact today.

#HYDIT2025

Current Employees apply HERE
Current Contingent Workers apply HERE

Search Firm Representatives, Please Read Carefully: Merck & Co., Inc., Rahway, NJ, USA, also known as Merck Sharp & Dohme LLC, Rahway, NJ, USA, does not accept unsolicited assistance from search firms for employment opportunities. All CVs/resumes submitted by search firms to any employee at our company without a valid written search agreement in place for this position will be deemed the sole property of our company. No fee will be paid in the event a candidate is hired by our company as a result of an agency referral where no pre-existing agreement is in place. Where agency agreements are in place, introductions are position-specific. Please, no phone calls or emails.

Employee Status: Regular
Relocation:
VISA Sponsorship:
Travel Requirements:
Flexible Work Arrangements: Hybrid
Shift:
Valid Driving License:
Hazardous Material(s):
Required Skills: Emerging Technologies, Hiring Management, Management Process, Methods and Tools, Process Management, Program Implementation, Requirements Management, SAP HCM, Software Development, Software Development Life Cycle (SDLC), Solution Architecture, Strategic Planning, System Designs, Technical Advice
Preferred Skills:
Job Posting End Date: 07/09/2025
A job posting is effective until 11:59:59 PM on the day before the listed job posting end date. Please ensure you apply to a job posting no later than the day before the job posting end date.
Requisition ID: R351479

Posted 4 days ago

Apply

7.0 years

15 Lacs

Hyderābād

On-site

Role Overview: We are seeking a Senior GCP Cloud Engineer with strong experience in on-prem infrastructure environments and hands-on expertise in migrating and integrating workloads to Google Cloud Platform (GCP). This role will play a key part in bridging legacy systems with modern cloud services, ensuring secure, scalable, and reliable deployments.

Key Responsibilities:
Design and implement GCP cloud infrastructure with a focus on integrating with and migrating from on-premises environments.
Lead or support lift-and-shift, replatforming, and hybrid cloud initiatives involving compute, storage, and networking.
Work with engineering, security, and infrastructure teams to assess current-state environments and create GCP-native architecture.
Deploy and manage cloud resources using Terraform, Deployment Manager, or the gcloud CLI.
Implement secure hybrid connectivity solutions such as Cloud VPN, Interconnect, or Partner Interconnect.
Support GCP IAM, service accounts, VPCs, firewalls, logging, monitoring, and cloud storage strategies.
Perform system-level troubleshooting and performance tuning for GCP-hosted services.
Maintain CI/CD pipelines and support automation in infrastructure provisioning.
Document infrastructure designs, configurations, and cloud operations best practices.

Required Skills and Qualifications:
7+ years of experience in systems engineering or cloud infrastructure.
2-3+ years of experience working with Google Cloud Platform (GCP) in production environments.
Hands-on experience with on-premises infrastructure, including virtualization (VMware, Hyper-V), bare-metal servers, and legacy network/storage configurations.
Solid understanding of cloud-native and hybrid architectures.
Experience with VM migration from on-prem to GCP using Migrate for Compute Engine, Velostrata, or other tools.
Proficiency with Terraform, the gcloud CLI, and shell scripting.
Familiarity with Linux administration, Windows Server, and Active Directory integration with GCP.
Working knowledge of DevOps practices, CI/CD pipelines, and containerization (Docker; GKE preferred).

Preferred Qualifications:
GCP certification (Associate Cloud Engineer or Professional Cloud Architect).
Experience with other cloud platforms (AWS, Azure) is a plus.
Exposure to Kubernetes, Cloud Run, Cloud Functions, and microservices architecture.
Background in security hardening, compliance, or cloud governance frameworks.

Soft Skills:
Strong analytical, problem-solving, and documentation skills.
Effective communicator and collaborator across cross-functional teams.
Self-driven, with the ability to lead initiatives and mentor junior engineers.

Job Type: Full-time
Pay: From ₹1,500,000.00 per year
Experience: Google Cloud Platform: 4 years (Required); VMware: 5 years (Preferred)
Work Location: In person
Speak with the employer: +91 6303562768
Application Deadline: 14/06/2025
Expected Start Date: 24/06/2025
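The provisioning-and-automation side of this role often starts with scripting around the gcloud CLI. Below is a minimal sketch that inventories Compute Engine instances via gcloud's JSON output; the project ID is hypothetical, and the gcloud CLI is assumed to be installed and authenticated.

```python
# Minimal GCP instance-inventory sketch via the gcloud CLI (illustrative).
# Assumes gcloud is installed and authenticated (gcloud auth login).
import json
import subprocess

PROJECT = "my-hybrid-project"  # hypothetical project ID

out = subprocess.run(
    ["gcloud", "compute", "instances", "list",
     "--project", PROJECT, "--format", "json"],
    check=True, capture_output=True, text=True,
).stdout

for inst in json.loads(out):
    # Zone and machine type come back as full resource URLs; keep the tail.
    zone = inst["zone"].rsplit("/", 1)[-1]
    mtype = inst["machineType"].rsplit("/", 1)[-1]
    print(f'{inst["name"]:<30} {zone:<15} {mtype:<15} {inst["status"]}')
```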

Posted 4 days ago

Apply

3.0 years

6 - 9 Lacs

Hyderābād

On-site

Application Operations Engineer

Hyderabad, India | Information Technology | 311052

Job Description

About The Role: Grade Level (for internal use): 09

The Team: A diverse and responsible team working on multiple applications and providing application support across two shifts, ready to take on challenges across multiple technologies and eager for anything new.

Responsibilities:
Gather and analyze metrics from operating systems as well as applications to assist in performance tuning and fault finding.
Partner with development teams to improve services through rigorous testing and release procedures.
Participate in system design consulting, platform management, and capacity planning.
Create sustainable systems and services through automation.
Balance feature development speed and reliability with well-defined service-level objectives.
Work day-to-day with other teams, such as the infrastructure team, on related issues.
Build and document automation processes for Infrastructure as a Service / Infrastructure as Code.
Handle backup and patch management.
Perform root-cause analysis of all issues, with a deep interest in finding permanent resolutions.
Coordinate with all other teams involved in user-related issues.

What We're Looking For:
Bachelor's degree (or equivalent) in Computer Science or a related discipline, with at least 3+ years of experience.
A proactive approach to identifying problems, performance bottlenecks, and areas for improvement.
Strong interpersonal skills and analytical and problem-solving ability, along with strong written and verbal communication.
Ability to communicate ideas in both technical and non-technical ways.
A strong capacity for teamwork and a sense of ownership; able to work independently and be self-driven.
Hands-on experience with Linux servers, AD, LDAP, DNS, network storage, and AWS compute services (EC2, FSx, Managed AD, Route 53, etc.).
Ability to program and script with tools or languages such as PowerShell, Python, Ansible, Terraform, and Bash.
Familiarity with ITSM processes like incident, problem, and change management using ServiceNow (preferable).

The Location: Hyderabad, India
Grade: 09 (Software Engineer - Application Operations)
Hybrid Model: 4 days a week of work from the office is mandatory.
Shift Time: 6:30 am to 2:30 pm IST / 2:30 pm to 11 pm IST

About S&P Global Ratings

At S&P Global Ratings, our analyst-driven credit ratings, research, and sustainable finance opinions provide critical insights that are essential to translating complexity into clarity so market participants can uncover opportunities and make decisions with conviction. By bringing transparency to the market through high-quality independent opinions on creditworthiness, we enable growth across a wide variety of organizations, including businesses, governments, and institutions. S&P Global Ratings is a division of S&P Global (NYSE: SPGI). S&P Global is the world's foremost provider of credit ratings, benchmarks, analytics and workflow solutions in the global capital, commodity and automotive markets. With every one of our offerings, we help many of the world's leading organizations navigate the economic landscape so they can plan for tomorrow, today. For more information, visit www.spglobal.com/ratings

What's In It For You?

Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology - the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day.
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: We're more than 35,000 strong worldwide—so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all. From finding new ways to measure sustainability to analyzing energy transition across the supply chain to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference. Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you—and your career—need to thrive at S&P Global. Our benefits include: Health & Wellness: Health care coverage designed for the mind and body. Flexible Downtime: Generous time off helps keep you energized for your time on. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards—small perks can make a big difference. For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. S&P Global has a Securities Disclosure and Trading Policy (“the Policy”) that seeks to mitigate conflicts of interest by monitoring and placing restrictions on personal securities holding and trading. The Policy is designed to promote compliance with global regulations. In some Divisions, pursuant to the Policy’s requirements, candidates at S&P Global may be asked to disclose securities holdings. 
Some roles may include a trading prohibition and remediation of positions when there is an effective or potential conflict of interest. Employment at S&P Global is contingent upon compliance with the Policy.

Equal Opportunity Employer: S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories - United States of America), IFTECH202.1 - Middle Professional Tier I (EEO Job Group)

Job ID: 311052
Posted On: 2025-06-10
Location: Hyderabad, Telangana, India
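The first responsibility this posting lists, gathering and analyzing OS metrics for performance tuning, can be prototyped with the psutil library. This is a minimal sketch; the thresholds are arbitrary examples, and a real setup would ship these metrics to a monitoring backend rather than print them.

```python
# Minimal OS-metrics gathering sketch for performance triage (illustrative).
# Assumes: pip install psutil
import psutil

# Sample basic host metrics; interval=1 averages CPU over one second.
cpu = psutil.cpu_percent(interval=1)
mem = psutil.virtual_memory().percent
disk = psutil.disk_usage("/").percent

metrics = {"cpu_percent": cpu, "mem_percent": mem, "disk_percent": disk}
print(metrics)

# Flag anything over an example threshold; real alerting would go through a
# monitoring backend (e.g., CloudWatch, Prometheus) instead of prints.
THRESHOLD = 90.0
for name, value in metrics.items():
    if value > THRESHOLD:
        print(f"WARNING: {name} at {value:.1f}% exceeds {THRESHOLD}%")
```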

Posted 4 days ago

Apply