
18519 Tuning Jobs

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong and thrive.

Your day at NTT DATA
Total Experience: Minimum 7+ years
Educational Qualification: Bachelor's in Engineering (Computers/Electronics/Communication or related field), or Graduate/Post-Graduate in Science/Maths/IT or related streams with relevant technology experience
Location: Mumbai – Goregaon East
Shift: Rotational

What You'll Be Doing

1.1.2 Pre-requisites
Technical expertise on all or any of the following platforms:
- Cisco Catalyst and Nexus switching and wireless
- Arista switching
- HP/Aruba switching and wireless
- Meraki switching and wireless
- Mist wireless
- Infoblox DDI
Strong troubleshooting skill set. Conceptually strong in the following switching and wireless technologies:
- HSRP/VRRP
- STP/VTP
- VSS/vPC
- EtherChannels, stacking of switches (IRF, Cisco StackWise)
- Standalone AP, IAP, FlexConnect, wireless bridge

1.1.3 Responsibilities
- Engineers who have a passion for providing outstanding customer service
- 24x7 support of enterprise networks of large global clients with a distributed LAN/Wireless/DDI setup
- Be part of a team responsible for handling switching/wireless/DDI network operational and problem-management issues
- Ticket resolution: work and resolve trouble tickets, handle ticket escalation
- Queue management: monitor the ticket queue and ensure assignment, resolution and closure
- Create Method of Procedure and/or Standard Operating Procedure documents
- Plan and execute Change Management processes
- Performance tuning of network devices and creation of Service Improvement Plans
- Plan and perform firmware upgrades
- Work with hardware/software vendors to resolve problems
- Train and mentor juniors
- Act as an SME SPOC for certain network products
- Interface with the customer on calls and lead technical meetings
- Assist in Root Cause Analysis (RCA)
- Provide technical inputs for weekly/monthly customer service review reports
- Any additional tasks assigned from time to time based on business needs

1.1.4 Training and Certification
Cisco certification and Aruba/Juniper certification will be an added advantage.

1.1.5 Experience
Minimum 7 years of progressive, relevant experience and proven capability to work in a complex network environment.

1.1.6 Education
Bachelor's in Engineering (Computers/Electronics/Communication or related field), or Graduate/Post-Graduate in Science/Maths/IT or related streams with relevant technology experience.

1.1.7 Other Skills
- Good communication skills, written as well as verbal
- Passion to work on core technology platforms
- ITIL process awareness

Workplace type:

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
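The troubleshooting and performance-tuning duties above are typically scripted rather than run by hand. Purely as an illustrative sketch (not part of the posting), the following Python snippet uses the netmiko library to collect STP, HSRP and EtherChannel state from a list of Cisco IOS switches; the device IPs, credentials and command set are placeholder assumptions.

```python
# Illustrative health-check sweep across Cisco IOS switches using netmiko.
# Hostnames and credentials below are placeholders, not real devices.
from netmiko import ConnectHandler

SWITCHES = ["10.10.1.1", "10.10.1.2"]          # hypothetical device IPs
COMMANDS = [
    "show spanning-tree summary",               # STP state
    "show standby brief",                       # HSRP groups
    "show etherchannel summary",                # port-channel health
]

def health_check(host: str) -> None:
    conn = ConnectHandler(
        device_type="cisco_ios",
        host=host,
        username="netops",                      # placeholder credentials
        password="changeme",
    )
    try:
        for cmd in COMMANDS:
            print(f"\n=== {host}: {cmd} ===")
            print(conn.send_command(cmd))
    finally:
        conn.disconnect()

if __name__ == "__main__":
    for switch in SWITCHES:
        health_check(switch)
```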

Posted 1 week ago

Apply

7.0 years

0 Lacs

Mumbai Metropolitan Region

On-site

Make an impact with NTT DATA
Join a company that is pushing the boundaries of what is possible. We are renowned for our technical excellence and leading innovations, and for making a difference to our clients and society. Our workplace embraces diversity and inclusion – it's a place where you can grow, belong and thrive.

Your day at NTT DATA
BE/BTech with 7 years of experience in Linux administration for enterprise clients.

Troubleshooting
- Investigate and resolve complex Linux-related issues (e.g., system crashes, performance degradation).
- Analyze logs (e.g., syslogs, kernel logs) to identify root causes.
- Utilize debugging tools (e.g., gdb, strace) to troubleshoot applications.

System Administration
- Manage Linux systems (RHEL, CentOS, Ubuntu) across multiple environments (dev, prod).
- Configure and maintain Linux services (e.g., SSH, DNS, DHCP).
- Implement security measures (e.g., firewalls, access controls).

Performance Optimization
- Monitor system performance using tools (e.g., top, htop, sar).
- Optimize system resources (CPU, memory, disk) for improved performance.
- Implement tuning parameters for enhanced system efficiency.

Scripting and Automation
- Develop scripts (Bash, Python) for automation and efficiency.

Security and Compliance
- Ensure system security and compliance with regulatory requirements.
- Implement security patches and updates.

What You'll Be Doing

Key Responsibilities:
- Ensures that assigned infrastructure at the client site is configured, installed, tested, and operational.
- Performs necessary checks, applies monitoring tools and responds to alerts.
- Identifies problems and errors before or as they occur and logs all such incidents in a timely manner with the required level of detail.
- Assists in analyzing, assigning, and escalating support calls.
- Investigates assigned third-line support calls and identifies the root cause of incidents and problems.
- Reports and escalates issues to third-party vendors if necessary.
- Provides continuous feedback to clients and affected parties and updates all systems and/or portals as prescribed by the company.
- Proactively identifies opportunities for work optimization, including opportunities for automation of work.
- Coaches L2 teams on advanced technical troubleshooting and behavioural skills.
- May manage and implement projects within the technology domain, delivering effectively and promptly per client-agreed requirements and timelines.
- May work on implementing and delivering Disaster Recovery functions and tests.
- Performs any other related task as required.

Knowledge and Attributes:
- Ability to communicate and work across different cultures and social groups.
- Ability to plan activities and projects well in advance, taking into account possible changing circumstances.
- Ability to maintain a positive outlook at work.
- Ability to work well in a pressurized environment.
- Ability to work hard and put in longer hours when necessary.
- Ability to apply active listening techniques such as paraphrasing the message to confirm understanding, probing for further relevant information, and refraining from interrupting.
- Ability to adapt to changing circumstances.
- Ability to place clients at the forefront of all interactions, understanding their requirements, and creating a positive client experience throughout the total client journey.

Academic Qualifications and Certifications:
- Bachelor's degree or equivalent qualification in IT/Computing (or demonstrated equivalent work experience).
- Certifications relevant to the services provided (certifications carry additional weight in a candidate's qualification for the role). Relevant certifications include (but are not limited to): CCNP or equivalent certification; CCNP Security, PCNSE or a firewall-vendor certification is good to have, along with advanced technical certifications such as CCIE or CISSP; VMware Certified Professional: Data Center Virtualization; VMware Certified Specialist – Cloud Provider; VMware Site Recovery Manager: Install, Configure, Manage; Microsoft Certified: Azure Architect Expert; AWS Certified Solutions Architect Associate; Veeam Certified Engineer (VMCE); Rubrik Certified Systems Administrator; Zerto, Pure, VxRail; Google Cloud Platform (GCP); Oracle Cloud Infrastructure (OCI); SAP Certified Technology Associate – OS DB Migration for SAP NetWeaver 7.4; SAP Technology Consultant; SAP Certified Technology Associate – SAP HANA 2.0; Oracle Cloud Infrastructure Architect Professional; IBM Certified System Administrator – WebSphere Application Server Network.

Required Experience:
- Seasoned Managed Services experience handling complex cross-technology infrastructure.
- Seasoned experience in an Engineering function within a medium to large ICT organization.
- Seasoned working knowledge of ITIL processes.
- Seasoned experience working with vendors and/or third parties.

Workplace type: On-site Working

About NTT DATA
NTT DATA is a $30+ billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. We invest over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure, and connectivity. We are also one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group and headquartered in Tokyo.

Equal Opportunity Employer
NTT DATA is proud to be an Equal Opportunity Employer with a global culture that embraces diversity. We are committed to providing an environment free of unfair discrimination and harassment. We do not discriminate based on age, race, colour, gender, sexual orientation, religion, nationality, disability, pregnancy, marital status, veteran status, or any other protected category. Join our growing global team and accelerate your career with us. Apply today.
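The scripting and monitoring duties described above often start with small checks like the one below. This is a minimal, illustrative Python sketch using only the standard library; the log path, threshold and syslog field position are assumptions for demonstration (RHEL systems typically log to /var/log/messages instead).

```python
# Illustrative Linux health/log check of the kind this role automates.
import re
import shutil
from collections import Counter

LOG_FILE = "/var/log/syslog"        # assumed log location
DISK_THRESHOLD = 0.85               # warn when a filesystem is >85% full

def check_disk(path: str = "/") -> None:
    usage = shutil.disk_usage(path)
    used_ratio = usage.used / usage.total
    status = "WARN" if used_ratio > DISK_THRESHOLD else "OK"
    print(f"[{status}] {path} is {used_ratio:.0%} full")

def summarise_errors(log_file: str = LOG_FILE, limit: int = 5) -> None:
    pattern = re.compile(r"\b(error|fail|panic)\b", re.IGNORECASE)
    counts = Counter()
    with open(log_file, errors="ignore") as fh:
        for line in fh:
            if pattern.search(line):
                parts = line.split()
                if len(parts) > 4:
                    # 5th whitespace field in classic syslog format is the process name
                    counts[parts[4].rstrip(":")] += 1
    for proc, n in counts.most_common(limit):
        print(f"{n:6d}  {proc}")

if __name__ == "__main__":
    check_disk("/")
    summarise_errors()
```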

Posted 1 week ago

Apply

5.0 years

0 Lacs

Mohali district, India

On-site

About Evervent
Evervent is a technology-driven firm operating in the Insurtech space, offering cloud-based solutions that modernize and streamline insurance workflows. We are passionate about delivering user-centric software and scalable enterprise applications that empower our clients.

We are looking for an experienced and proactive Team Lead – Node.js Developer to take ownership of backend development for scalable, high-performance applications. This role requires a seasoned backend engineer who can lead a team, define best practices, and deliver robust Node.js solutions that support critical business operations. While the primary focus is backend development, experience or familiarity with React.js is a strong plus, especially when collaborating with front-end teams.

Key Responsibilities
* Lead a team of backend developers, ensuring quality, performance, and timely delivery of backend services.
* Architect and develop efficient, reusable, and scalable backend systems using Node.js.
* Design and manage secure, high-performance APIs and backend services.
* Oversee database architecture, performance tuning, and data modeling (SQL/NoSQL).
* Collaborate with front-end, DevOps, and product teams to align technical implementations with business goals.
* Implement and maintain coding standards and documentation.
* Monitor and improve system performance, scalability, and security.
* Conduct code reviews, provide mentorship, and foster a high-performance engineering culture.
* Stay up to date with the latest backend technologies and development trends.

Skills & Qualifications
* 5+ years of backend development experience, primarily with Node.js.
* Proven experience in leading and mentoring backend or full-stack development teams.
* Strong understanding of asynchronous programming, event-driven architecture, and API development.
* Experience with RESTful APIs, microservices, and backend integration patterns.
* Proficiency in database design and management (e.g., MongoDB, PostgreSQL, MySQL).
* Experience with authentication, authorization, and data protection best practices.
* Proficient in using Git, debugging tools, and modern development workflows.
* Familiarity with Docker, Kubernetes, or other DevOps tools is a plus.
* Knowledge of React.js or front-end integration concepts is a strong advantage.
* Solid communication skills and ability to work in cross-functional teams.

Preferred Attributes
* Strong leadership, project ownership, and problem-solving capabilities.
* Ability to balance hands-on coding with strategic technical oversight.
* Experience in agile/scrum environments and product-oriented development.
* Background in the insurance or fintech sector is beneficial.
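The asynchronous, event-driven style this role calls for is language-agnostic even though the posting targets Node.js. The sketch below shows the same fire-requests-concurrently pattern in Python with asyncio, purely for illustration; the "quote" function, policy IDs and latency are invented stand-ins for real API calls.

```python
# Language-agnostic illustration of concurrent, non-blocking I/O.
import asyncio

async def fetch_quote(policy_id: str) -> dict:
    # Stand-in for a non-blocking call to an insurer's quoting API.
    await asyncio.sleep(0.1)            # simulated network latency
    return {"policy_id": policy_id, "premium": 4200}

async def fetch_quotes(policy_ids: list[str]) -> list[dict]:
    # Fire all requests concurrently instead of awaiting them one by one.
    return await asyncio.gather(*(fetch_quote(pid) for pid in policy_ids))

if __name__ == "__main__":
    quotes = asyncio.run(fetch_quotes(["POL-001", "POL-002", "POL-003"]))
    for q in quotes:
        print(q)
```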

Posted 1 week ago

Apply

7.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Engineering Lead – Backend

Experience: 7-10 years
Salary: Competitive
Preferred Notice Period: 15 days
Shift: 9:00 AM to 6:00 PM IST
Opportunity Type: Hybrid (Mumbai)
Placement Type: Permanent
(*Note: This is a requirement for one of Uplers' clients)
Must-have skills required: Java OR MySQL OR PHP and Laravel OR Node.js OR Python

Blox.xyz (one of Uplers' clients) is looking for an Engineering Lead – Backend who is passionate about their work, eager to learn and grow, and committed to delivering exceptional results. If you are a team player with a positive attitude and a desire to make a difference, then we want to hear from you.

Role Overview
We are looking for a passionate and hands-on Engineering Lead (Backend) to join our growing tech team. You will lead a team of talented backend engineers, driving high-impact projects, architectural decisions, and product scalability. This role is ideal for someone who loves solving complex problems, building scalable APIs, and mentoring engineers, all while contributing to real business outcomes.

Key Responsibilities
- Lead and mentor a team of backend engineers; provide technical guidance, code reviews, and career development.
- Own the design, development, and delivery of robust backend systems and RESTful APIs.
- Collaborate with product managers, frontend engineers, and QA to deliver high-quality features.
- Make architectural decisions for scalability, performance, and reliability.
- Ensure best practices in coding, testing, CI/CD, and deployment are followed.
- Drive adoption of modern engineering principles, tools, and practices.
- Troubleshoot production issues and lead root cause analysis to improve system stability.

Tech Stack
- Languages: Java or PHP (Spring Boot or Laravel)
- Databases: MySQL, MS SQL Server, Redis
- Cloud: AWS
- DevOps: Docker, CI/CD pipelines, monitoring tools (e.g., Prometheus, Grafana)
- Others: REST APIs, microservices architecture, Git

Requirements
- 7+ years of professional software development experience, with at least 2 years in a leadership or mentoring role.
- Deep expertise in the Java or PHP stack (Spring Boot/Laravel).
- Proven experience in designing scalable systems and RESTful APIs.
- Strong understanding of data modeling, caching, performance tuning, and security.
- Experience with version control (Git), CI/CD, and cloud infrastructure.
- Ability to balance technical depth with business needs and timelines.
- Excellent communication and stakeholder management skills.

What We Offer
- Competitive compensation
- Opportunity to revolutionise a real estate CRM that will be used by prominent developers
- Opportunity to influence engineering culture and grow into a senior leadership role
- Flexible hybrid work environment (2 days a week from our Mumbai office)

How to apply for this opportunity (easy 3-step process):
1. Click on Apply and register or log in on our portal.
2. Upload your updated resume and complete the screening form.
3. Increase your chances of getting shortlisted and meet the client for the interview!

About Our Client:
Blox is all set to change the realm of the real estate industry in India through new-age digitalization that sets the standards for the future. As India's first technology-based and consumer-centric real estate buying platform, Blox's mission is to transform the process of buying Indian real estate. It is a fully integrated online system that will address pain points in the real estate market and make transactions in the space seamless.

About Uplers:
Our goal is to make hiring and getting hired reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant product and engineering job opportunities and progress in their career. (Note: There are many more opportunities apart from this on the portal.) So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
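The caching and performance-tuning skills listed above usually boil down to patterns like cache-aside with Redis. The posting's stack is Java/PHP; the sketch below uses Python and the redis-py client purely as an illustration, and the connection details, key names and database loader are placeholder assumptions.

```python
# Cache-aside pattern sketch: read from Redis first, fall back to the database,
# then populate the cache with a TTL. Names below are placeholders.
import json
import redis  # redis-py client

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)
CACHE_TTL_SECONDS = 300

def load_lead_from_db(lead_id: int) -> dict:
    # Stand-in for a SQL query against the CRM database.
    return {"id": lead_id, "name": "Sample Lead", "stage": "site-visit"}

def get_lead(lead_id: int) -> dict:
    key = f"lead:{lead_id}"
    cached = cache.get(key)
    if cached is not None:                      # cache hit
        return json.loads(cached)
    lead = load_lead_from_db(lead_id)           # cache miss: go to the database
    cache.setex(key, CACHE_TTL_SECONDS, json.dumps(lead))
    return lead

if __name__ == "__main__":
    print(get_lead(42))
```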

Posted 1 week ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Your potential, unleashed.
India's impact on the global economy has increased at an exponential rate, and Deloitte presents an opportunity to unleash and realise your potential amongst cutting-edge leaders and organisations shaping the future of the region, and indeed, the world beyond. At Deloitte, you can bring your whole self to work, every day. Combine that with our drive to propel with purpose and you have the perfect playground to collaborate, innovate, grow, and make an impact that matters.

The team
Deloitte helps organizations prevent cyberattacks and protect valuable assets. We believe in being secure, vigilant, and resilient – not only by looking at how to prevent and respond to attacks, but at how to manage cyber risk in a way that allows you to unleash new opportunities. Embed cyber risk at the start of strategy development for more effective management of information and technology risks.

Work you'll do
The Splunk Engineer administers the customer's Splunk Enterprise Security (SIEM) environment end to end. This includes use case development, log source onboarding, custom parser creation, troubleshooting Splunk issues, and upgrading the Splunk environment. The role demands proven expertise in administering a Splunk Enterprise Security (SIEM) environment. The key skills required are as follows:
• Overall experience of at least 3+ years with Splunk Enterprise Security (SIEM)
• Splunk Certified professional, with at least the Splunk Admin certification level preferable
• Good experience in Splunk administration and troubleshooting
• Experience in integration of Splunk with log sources of different types, including but not limited to security devices, network devices, web applications, custom applications, and so on
• Experience in tuning and troubleshooting Splunk premium apps like Enterprise Security, Phantom, and UBA
• Comfortable writing regular expressions to extract fields from custom log sources
• Expertise in developing custom use cases using the Splunk search language to correlate and alert on logs from multiple sources
• Hands-on experience in creating dashboards and reports using SPL queries and XML
• Good knowledge of the information security and IT operations domains
• Proficiency in client and server operating systems, including Linux and Windows
• General networking and system troubleshooting skills (firewalls, routing, NAT, etc.)
• Ability to autonomously prioritize and successfully deliver across a portfolio of projects
• Good consulting skills with the ability to manage client expectations

Your role as AM/DM
We expect our people to embrace and live our purpose by challenging themselves to identify issues that are most important for our clients, our people, and for society. In addition to living our purpose, Senior Executives across our organization must strive to be:
- Inspiring - Leading with integrity to build inclusion and motivation
- Committed to creating purpose - Creating a sense of vision and purpose
- Agile - Achieving high-quality results through collaboration and team unity
- Skilled at building diverse capability - Developing diverse capabilities for the future
- Persuasive / Influencing - Persuading and influencing stakeholders
- Collaborating - Partnering to build new solutions
- Delivering value - Showing commercial acumen
- Committed to expanding business - Leveraging new business opportunities
- Analytical acumen - Leveraging data to recommend impactful approaches and solutions through the power of analysis and visualization
- Effective communication - Able to hold well-structured and well-articulated conversations to achieve win-win possibilities
- Engagement management / delivery excellence - Effectively managing engagements to ensure timely and proactive execution as well as course correction for the success of the engagement
- Managing change - Responding to a changing environment with resilience
- Managing quality & risk - Delivering high-quality results and mitigating risks with utmost integrity and precision
- Strategic thinking & problem solving - Applying a strategic mindset to solve business issues and complex problems
- Tech savvy - Leveraging ethical technology practices to deliver high impact for clients and for Deloitte
- Empathetic leadership and inclusivity - Creating a safe and thriving environment where everyone is valued for who they are, using empathy to understand others and adapting our behaviours and attitudes to become more inclusive

How you'll grow
Connect for impact - Our exceptional team of professionals across the globe are solving some of the world's most complex business problems, as well as directly supporting our communities, the planet, and each other. Know more in our Global Impact Report and our India Impact Report.
Empower to lead - You can be a leader irrespective of your career level. Our colleagues are characterised by their ability to inspire, support, and provide opportunities for people to deliver their best and grow both as professionals and human beings. Know more about Deloitte and our One Young World partnership.
Inclusion for all - At Deloitte, people are valued and respected for who they are and are trusted to add value to their clients, teams and communities in a way that reflects their own unique capabilities. Know more about everyday steps that you can take to be more inclusive. At Deloitte, we believe in the unique skills, attitude and potential each and every one of us brings to the table to make an impact that matters.
Drive your career - At Deloitte, you are encouraged to take ownership of your career. We recognise there is no one-size-fits-all career path, and global, cross-business mobility and up/re-skilling are all within the range of possibilities to shape a unique and fulfilling career. Know more about Life at Deloitte.

Everyone's welcome… entrust your happiness to us
Our workspaces and initiatives are geared towards your 360-degree happiness. This includes specific needs you may have in terms of accessibility, flexibility, safety and security, and caregiving. Here's a glimpse of things that are in store for you.

Interview tips
We want job seekers exploring opportunities at Deloitte to feel prepared, confident and comfortable. To help you with your interview, we suggest that you do your research and know some background about the organisation and the business area you're applying to. Check out recruiting tips from Deloitte professionals.

*Caution against fraudulent job offers*: We would like to advise career aspirants to exercise caution against fraudulent job offers or unscrupulous practices. At Deloitte, ethics and integrity are fundamental and not negotiable. We do not charge any fee or seek any deposits, advance, or money from any career aspirant in relation to our recruitment process. We have not authorized any party or person to collect any money from career aspirants in any form whatsoever for promises of getting jobs in Deloitte or for being considered against roles in Deloitte. We follow a professional recruitment process, provide a fair opportunity to eligible applicants and consider candidates only on merit. No one other than an authorized official of Deloitte is permitted to offer or confirm any job offer from Deloitte. We advise career aspirants to exercise caution. In this regard, you may refer to a more detailed advisory given on our website at: https://www2.deloitte.com/in/en/careers/advisory-for-career-aspirants.html?icid=wn_
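Field extraction from custom log sources, mentioned in the skills above, is essentially regular-expression work before the data ever reaches Splunk. The sketch below is an illustrative Python example with an invented log format; the field names and log line are assumptions, not a real customer source.

```python
# Sketch: pull key=value fields out of a custom application log line.
import re

LOG_LINE = '2024-05-01T10:15:32Z host=web01 user="alice" action=login status=failed src=203.0.113.7'

FIELD_RE = re.compile(r'(?P<key>\w+)=(?P<value>"[^"]*"|\S+)')

def extract_fields(line: str) -> dict:
    fields = {m.group("key"): m.group("value").strip('"') for m in FIELD_RE.finditer(line)}
    fields["_time"] = line.split(" ", 1)[0]     # leading ISO-8601 timestamp
    return fields

if __name__ == "__main__":
    print(extract_fields(LOG_LINE))
    # e.g. {'host': 'web01', 'user': 'alice', 'action': 'login', 'status': 'failed', ...}
```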

Posted 1 week ago

Apply

4.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Job Description:
We are seeking a highly motivated and enthusiastic Senior Data Scientist with over 4 years of experience to join our dynamic team. The ideal candidate will have a strong background in AI/ML analytics and a passion for leveraging data to drive business insights and innovation.

Key Responsibilities:
- Develop and implement machine learning models and algorithms.
- Work closely with project stakeholders to understand requirements and translate them into deliverables.
- Utilize statistical and machine learning techniques to analyze and interpret complex data sets.
- Stay updated with the latest advancements in AI/ML technologies and methodologies.
- Collaborate with cross-functional teams to support various AI/ML initiatives.

Qualifications:
- Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- Strong understanding of machine learning, deep learning and Generative AI concepts.

Preferred Skills:
- Experience in machine learning techniques such as regression, classification, predictive modeling, clustering, deep learning stacks, and NLP using Python.
- Strong knowledge of and experience in Generative AI / LLM-based development.
- Strong experience working with key LLM model APIs (e.g., AWS Bedrock, Azure OpenAI/OpenAI) and LLM frameworks (e.g., LangChain, LlamaIndex).
- Experience with cloud infrastructure for AI/Generative AI/ML on AWS and Azure.
- Expertise in building enterprise-grade, secure data ingestion pipelines for unstructured data, including indexing, search, and advanced retrieval patterns.
- Knowledge of effective text chunking techniques for optimal processing and indexing of large documents or datasets.
- Proficiency in generating and working with text embeddings, with an understanding of embedding spaces and their applications in semantic search and information retrieval.
- Experience with RAG concepts and fundamentals (vector DBs, AWS OpenSearch, semantic search, etc.); expertise in implementing RAG systems that combine knowledge bases with Generative AI models.
- Knowledge of training and fine-tuning foundation models (Anthropic, Claude, Mistral, etc.), including multimodal inputs and outputs.
- Proficiency in Python, TypeScript, NodeJS, ReactJS (and equivalent) and frameworks (e.g., pandas, NumPy, scikit-learn), Glue crawler, ETL.
- Experience with data visualization tools (e.g., Matplotlib, Seaborn, QuickSight).
- Knowledge of deep learning frameworks (e.g., TensorFlow, Keras, PyTorch).
- Experience with version control systems (e.g., Git, CodeCommit).

Good to Have Skills:
- Knowledge of and experience in building knowledge graphs in production.
- Understanding of multi-agent systems and their applications in complex problem-solving scenarios.
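The chunking, embedding and semantic-search skills listed above fit together as a retrieval step. Below is a minimal, illustrative Python sketch: TF-IDF vectors stand in for the LLM embeddings and vector database the posting actually refers to, and the corpus, chunk size and query are invented.

```python
# Minimal retrieval sketch: chunk a document, vectorise chunks, rank by similarity.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def chunk_text(text: str, chunk_size: int = 20, overlap: int = 5) -> list[str]:
    words = text.split()
    step = chunk_size - overlap
    return [" ".join(words[i:i + chunk_size]) for i in range(0, len(words), step)]

document = (
    "Retrieval-augmented generation combines a retriever over a document store "
    "with a generative model. Documents are split into chunks, each chunk is "
    "embedded, and at query time the most similar chunks are passed to the model "
    "as context so answers stay grounded in the source material."
)

chunks = chunk_text(document)
vectorizer = TfidfVectorizer().fit(chunks)
chunk_vectors = vectorizer.transform(chunks)

query = "how does retrieval keep answers grounded?"
scores = cosine_similarity(vectorizer.transform([query]), chunk_vectors)[0]
best = scores.argmax()
print(f"best chunk (score {scores[best]:.2f}): {chunks[best]}")
```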

Posted 1 week ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Overview
We are seeking a skilled EMS System Engineer to join our dynamic team. The ideal candidate will have hands-on experience with GE eTerra Energy Management System (EMS) installations, configuration, and maintenance in Linux environments. You will play a critical role in ensuring seamless installation, validation, troubleshooting, and ongoing support of the EMS platform. Additionally, expertise in scripting and automation using Ansible is good to have for streamlining processes like installation, patching, and system maintenance.

About the Role
The EMS System Engineer will be responsible for the installation, configuration, validation, and support of GE eTerra EMS systems on Linux platforms.

Responsibilities

System Installation & Configuration
- Install and configure GE eTerra EMS systems on Linux platforms.
- Test and validate access to eTerra Habitat to ensure proper functionality.
- Address and resolve any issues during or after installation with quick fixes.

System Validation & Access Management
- Validate system configurations and ensure appropriate data sources are accessible.
- Manage access controls and permissions to maintain system security.

Support & Maintenance
- Provide support to troubleshoot and resolve system issues.
- Support infrastructure patching and apply third-party patches, including tools like Oracle Client.
- Conduct regression testing after patching to ensure system stability and performance.

Automation & Scripting
- Develop and maintain automation scripts using Ansible to streamline system installations, patching, and maintenance tasks.
- Continuously improve automation processes to enhance efficiency and reduce manual intervention.

Qualifications
- Hands-on system engineering experience with GE eTerra EMS, including installation, configuration, and validation on Linux systems.
- Strong expertise in Linux systems administration.
- Proven experience in testing and resolving access and configuration issues in EMS systems.
- Familiarity with data source configuration and validation.
- Experience in managing infrastructure and third-party patching, including tools like Oracle Client, Tomcat, etc.
- Demonstrated experience with production support and handling system-level issues in production environments.
- Strong troubleshooting skills and the ability to quickly identify and resolve technical issues.

Preferred Skills
- Proficiency in Ansible scripting for automation of installations, patching, and maintenance.
- Familiarity with other automation tools or scripting languages like Python or shell scripting.
- Experience with system monitoring and performance tuning for EMS platforms.
- Knowledge of network security and best practices for access management.
- Understanding of power systems and EMS workflows is a plus.

Please share your CV at jitendra.kinkar@talentcorner.in
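A post-patch regression check like the one described under Support & Maintenance is often a short script run on each host. The sketch below is illustrative Python using systemctl via the standard library; the service names are placeholders, not the actual eTerra/EMS process list.

```python
# Illustrative post-patch check: verify key services are active after patching.
import subprocess

SERVICES = ["httpd", "tomcat", "oracle-client-agent"]   # hypothetical services

def service_active(name: str) -> bool:
    result = subprocess.run(
        ["systemctl", "is-active", "--quiet", name],
        check=False,
    )
    return result.returncode == 0

if __name__ == "__main__":
    failed = [svc for svc in SERVICES if not service_active(svc)]
    if failed:
        print("Post-patch check FAILED for:", ", ".join(failed))
    else:
        print("All monitored services are active after patching.")
```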

Posted 1 week ago

Apply

6.0 years

18 - 30 Lacs

India

On-site

Role: Senior Database Administrator (DevOps)
Experience: 7+ years
Type: Contract

Job Summary
We are seeking a highly skilled and experienced Database Administrator with a minimum of 6 years of hands-on experience managing complex, high-performance, and secure database environments. This role is pivotal in maintaining and optimizing our multi-platform database infrastructure, which includes PostgreSQL, MariaDB/MySQL, MongoDB, MS SQL Server, and AWS RDS/Aurora instances. You will be working primarily within Linux-based production systems (e.g., RHEL 9.x) and will play a vital role in collaborating with DevOps, Infrastructure, and Data Engineering teams to ensure seamless database performance across environments. The ideal candidate has strong experience with infrastructure automation tools like Terraform and Ansible, is proficient with Docker, and is well versed in cloud environments, particularly AWS. This is a critical role where your efforts will directly impact system stability, scalability, and security across all environments.

Key Responsibilities
- Design, deploy, monitor, and manage databases across production and staging environments.
- Ensure high availability, performance, and data integrity for mission-critical systems.
- Automate database provisioning, configuration, and maintenance using Terraform and Ansible.
- Administer Linux-based systems for database operations with an emphasis on system reliability and uptime.
- Establish and maintain monitoring systems, set up proactive alerts, and rapidly respond to performance issues or incidents.
- Work closely with DevOps and Data Engineering teams to integrate infrastructure with MLOps and CI/CD pipelines.
- Implement and enforce database security best practices, including data encryption, user access control, and auditing.
- Conduct root cause analysis and tuning to continuously improve database performance and reduce downtime.

Required Technical Skills

Database Expertise:
- PostgreSQL: Advanced skills in replication, tuning, backup/recovery, partitioning, and logical/physical architecture.
- MariaDB/MySQL: Proven experience in high-availability configurations, schema optimization, and performance tuning.
- MongoDB: Strong understanding of NoSQL structures, including indexing strategies, replica sets, and sharding.
- MS SQL Server: Capable of managing and maintaining enterprise-grade MS SQL Server environments.
- AWS RDS & Aurora: Deep familiarity with provisioning, monitoring, auto-scaling, snapshot management, and failover handling.

Infrastructure & DevOps
- 6+ years of experience as a Database Administrator or DevOps Engineer in Linux-based environments.
- Hands-on expertise with Terraform, Ansible, and Infrastructure as Code (IaC) best practices.
- Knowledge of networking principles, firewalls, VPCs, and security hardening.
- Experience with monitoring tools such as Datadog, Splunk, SignalFx, and PagerDuty for observability and alerting.
- Strong working experience with AWS cloud services (EC2, VPC, IAM, CloudWatch, S3, etc.).
- Exposure to other cloud providers like GCP, Azure, or IBM Cloud is a plus.
- Familiarity with Docker, container orchestration, and integrating databases into containerized environments.

Preferred Qualifications
- Excellent analytical and troubleshooting skills.
- Strong verbal and written communication skills.
- Ability to collaborate in cross-functional teams and drive initiatives independently.
- A passion for automation, observability, and scalability in production-grade environments.

Must Have: AWS, Ansible, DevOps, Terraform

Skills: PostgreSQL, MariaDB, Datadog, containerization, networking, Linux, MongoDB, DevOps, Terraform, AWS Aurora, cloud services, Amazon Web Services (AWS), MS SQL Server, Ansible, AWS, MySQL, AWS RDS, Docker, infrastructure, database
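Routine automation for a role like this often means small AWS scripts around the Terraform/Ansible workflow. As an illustrative sketch only, the Python snippet below takes a manual RDS snapshot before a risky change using boto3; the instance identifier and region are placeholder assumptions.

```python
# Sketch: take a manual RDS snapshot before a change window, via boto3.
import datetime
import boto3

def snapshot_rds(instance_id: str, region: str = "us-east-1") -> str:
    rds = boto3.client("rds", region_name=region)
    stamp = datetime.datetime.utcnow().strftime("%Y%m%d-%H%M%S")
    snapshot_id = f"{instance_id}-pre-change-{stamp}"
    rds.create_db_snapshot(
        DBSnapshotIdentifier=snapshot_id,
        DBInstanceIdentifier=instance_id,
    )
    return snapshot_id

if __name__ == "__main__":
    print(snapshot_rds("prod-postgres-01"))     # hypothetical instance name
```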

Posted 1 week ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site

About Position:
We are conducting an in-person hiring drive for the position of MLOps / Data Science in Pune & Bengaluru on 2nd August 2025. The interview locations are mentioned below:
Pune – Persistent Systems, Veda Complex, Rigveda-Yajurveda-Samaveda-Atharvaveda, Plot No. 39, Phase I, Rajiv Gandhi Information Technology Park, Hinjawadi, Pune, 411057
Bangalore – Persistent Systems, The Cube at Karle Town Center Rd, DadaMastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024

We are looking for an experienced and talented Data Scientist to join our growing data competency team. The ideal candidate will have a strong background in working with Gen AI, ML, LangChain, LangGraph, MLOps architecture strategy, and prompt engineering. You will work closely with our data analysts, engineers, and business teams to ensure optimal performance, scalability, and availability of our data pipelines and analytics.

Role: MLOps, Data Science
Job Location: All PSL locations
Experience: 5+ years
Job Type: Full Time Employment

What You'll Do:
- Design, build, and manage scalable ML model deployment pipelines (CI/CD for ML).
- Automate model training, validation, monitoring, and retraining workflows.
- Implement model governance, versioning, and reproducibility best practices.
- Collaborate with data scientists, engineers, and product teams to operationalize ML solutions.
- Ensure robust monitoring and performance tuning of deployed models.

Expertise You'll Bring:
- Strong experience with MLOps tools & frameworks (MLflow, Kubeflow, SageMaker, Vertex AI, etc.).
- Proficient in containerization (Docker, Kubernetes).
- Good knowledge of cloud platforms (AWS, Azure, or GCP).
- Expertise in Python and familiarity with ML libraries (TensorFlow, PyTorch, scikit-learn).
- Solid understanding of CI/CD, infrastructure as code, and automation tools.

Benefits:
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds. We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.

Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent.
"Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind."
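Model versioning and reproducibility, mentioned in the responsibilities above, typically start with experiment tracking. The snippet below is a minimal, illustrative MLflow sketch in Python; the experiment name and the synthetic dataset are assumptions made for demonstration only.

```python
# Minimal MLOps sketch: track a training run with MLflow so the resulting
# model is versioned and reproducible.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-churn-model")        # hypothetical experiment name
with mlflow.start_run():
    model = LogisticRegression(C=1.0, max_iter=200).fit(X_train, y_train)
    mlflow.log_param("C", 1.0)
    mlflow.log_metric("test_accuracy", model.score(X_test, y_test))
    mlflow.sklearn.log_model(model, "model")     # versioned artifact for deployment
```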

Posted 1 week ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Are you ready to join a world leader in the exciting and dynamic fields of the Pharmaceutical and Medical Device industries? PQE Group has been at the forefront of these industries since 1998, with 40 subsidiaries and more than 2000 employees in Europe, Asia, and the Americas. Due to our constant growth, we are currently looking for a Data Scientist to support our projects in Hyderabad, India or Chandigarh, India.

What you'll do:
· Design, train, and deploy Machine Learning and NLP models (NER, classification, embeddings) using Large Language Models (LLMs) and BERT-like architectures.
· Implement Retrieval-Augmented Generation (RAG) pipelines and LLM-based AI agents for real-time use cases.
· Build interactive dashboards with Streamlit and lightweight web interfaces (HTML/CSS/JavaScript, Laravel or similar frameworks) to visualise insights and create rapid prototypes.
· Define, manage, and optimise relational databases (e.g. MySQL, PostgreSQL) and vector databases (e.g. Pinecone, Weaviate, FAISS) for efficient embedding retrieval.
· Collaborate with Product and DevOps teams on integration and deployment.
· Document code, service prototypes, and architectural decisions clearly and for future reuse.

Must-have requirements:
· 3+ years of professional experience in Data Science, Machine Learning, or AI development.
· Strong command of Python and key libraries (pandas, scikit-learn, PyTorch/TensorFlow, LangChain or similar).
· Hands-on knowledge of LLMs (fine-tuning, prompt engineering, evaluation).
· Experience with RAG systems and/or AI agents (e.g. ReAct, Auto-GPT, CrewAI).
· Experience managing relational databases and designing vector databases.
· Deep understanding of NLP and NER models (BERT, RoBERTa, spaCy, Hugging Face Transformers).
· Experience with workflow orchestrators (Airflow, Prefect) or MLOps infrastructures.
· Familiarity with containerisation (Docker) and CI/CD pipelines.
· Solid statistical foundation and background in probabilistic models.
· Proficiency with Streamlit for rapid prototyping of data-driven apps.
· Front-end development skills (HTML/CSS/JavaScript; familiarity with React/Vite or similar is appreciated).
· Comfortable with Git version control and working in Agile teams.
· Fluent written and spoken English.

Next Steps
Upon receiving your application, if a match is found, the Recruiting department will contact you for an initial Talent Acquisition interview. If there is a positive match, a technical interview with the Hiring Manager will be arranged. If the feedback from the Hiring Manager interview is positive, the recruiter will contact you for the next steps or to discuss our proposal. If the feedback is negative, we will contact you to halt the recruitment process.

Working at PQE Group
As a member of the PQE team, you will be part of a challenging, multicultural company that values collaboration and innovation. PQE Group gives you the opportunity to work on international projects, improve your skills and interact with colleagues from all corners of the world. If you are looking for a rewarding and exciting career, PQE Group is the perfect place for you. Apply now and take the first step towards an amazing future with us.
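The Streamlit prototyping mentioned in the responsibilities above usually looks like a few dozen lines of Python. The sketch below is illustrative only: the NER model is stubbed out with a placeholder function, and in a real prototype it would call the BERT/spaCy pipeline the posting describes. Run it with `streamlit run app.py`.

```python
# Tiny Streamlit prototype: paste text, show "entities" from a stubbed model.
import streamlit as st

def mock_entities(text: str) -> dict:
    # Placeholder for a BERT/spaCy NER model (capitalised tokens as a stand-in).
    return {"entities": [w for w in text.split() if w.istitle()]}

st.title("Entity extraction demo")
user_text = st.text_area("Paste a paragraph to analyse")
if user_text:
    result = mock_entities(user_text)
    st.write("Detected entities (stand-in output):")
    st.write(result["entities"])
```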

Posted 1 week ago

Apply

0.0 - 3.0 years

5 - 7 Lacs

Visakhapatnam, Andhra Pradesh

On-site

Role Overview:
We are seeking a talented and detail-oriented Database Developer with 2+ years of experience to design, develop, and maintain scalable database solutions. The ideal candidate should have a strong command of SQL and be experienced in writing efficient queries and stored procedures and working with data models to support application and reporting needs.

Key Responsibilities:
- Write and optimize SQL queries, stored procedures, functions, views, and triggers
- Design and maintain normalized and denormalized data models
- Develop and maintain ERP processes
- Analyze existing queries for performance improvements and suggest indexing strategies
- Work closely with application developers and analysts to understand data requirements
- Ensure data integrity and consistency across development, staging, and production environments
- Create and maintain technical documentation related to database structures, processes, and queries
- Generate and support custom reports and dashboards (using tools like Superset, etc.)
- Participate in data migration and integration efforts between systems or platforms
- Work with large datasets and ensure optimal data processing and storage

Required Skills:
- Strong hands-on experience with SQL Server, MySQL, PostgreSQL
- Proficiency in writing complex SQL queries, stored procedures, and data transformations
- Understanding of relational database concepts, data modeling, and indexing
- Knowledge of performance tuning techniques (joins, temp tables, query plans)
- Familiarity with ERP tools or scripting

Preferred Qualifications:
- Bachelor's degree in Computer Science, Information Systems, or a related field
- MS SQL; .NET knowledge is good to have
- Knowledge of WMS or MEW, or manufacturing ERP experience
- Knowledge of basic database security, transactions, and locking mechanisms
- Exposure to cloud-based databases
- Experience with version control (Git), Agile methodologies, or similar tools

Nice to Have:
- Experience working in domains like retail, supply chain, warehouse, healthcare, or e-commerce

Send resume to: sowmya.chintada@inventrax.com or janardhan.tanakala@inventrax.com

Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹700,000.00 per year
Benefits: Provident Fund
Experience: total work: 3 years (Preferred)
Location: Visakhapatnam, Andhra Pradesh (Preferred)
Work Location: In person
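The query-plan and indexing work described above follows a simple loop: inspect the plan, add an index, re-check. The sketch below illustrates that loop in Python with SQLite so it is self-contained; the warehouse-style schema and data are invented, and the same idea applies to SQL Server, MySQL or PostgreSQL.

```python
# Illustrative query-tuning loop: look at the plan before and after adding an index.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE stock_moves (id INTEGER PRIMARY KEY, sku TEXT, warehouse TEXT, qty INTEGER)"
)
rows = [(f"SKU-{i % 500}", f"WH-{i % 5}", i % 40) for i in range(5000)]
conn.executemany("INSERT INTO stock_moves (sku, warehouse, qty) VALUES (?, ?, ?)", rows)

query = "SELECT warehouse, SUM(qty) FROM stock_moves WHERE sku = ? GROUP BY warehouse"

def show_plan(label: str) -> None:
    plan = conn.execute("EXPLAIN QUERY PLAN " + query, ("SKU-42",)).fetchall()
    print(label, [row[-1] for row in plan])

show_plan("before index:")     # expect a SCAN of the whole table
conn.execute("CREATE INDEX idx_stock_moves_sku ON stock_moves (sku)")
show_plan("after index:")      # expect a SEARCH using idx_stock_moves_sku
```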

Posted 1 week ago

Apply

5.0 years

45 Lacs

Mysore, Karnataka, India

Remote

Experience : 5.00 + years Salary : INR 4500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Indefinite Contract(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - Portcast) What do you need for this opportunity? Must have skills required: Spark, Generative AI models, LLM, rag, AWS, Docker, GCP, Kafka, Kubernetes, Machine Learning, Python, SQL Portcast is Looking for: About the role: We are looking for a Senior Machine Learning Engineer who specializes in deploying ML and AI models into production. You will handle the full lifecycle—from research and model building to deployment and scaling in real-world environments. This hands-on role requires designing robust algorithms that address key business problems, particularly in visibility, prediction, demand forecasting, and freight audit. Your focus will be on ensuring model accuracy, reliability, and scalability in production systems. What You’ll Do: Develop and deploy machine learning models from initial research to production, ensuring scalability and performance in live environments Own the end-to-end ML pipeline, including data processing, model development, testing, deployment, and continuous optimization Design and implement machine learning algorithms that address key business problems that our product focuses on in visibility, prediction, demand forecasting and freight audit Ensure reliable and scalable ML infrastructure, automating deployment and monitoring processes using MLOps best practices Perform feature engineering, model tuning, and validation to ensure that models are production-ready and optimized for performance Build, test, and deploy real-time prediction models, maintaining version control and performance tracking To thrive in this role, you must have: Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or a related field 5+ years of experience in building, deploying, and scaling machine learning models in production environments Experience deploying Generative AI models in production environments, with a strong understanding of Retrieval-Augmented Generation (RAG), AI Agents, and expertise in prompt engineering techniques Proven experience with the full product lifecycle, taking models from R&D to deployment in fast-paced environments Experience working in a product-based company, preferably within a startup environment with early-stage technical product development Strong expertise in Python and SQL, along with experience in cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes) Experience with real-time data processing, anomaly detection, and time-series forecasting in production Experience working with large datasets and big data technologies like Spark and Kafka to build scalable solutions First-principles thinking and excellent problem-solving skills, with a proactive approach to addressing challenges A self-starter mentality, with the ability to take ownership of projects from end to end and work autonomously to drive results Excellent communication skills, with the ability to convey complex technical concepts and a strong customer-obsessed mindset Engagement Type: Direct-hire Job Type: Permanent Location: Remote Working time: 9:00 AM to 6:00 PM IST 5 rounds 15 mins - HR screening call with G 30 mins - Interview with HM 3-5 days- Take Assignment 30 mins - Tech panel interview 30 mins - CEO interview (cultural fit round) How to apply 
for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!
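Anomaly detection on time-series signals, mentioned in the requirements above, often begins with a simple rolling baseline before anything model-heavy. The sketch below is illustrative Python with synthetic data (a rolling z-score over made-up port transit times); in a real pipeline the data would arrive via Kafka/Spark as the posting describes.

```python
# Illustrative baseline: flag anomalous transit times with a rolling z-score.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
transit_days = pd.Series(14 + rng.normal(0, 1.0, 200))   # synthetic signal
transit_days.iloc[150] = 25                               # injected anomaly

window = 30
rolling_mean = transit_days.rolling(window).mean()
rolling_std = transit_days.rolling(window).std()
z_scores = (transit_days - rolling_mean) / rolling_std

anomalies = transit_days[z_scores.abs() > 3]
print(anomalies)                                          # expect the spike at index 150
```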

Posted 1 week ago

Apply

5.0 years

45 Lacs

Patna, Bihar, India

Remote

Experience : 5.00 + years Salary : INR 4500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Indefinite Contract(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - Portcast) What do you need for this opportunity? Must have skills required: Spark, Generative AI models, LLM, rag, AWS, Docker, GCP, Kafka, Kubernetes, Machine Learning, Python, SQL Portcast is Looking for: About the role: We are looking for a Senior Machine Learning Engineer who specializes in deploying ML and AI models into production. You will handle the full lifecycle—from research and model building to deployment and scaling in real-world environments. This hands-on role requires designing robust algorithms that address key business problems, particularly in visibility, prediction, demand forecasting, and freight audit. Your focus will be on ensuring model accuracy, reliability, and scalability in production systems. What You’ll Do: Develop and deploy machine learning models from initial research to production, ensuring scalability and performance in live environments Own the end-to-end ML pipeline, including data processing, model development, testing, deployment, and continuous optimization Design and implement machine learning algorithms that address key business problems that our product focuses on in visibility, prediction, demand forecasting and freight audit Ensure reliable and scalable ML infrastructure, automating deployment and monitoring processes using MLOps best practices Perform feature engineering, model tuning, and validation to ensure that models are production-ready and optimized for performance Build, test, and deploy real-time prediction models, maintaining version control and performance tracking To thrive in this role, you must have: Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or a related field 5+ years of experience in building, deploying, and scaling machine learning models in production environments Experience deploying Generative AI models in production environments, with a strong understanding of Retrieval-Augmented Generation (RAG), AI Agents, and expertise in prompt engineering techniques Proven experience with the full product lifecycle, taking models from R&D to deployment in fast-paced environments Experience working in a product-based company, preferably within a startup environment with early-stage technical product development Strong expertise in Python and SQL, along with experience in cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes) Experience with real-time data processing, anomaly detection, and time-series forecasting in production Experience working with large datasets and big data technologies like Spark and Kafka to build scalable solutions First-principles thinking and excellent problem-solving skills, with a proactive approach to addressing challenges A self-starter mentality, with the ability to take ownership of projects from end to end and work autonomously to drive results Excellent communication skills, with the ability to convey complex technical concepts and a strong customer-obsessed mindset Engagement Type: Direct-hire Job Type: Permanent Location: Remote Working time: 9:00 AM to 6:00 PM IST 5 rounds 15 mins - HR screening call with G 30 mins - Interview with HM 3-5 days- Take Assignment 30 mins - Tech panel interview 30 mins - CEO interview (cultural fit round) How to apply 
for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

45 Lacs

Thiruvananthapuram, Kerala, India

Remote

Experience : 5.00 + years Salary : INR 4500000.00 / year (based on experience) Expected Notice Period : 15 Days Shift : (GMT+05:30) Asia/Kolkata (IST) Opportunity Type : Remote Placement Type : Full Time Indefinite Contract(40 hrs a week/160 hrs a month) (*Note: This is a requirement for one of Uplers' client - Portcast) What do you need for this opportunity? Must have skills required: Spark, Generative AI models, LLM, rag, AWS, Docker, GCP, Kafka, Kubernetes, Machine Learning, Python, SQL Portcast is Looking for: About the role: We are looking for a Senior Machine Learning Engineer who specializes in deploying ML and AI models into production. You will handle the full lifecycle—from research and model building to deployment and scaling in real-world environments. This hands-on role requires designing robust algorithms that address key business problems, particularly in visibility, prediction, demand forecasting, and freight audit. Your focus will be on ensuring model accuracy, reliability, and scalability in production systems. What You’ll Do: Develop and deploy machine learning models from initial research to production, ensuring scalability and performance in live environments Own the end-to-end ML pipeline, including data processing, model development, testing, deployment, and continuous optimization Design and implement machine learning algorithms that address key business problems that our product focuses on in visibility, prediction, demand forecasting and freight audit Ensure reliable and scalable ML infrastructure, automating deployment and monitoring processes using MLOps best practices Perform feature engineering, model tuning, and validation to ensure that models are production-ready and optimized for performance Build, test, and deploy real-time prediction models, maintaining version control and performance tracking To thrive in this role, you must have: Bachelor’s, Master’s, or PhD in Computer Science, Engineering, or a related field 5+ years of experience in building, deploying, and scaling machine learning models in production environments Experience deploying Generative AI models in production environments, with a strong understanding of Retrieval-Augmented Generation (RAG), AI Agents, and expertise in prompt engineering techniques Proven experience with the full product lifecycle, taking models from R&D to deployment in fast-paced environments Experience working in a product-based company, preferably within a startup environment with early-stage technical product development Strong expertise in Python and SQL, along with experience in cloud platforms (AWS, GCP, or Azure) and containerization (Docker, Kubernetes) Experience with real-time data processing, anomaly detection, and time-series forecasting in production Experience working with large datasets and big data technologies like Spark and Kafka to build scalable solutions First-principles thinking and excellent problem-solving skills, with a proactive approach to addressing challenges A self-starter mentality, with the ability to take ownership of projects from end to end and work autonomously to drive results Excellent communication skills, with the ability to convey complex technical concepts and a strong customer-obsessed mindset Engagement Type: Direct-hire Job Type: Permanent Location: Remote Working time: 9:00 AM to 6:00 PM IST 5 rounds 15 mins - HR screening call with G 30 mins - Interview with HM 3-5 days- Take Assignment 30 mins - Tech panel interview 30 mins - CEO interview (cultural fit round) How to apply 
for this opportunity? Step 1: Click On Apply! And Register or Login on our portal. Step 2: Complete the Screening Form & Upload updated Resume Step 3: Increase your chances to get shortlisted & meet the client for the Interview! About Uplers: Our goal is to make hiring reliable, simple, and fast. Our role will be to help all our talents find and apply for relevant contractual onsite opportunities and progress in their career. We will support any grievances or challenges you may face during the engagement. (Note: There are many more opportunities apart from this on the portal. Depending on the assessments you clear, you can apply for them as well). So, if you are ready for a new challenge, a great work environment, and an opportunity to take your career to the next level, don't hesitate to apply today. We are waiting for you!

Posted 1 week ago

Apply

5.0 years

45 Lacs

Vijayawada, Andhra Pradesh, India

Remote

Senior Machine Learning Engineer - Portcast (via Uplers). This is the same remote listing as the Portcast posting detailed earlier on this page (5+ years, INR 4,500,000 per year, IST hours, full-time indefinite contract, direct-hire); see that entry for the full role description, requirements, interview rounds, and application steps.

Posted 1 week ago

Apply

0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: SAP
Management Level: Manager

Job Description & Summary:
At PwC, our people in business application consulting specialise in consulting services for a variety of business applications, helping clients optimise operational efficiency. These individuals analyse client needs, implement software solutions, and provide training and support for seamless integration and utilisation of business applications, enabling clients to achieve their strategic objectives. As an SAP consulting generalist at PwC, you will focus on providing consulting services across various SAP applications to clients, analysing their needs, implementing software solutions, and offering training and support for effective utilisation of SAP applications. Your versatile knowledge will allow you to assist clients in optimising operational efficiency and achieving their strategic objectives.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:

SAP RAP Development:
Design and develop applications using RAP with ABAP on SAP S/4HANA or the SAP BTP ABAP Environment.
Create and manage CDS views, behavior definitions, and service definitions.
Implement business logic using ABAP classes and interfaces.
Ensure adherence to clean core principles and extensibility guidelines.

SAP BTP Integration:
Develop and deploy applications on SAP BTP using services such as SAP HANA Cloud, SAP Integration Suite, SAP Event Mesh, and SAP Launchpad Service.
Utilize SAP Business Application Studio (BAS) for development and deployment.
Integrate RAP applications with Fiori/UI5 front ends and other SAP modules.

Cloud & DevOps Practices:
Implement CI/CD pipelines for SAP BTP applications.
Monitor and optimize application performance and scalability.
Ensure security and compliance in cloud deployments.

Project & Support Activities:
Participate in the end-to-end project lifecycle, including design, development, testing, and deployment.
Collaborate with functional consultants, architects, and business stakeholders.
Provide technical documentation and user training.
Troubleshoot and resolve issues in RAP and BTP environments.

Fiori Development:
Enhance standard SAP Fiori apps as per business requirements.
Develop and maintain OData services using SAP Gateway.
Ensure responsive and intuitive UI/UX design across devices.

Integration & Backend Connectivity:
Integrate Fiori apps with SAP modules (e.g., MM, SD, PP, FI).
Collaborate with ABAP developers for backend logic and data provisioning.
Troubleshoot and resolve issues related to Fiori app performance and data flow.

Project & Support Activities (Fiori):
Participate in SAP implementation, upgrade, and migration projects.
Provide technical support and maintenance for existing Fiori applications.
Conduct unit testing, performance tuning, and documentation.
Train end users and provide post-go-live support.

1. SAP PM Implementation
Implement SAP PM solutions including Preventive, Corrective, and Predictive Maintenance.
Configure master data: Functional Locations, Equipment, Task Lists, Maintenance Plans, BOMs, Measuring Points, and Work Centers.
Customize notification and work order processes.
Integrate SAP PM with other modules like MM, PP, QM, and FICO.

2. Business Process Analysis
Conduct workshops to gather business requirements.
Analyze current maintenance processes and identify areas for improvement.
Design and document functional specifications and blueprints.

3. Support & Optimization
Provide day-to-day support for SAP PM users.
Troubleshoot and resolve system issues and bugs.
Monitor system performance and suggest enhancements.
Train end users and create user manuals.

4. Project Management
Lead or support SAP PM-related projects and rollouts.
Coordinate with cross-functional teams and external vendors.
Ensure project deliverables meet quality standards and deadlines.

5. Reporting & Compliance
Develop and maintain KPIs and reports for maintenance performance.
Ensure compliance with internal controls and audit requirements.
Support Zero-Based Budgeting (ZBB) and cost tracking for maintenance activities.

Mandatory Skill Sets: SAP RAP + BTP
Preferred Skill Sets: SAP RAP + BTP
Years of Experience Required: 7+ years
Education Qualification: B.Tech, MBA, MCA, M.Tech
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Master of Engineering, Bachelor of Engineering, Master of Business Administration
Degrees/Fields of Study preferred:
Certifications (if blank, certifications not specified):
Required Skills: SAP Business Technology Platform (SAP BTP)
Optional Skills: Accepting Feedback, Active Listening, Analytical Thinking, Application Software, Business Model Development, Business Process Modeling, Business Systems, Coaching and Feedback, Communication, Creativity, Developing Training Materials, Embracing Change, Emerging Technologies, Emotional Regulation, Empathy, Enterprise Integration, Enterprise Software, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Innovative Design, Intellectual Curiosity, IT Infrastructure {+ 23 more}
Desired Languages (if blank, desired languages not specified):
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
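Since the role involves developing and consuming OData services exposed through SAP Gateway, the hedged Python sketch below shows what a simple read against such a service could look like from a client's side using plain HTTP. The host name, service path, entity set, field names and credentials are placeholders invented for illustration, and real Gateway systems typically add CSRF token handling for write operations.

```python
# Hedged sketch: querying an OData v2 service (as exposed by SAP Gateway)
# over plain HTTP. Host, service path, entity set and credentials are
# placeholders, not a real system.
import requests

BASE_URL = "https://sap-gateway.example.com/sap/opu/odata/sap/ZPM_WORKORDER_SRV"

params = {
    "$filter": "Plant eq '1000' and Status eq 'OPEN'",  # standard OData v2 query options
    "$top": "10",
    "$format": "json",
}

response = requests.get(
    f"{BASE_URL}/WorkOrderSet",           # hypothetical entity set
    params=params,
    auth=("SERVICE_USER", "SECRET"),       # basic-auth placeholder
    timeout=30,
)
response.raise_for_status()

# OData v2 wraps results in a "d"/"results" JSON envelope.
for order in response.json()["d"]["results"]:
    print(order.get("OrderNumber"), order.get("Status"))
```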

Posted 1 week ago

Apply

1.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Operations
Management Level: Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in intelligent automation at PwC will focus on conducting process mining, designing next-generation small- and large-scale automation solutions, and implementing intelligent process automation, robotic process automation and digital workflow solutions to help clients achieve operational efficiencies and reduce costs.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities:
Ensure solutions align with business requirements and industry best practices.
Gather, document, and analyze business requirements through stakeholder interviews, workshops, and surveys.
Collaborate with business stakeholders and technical teams to translate business needs into functional specifications.
Conduct detailed analysis of business processes to identify areas for improvement and automation opportunities.
Develop and present business cases and solution proposals to stakeholders for approval and implementation.
Facilitate communication and coordination between business units and IT teams to ensure alignment and project success.
Create workflow diagrams, use cases, and other documentation to support project requirements and design.
Monitor and report on project progress, addressing any issues or roadblocks to ensure timely delivery.
Perform gap analysis between current and desired business processes and systems, recommending solutions for bridging the gaps.
Conduct user acceptance testing (UAT) and validate solutions against business requirements.
Stay informed about industry trends and best practices, advocating for the adoption of innovative technologies and methodologies.

Desired profile:
Experience in customer-facing roles, with the ability to understand and translate business requirements effectively.
Proven experience in collaborating with developers and technical teams.
Excellent communication, presentation, and interpersonal skills.
Experience in creating business cases and obtaining stakeholder sign-off.
Strong problem-solving skills and attention to detail.

Mandatory Skill Sets: RPA framework, business requirement gathering and assessment
Preferred Skill Sets: RPA framework, business requirement gathering and assessment
Years of Experience Required: 1-4 years
Education Qualification: B.Tech/MBA
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Master of Business Administration, Bachelor of Technology
Degrees/Fields of Study preferred:
Certifications (if blank, certifications not specified):
Required Skills: Large Language Model (LLM) Fine-Tuning, Python (Programming Language)
Optional Skills: Accepting Feedback, Active Listening, Agile Methodology, Automation Algorithms, Automation Engineering, Automation Framework Design and Development, Automation Programming, Automation Solutions, Automation Studio, Automation System Efficiency, Blue Prism, Business Analysis, Business Performance Management, Business Process Analysis, Business Process Automation (BPA), Business Transformation, Business Value Optimization, C++ Programming Language, Cognitive Automation, Communication, Conducting Discovery, Configuration Management (CM), Continuous Process Improvement, Data Analytics {+ 31 more}
Desired Languages (if blank, desired languages not specified):
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Senior Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities (position responsibilities and expectations):
Designing and building analytical/DL/ML algorithms using Python, R and other statistical tools.
Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
Ability to learn a new scripting language or analytics platform.

Technical skills required (must have):
Hands-on exposure to Generative AI (design and development of GenAI applications in production).
Strong understanding of RAG, vector databases, LangChain and multimodal AI applications.
Strong understanding of deploying and optimizing AI applications in production.
Strong knowledge of statistical and data mining techniques like linear and logistic regression analysis, decision trees, bagging, boosting, time series and non-parametric analysis.
Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.).
Strong knowledge of SQL and R/Python and experience with distributed data/computing tools and IDEs.
Experience in advanced text analytics (NLP, NLU, NLG).
Strong hands-on experience of end-to-end statistical model development and implementation.
Understanding of LLMOps and MLOps for scalable ML development.
Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.).
Expert-level proficiency in algorithm-building languages like SQL, R and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
Exposure to cloud platform (Azure, AWS or GCP) technologies and services like Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.

Technical skills required (any one or more):
Experience in video/image analytics (computer vision).
Experience in IoT/machine logs data analysis.
Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx or KNIME.
Expertise in cloud analytics platforms (Azure, AWS or Google).
Experience in process mining, with expertise in Celonis or other tools.
Proven capability in using Generative AI services like OpenAI and Google (Gemini).
Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.).
Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA and PEFT techniques.
Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion.

Mandatory Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of Experience Required: 3-6 years
Education Qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Bachelor of Technology, Doctor of Philosophy, Bachelor of Engineering
Degrees/Fields of Study preferred:
Certifications (if blank, certifications not specified):
Required Skills: Chatbots, Data Structures, Generative AI
Optional Skills: Accepting Feedback, Active Listening, AI Implementation, Analytical Thinking, C++ Programming Language, Communication, Complex Data Analysis, Creativity, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Embracing Change, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Learning Agility, Machine Learning {+ 25 more}
Desired Languages (if blank, desired languages not specified):
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:
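One of the preferred skills above is fine-tuning pre-trained models with LoRA/QLoRA/PEFT. The sketch below shows, under stated assumptions, how a LoRA adapter might be attached to a small open causal language model using the Hugging Face transformers and peft libraries; the model name, rank and target modules are illustrative choices rather than a prescribed recipe, and the training loop itself is omitted.

```python
# Hedged sketch of parameter-efficient fine-tuning (LoRA via the PEFT library)
# on a small open causal LM. Model name and hyperparameters are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model, TaskType

base_model_name = "facebook/opt-350m"     # assumption: any small causal LM works here
tokenizer = AutoTokenizer.from_pretrained(base_model_name)
model = AutoModelForCausalLM.from_pretrained(base_model_name)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=8,                                   # low-rank adapter dimension
    lora_alpha=16,                         # scaling factor for the adapter updates
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # attention projections in OPT-style models
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # typically well under 1% of the base weights

# From here, the adapted model plugs into a standard transformers Trainer loop
# over the tokenized fine-tuning dataset.
```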

Posted 1 week ago

Apply

5.0 years

45 Lacs

Chandigarh, India

Remote

Senior Machine Learning Engineer - Portcast (via Uplers). This is the same remote listing as the Portcast posting detailed earlier on this page (5+ years, INR 4,500,000 per year, IST hours, full-time indefinite contract, direct-hire); see that entry for the full role description, requirements, interview rounds, and application steps.

Posted 1 week ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Microsoft
Management Level: Associate

Job Description & Summary:
At PwC, our people in integration and platform architecture focus on designing and implementing seamless integration solutions and robust platform architectures for clients. They enable efficient data flow and optimise technology infrastructure for enhanced business performance. Those in solution architecture at PwC will design and implement innovative technology solutions to meet clients' business needs. You will leverage your experience in analysing requirements and developing technical designs to enable the successful delivery of solutions.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

We are looking for experienced software developers who are:
Self-starters and goal-oriented, with strong analytical and problem-solving skills.
Willing to learn new technologies and adapt to changing project requirements.
Able to prioritize tasks and manage time effectively to meet deadlines.
Strong in verbal and written communication.
Able to work collaboratively in a team setting.

Responsibilities:
Design and develop: Write clean, scalable code in C# on .NET, with excellent working knowledge of OOP concepts and design patterns, and strong hands-on experience in mobile application software development for Windows/cloud platforms.
Application maintenance: Maintain and improve existing software applications.
Debugging: Troubleshoot and resolve software defects and issues.
Collaboration: Work collaboratively with other developers, designers, and product managers to deliver high-quality products.
Testing: Conduct unit testing and participate in code reviews to ensure quality standards are met.
Documentation: Create and maintain documentation for software applications and systems.
Deployment: Assist in the deployment of applications across various environments.
Performance tuning: Optimize application performance and ensure responsiveness.
Stay updated: Keep up to date with the latest industry trends and technologies to enhance skills and knowledge.
Experience with computer graphics and system performance analysis will be a strong plus.

Mandatory Skill Sets:
Strong understanding of the .NET Framework and .NET Core; proficiency in C#
Familiarity with Web API development and RESTful services
Experience with Entity Framework or ADO.NET for data access
Strong skills in SQL; ability to design and optimize queries and work with databases like SQL Server
Knowledge of HTML, CSS, and JavaScript
Experience with front-end frameworks like Angular, React, or Vue.js is a plus
Experience with version control systems, particularly Git
Familiarity with unit testing frameworks such as MSTest, NUnit, or xUnit
Understanding of common design patterns and best practices in software architecture
Experience with CI/CD tools and pipelines
Exposure to Agile methodology
Past experience of working in C/C++ would be a plus
Experience with computer graphics and system performance analysis will be a strong plus

Certifications/Credentials: AZ-900 (Azure Fundamentals), AZ-204 (Azure Developer Associate)
Preferred Skill Sets: .NET Developer
Years of Experience Required: 2-6 years
Education Qualification: B.Tech/BE/M.Tech from a reputed institution/university as per the hiring norms
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Bachelor of Engineering, Bachelor of Technology
Degrees/Fields of Study preferred:
Certifications (if blank, certifications not specified):
Required Skills: .NET Core, .NET Micro Framework, C# (Programming Language)
Optional Skills: Accepting Feedback, Active Listening, Amazon Web Services (AWS), Architectural Engineering, Brainstorm Facilitation, Business Impact Analysis (BIA), Business Process Modeling, Business Requirements Analysis, Business Systems, Business Value Analysis, Cloud Strategy, Communication, Competitive Advantage, Competitive Analysis, Conducting Research, Emotional Regulation, Empathy, Enterprise Architecture, Enterprise Integration, Evidence-Based Practice (EBP), Feasibility Studies, Google Cloud Platform, Growth Management, Inclusion {+ 36 more}
Desired Languages (if blank, desired languages not specified):
Travel Requirements: Not Specified
Available for Work Visa Sponsorship? No
Government Clearance Required? No
Job Posting End Date:

Posted 1 week ago

Apply

3.0 years

0 Lacs

Greater Kolkata Area

On-site

Line of Service: Advisory
Industry/Sector: Not Applicable
Specialism: Data, Analytics & AI
Management Level: Associate

Job Description & Summary:
At PwC, our people in data and analytics engineering focus on leveraging advanced technologies and techniques to design and develop robust data solutions for clients. They play a crucial role in transforming raw data into actionable insights, enabling informed decision-making and driving business growth. Those in artificial intelligence and machine learning at PwC will focus on developing and implementing advanced AI and ML solutions to drive innovation and enhance business processes. Your work will involve designing and optimising algorithms, models, and systems to enable intelligent decision-making and automation.

Why PwC:
At PwC, you will be part of a vibrant community of solvers that leads with trust and creates distinctive outcomes for our clients and communities. This purpose-led and values-driven work, powered by technology in an environment that drives innovation, will enable you to make a tangible impact in the real world. We reward your contributions, support your wellbeing, and offer inclusive benefits, flexibility programmes and mentorship that will help you thrive in work and life. Together, we grow, learn, care, collaborate, and create a future of infinite experiences for each other. Learn more about us.

At PwC, we believe in providing equal employment opportunities, without any discrimination on the grounds of gender, ethnic background, age, disability, marital status, sexual orientation, pregnancy, gender identity or expression, religion or other beliefs, perceived differences and status protected by law. We strive to create an environment where each one of our people can bring their true selves and contribute to their personal growth and the firm's growth. To enable this, we have zero tolerance for any discrimination and harassment based on the above considerations.

Responsibilities (position responsibilities and expectations):
Designing and building analytical/DL/ML algorithms using Python, R and other statistical tools.
Strong data representation and lucid presentation (of analysis/modelling output) using Python, R Markdown, PowerPoint, Excel, etc.
Ability to learn a new scripting language or analytics platform.

Technical skills required (must have):
Hands-on exposure to Generative AI (design and development of GenAI applications in production).
Strong understanding of RAG, vector databases, LangChain and multimodal AI applications.
Strong understanding of deploying and optimizing AI applications in production.
Strong knowledge of statistical and data mining techniques like linear and logistic regression analysis, decision trees, bagging, boosting, time series and non-parametric analysis.
Strong knowledge of DL and neural network architectures (CNN, RNN, LSTM, Transformers, etc.).
Strong knowledge of SQL and R/Python and experience with distributed data/computing tools and IDEs.
Experience in advanced text analytics (NLP, NLU, NLG).
Strong hands-on experience of end-to-end statistical model development and implementation.
Understanding of LLMOps and MLOps for scalable ML development.
Basic understanding of DevOps and deployment of models into production (PyTorch, TensorFlow, etc.).
Expert-level proficiency in algorithm-building languages like SQL, R and Python, and data visualization tools like Shiny, Qlik, Power BI, etc.
Exposure to cloud platform (Azure, AWS or GCP) technologies and services like Azure AI / SageMaker / Vertex AI, AutoML, Azure Index, Azure Functions, OCR, OpenAI, storage, scaling, etc.

Technical skills required (any one or more):
Experience in video/image analytics (computer vision).
Experience in IoT/machine logs data analysis.
Exposure to data analytics platforms like Domino Data Lab, c3.ai, H2O, Alteryx or KNIME.
Expertise in cloud analytics platforms (Azure, AWS or Google).
Experience in process mining, with expertise in Celonis or other tools.
Proven capability in using Generative AI services like OpenAI and Google (Gemini).
Understanding of agentic AI frameworks (LangGraph, AutoGen, etc.).
Understanding of fine-tuning for pre-trained models like GPT, LLaMA, Claude, etc. using LoRA, QLoRA and PEFT techniques.
Proven capability in building customized models from open-source distributions like Llama and Stable Diffusion.

Mandatory Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Preferred Skill Sets: AI chatbots, data structures, GenAI, object-oriented programming, IDE, API, LLM prompts, Streamlit
Years of Experience Required: 3-6 years
Education Qualification: BE, B.Tech, M.Tech, M.Stat, Ph.D., M.Sc. (Stats/Maths)
Education (if blank, degree and/or field of study not specified) - Degrees/Fields of Study required: Doctor of Philosophy, Bachelor of Engineering, Bachelor of Technology
Degrees/Fields of Study preferred:
Certifications (if blank, certifications not specified):
Required Skills: Chatbots, Data Structures, Generative AI
Optional Skills: Accepting Feedback, Active Listening, AI Implementation, C++ Programming Language, Communication, Complex Data Analysis, Data Analysis, Data Infrastructure, Data Integration, Data Modeling, Data Pipeline, Data Quality, Deep Learning, Emotional Regulation, Empathy, GPU Programming, Inclusion, Intellectual Curiosity, Java (Programming Language), Machine Learning, Machine Learning Libraries, Named Entity Recognition, Natural Language Processing (NLP), Natural Language Toolkit (NLTK) {+ 20 more}
Desired Languages (if blank, desired languages not specified):
Travel Requirements:
Available for Work Visa Sponsorship?
Government Clearance Required?
Job Posting End Date:
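To make the classical modelling skills in this listing (regression, trees, boosting, validation) concrete, here is a minimal scikit-learn sketch of the usual workflow: preprocessing and a boosted-tree classifier wrapped in a single pipeline and scored with cross-validation. The dataset is synthetic and every name is illustrative rather than part of the role.

```python
# Minimal classical-ML workflow sketch: preprocessing + gradient boosting in
# one scikit-learn pipeline, evaluated with 5-fold cross-validation on
# synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a real tabular dataset.
X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)

pipeline = make_pipeline(
    StandardScaler(),                       # keep preprocessing inside the pipeline
    GradientBoostingClassifier(random_state=42),
)

scores = cross_val_score(pipeline, X, y, cv=5, scoring="roc_auc")
print(f"Mean ROC-AUC over 5 folds: {scores.mean():.3f}")
```

Keeping the scaler and model in one pipeline ensures the preprocessing is refit inside each cross-validation fold, which avoids leaking test-fold statistics into training.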

Posted 1 week ago

Apply

5.0 years

45 Lacs

Ahmedabad, Gujarat, India

Remote

Senior Machine Learning Engineer - Portcast (via Uplers). This is the same remote listing as the Portcast posting detailed earlier on this page (5+ years, INR 4,500,000 per year, IST hours, full-time indefinite contract, direct-hire); see that entry for the full role description, requirements, interview rounds, and application steps.

Posted 1 week ago

Apply

5.0 years

45 Lacs

Kolkata, West Bengal, India

Remote

Senior Machine Learning Engineer - Portcast (via Uplers). This is the same remote listing as the Portcast posting detailed earlier on this page (5+ years, INR 4,500,000 per year, IST hours, full-time indefinite contract, direct-hire); see that entry for the full role description, requirements, interview rounds, and application steps.

Posted 1 week ago

Apply

5.0 years

45 Lacs

Guwahati, Assam, India

Remote

Senior Machine Learning Engineer - Portcast (via Uplers). This is the same remote listing as the Portcast posting detailed earlier on this page (5+ years, INR 4,500,000 per year, IST hours, full-time indefinite contract, direct-hire); see that entry for the full role description, requirements, interview rounds, and application steps.

Posted 1 week ago

Apply

5.0 years

45 Lacs

Cuttack, Odisha, India

Remote

Senior Machine Learning Engineer - Portcast (via Uplers). This is the same remote listing as the Portcast posting detailed earlier on this page (5+ years, INR 4,500,000 per year, IST hours, full-time indefinite contract, direct-hire); see that entry for the full role description, requirements, interview rounds, and application steps.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies