
18256 Tuning Jobs - Page 9

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Title: Network Architect (Network Traffic Intelligence & Flow Data Systems) Location : Pune, India (with Travel to Onsite) Experience Required : 8+ years in network traffic monitoring and flow data systems, with 2+ years of hands-on experience in configuring and deploying nProbe Cento in high-throughput environments. Overview : We are seeking a specialist with deep expertise in network traffic probes , specifically nProbe Cento , to support the deployment, configuration, and integration of flow record generation systems. The consultant will work closely with Kafka developers, solution architects, and network teams to ensure accurate, high-performance flow data capture and export. This role is critical to ensure the scalability, observability, and compliance of the network traffic record infrastructure. Key Responsibilities : Design and document the end-to-end architecture for network traffic record systems, including flow ingestion, processing, storage, and retrieval. Deploy and configure nProbe Cento on telecom-grade network interfaces. Tune probe performance using PF_RING ZC drivers for high-speed traffic capture. Configure IPFIX/NetFlow export and integrate with Apache Kafka for real-time data streaming. Set up DPI rules to identify application-level traffic (e.g., popular messaging and social media applications). Align flow record schema with Detail Record specification. Lead the integration of nProbe Cento, Kafka, Apache Spark, and Cloudera CDP components into a unified data pipeline. Collaborate with Kafka and API teams to ensure compatibility of data formats and ingestion pipelines. Define interface specifications, deployment topologies, and data schemas for flow records and detail records. Monitor probe health, performance, and packet loss; implement logging and alerting mechanisms. Collaborate with security teams to implement data encryption, access control, and compliance with regulatory standards. Guide development and operations teams through SIT/UAT, performance tuning, and production rollout. Provide documentation, training, and handover materials for long-term operational support. Required Skills & Qualifications : Proven hands-on experience with nProbe Cento in production environments. Strong understanding of IPFIX, NetFlow, sFlow, and flow-based monitoring principles. Experience with Cloudera SDX, Ranger, Atlas, and KMS for data governance and security. Familiarity with HashiCorp Vault for secrets management. Strong understanding of network packet brokers (e.g., Gigamon, Ixia) and traffic aggregation strategies. Proven ability to design high-throughput , fault-tolerant, and cloud-native architectures. Experience with Kafka integration , including topic configuration and message formatting. Familiarity with DPI technologies and application traffic classification. Proficiency in Linux system administration, shell scripting, and network interface tuning . Knowledge of telecom network interfaces and traffic tapping strategies . Experience with PF_RING, ntopng , and related ntop tools (preferred). Ability to work independently and collaboratively with cross-functional technical teams. Excellent documentation and communication skills. Certifications in Cloudera, Kafka, or cloud platforms (e.g., AWS Architect, GCP Data Engineer) will be advantageous. A little about us: Innova Solutions is a diverse and award-winning global technology services partner. 
We provide our clients with strategic technology, talent, and business transformation solutions, enabling them to be leaders in their field. Founded in 1998, headquartered in Atlanta (Duluth), Georgia. Employs over 50,000 professionals worldwide, with annual revenue approaching $3.0B. Delivers strategic technology and business transformation solutions globally. Operates through global delivery centers across North America, Asia, and Europe. Provides services for data center migration and workload development for cloud service providers. Awardee of prestigious recognitions including: Women’s Choice Awards - Best Companies to Work for Women & Millennials, 2024 Forbes, America’s Best Temporary Staffing and Best Professional Recruiting Firms, 2023 American Best in Business, Globee Awards, Healthcare Vulnerability Technology Solutions, 2023 Global Health & Pharma, Best Full Service Workforce Lifecycle Management Enterprise, 2023 Received 3 SBU Leadership in Business Awards Stevie International Business Awards, Denials Remediation Healthcare Technology Solutions, 2023
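The posting above centers on exporting nProbe Cento flow records (IPFIX/NetFlow with DPI labels) into Apache Kafka for downstream processing. As a rough illustration of what a consumer of such a feed might look like, here is a minimal Python sketch using kafka-python; the topic name, DPI application labels, and JSON field names (modeled on common nProbe/IPFIX element names) are assumptions, not details from the listing.

```python
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "flow-records",                       # hypothetical topic name
    bootstrap_servers=["localhost:9092"],
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
    auto_offset_reset="latest",
    group_id="dpi-flow-audit",
)

TARGET_APPS = {"WhatsApp", "Telegram", "Facebook"}  # hypothetical DPI labels

for message in consumer:
    flow = message.value
    # nProbe Cento can export flows as JSON over Kafka; the field names below
    # follow common nProbe/IPFIX element names but should be checked against
    # the actual export template configured on the probe.
    app = flow.get("L7_PROTO_NAME", "Unknown")
    if app in TARGET_APPS:
        print(f'{flow.get("IPV4_SRC_ADDR")} -> {flow.get("IPV4_DST_ADDR")} '
              f'{app} bytes={flow.get("IN_BYTES", 0)}')
```

In a production pipeline the same records would typically flow on into Spark or a Cloudera CDP cluster rather than being printed.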

Posted 1 day ago

Apply

5.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Position Title: Software Engineer Consultant/Expert – GCP Data Engineer 34350
Location: Chennai
Engagement Type: Contract
Compensation: Up to ₹18 LPA
Notice Period: Immediate joiners preferred
Work Mode: Onsite

Role Overview
This role is for a proactive Google Cloud Platform (GCP) Data Engineer who will contribute to the modernization of a cloud-based enterprise data warehouse. The ideal candidate will focus on integrating diverse data sources to support advanced analytics and AI/ML-driven solutions, as well as designing scalable pipelines and data products for real-time and batch processing. This opportunity is ideal for individuals who bring both architectural thinking and hands-on experience with GCP services, big data processing, and modern DevOps practices.

Key Responsibilities
• Design and implement scalable, cloud-native data pipelines and solutions using GCP technologies
• Develop ETL/ELT processes to ingest and transform data from legacy and modern platforms
• Collaborate with analytics, AI/ML, and product teams to enable data accessibility and usability
• Analyze large datasets and perform impact assessments across various functional areas
• Build data products (data marts, APIs, views) that power analytical and operational platforms
• Integrate batch and real-time data using tools like Pub/Sub, Kafka, Dataflow, and Cloud Composer
• Operationalize deployments using CI/CD pipelines and infrastructure as code
• Ensure performance tuning, optimization, and scalability of data platforms
• Contribute to best practices in cloud data security, governance, and compliance
• Provide mentorship, guidance, and knowledge-sharing within cross-functional teams

Mandatory Skills
• GCP expertise with hands-on use of services including: BigQuery, Dataflow, Data Fusion, Dataform, Dataproc, Cloud Composer (Airflow), Cloud SQL, Compute Engine, Cloud Functions, Cloud Run, Cloud Build, App Engine
• Strong knowledge of SQL, data modeling, and data architecture
• Minimum 5+ years of experience in SQL and ETL development
• At least 3 years of experience in GCP cloud environments
• Experience with Python, Java, or Apache Beam
• Proficiency in Terraform, Docker, Tekton, and GitHub
• Familiarity with Apache Kafka, Pub/Sub, and microservices architecture
• Understanding of AI/ML integration, data science concepts, and production datasets

Preferred Experience
• Hands-on expertise in container orchestration (e.g., Kubernetes)
• Experience working in regulated environments (e.g., finance, insurance)
• Knowledge of DevOps pipelines, CI/CD, and infrastructure automation
• Background in coaching or mentoring junior data engineers
• Experience with data governance, compliance, and security best practices in the cloud
• Use of project management tools such as JIRA
• Proven ability to work independently in fast-paced or ambiguous environments
• Strong communication and collaboration skills to interact with cross-functional teams

Education Requirements
Required: Bachelor's degree in Computer Science, Information Systems, Engineering, or related field
Preferred: Master's degree or relevant industry certifications (e.g., GCP Data Engineer Certification)

Skills: bigquery, cloud sql, ml, apache beam, app engine, gcp, dataflow, microservices architecture, cloud functions, compute engine, project management tools, data science concepts, security best practices, pub/sub, ci/cd, compliance, cloud run, java, cloud build, jira, data, pipelines, dataproc, sql, tekton, python, github, data modeling, cloud composer, terraform, data fusion, cloud, data architecture, apache kafka, ai/ml integration, docker, data governance, infrastructure automation, dataform
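Since the role combines Dataflow, BigQuery, and Apache Beam, a compact Beam pipeline gives a feel for the day-to-day work. This is a hedged sketch only: the project, bucket, dataset, and CSV layout are invented for illustration.

```python
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

def parse_row(line: str) -> dict:
    # Assumed CSV layout: order_id,customer_id,amount
    order_id, customer_id, amount = line.split(",")
    return {"order_id": order_id, "customer_id": customer_id, "amount": float(amount)}

options = PipelineOptions(
    runner="DataflowRunner",           # or "DirectRunner" for local testing
    project="my-gcp-project",          # hypothetical project / bucket / dataset
    temp_location="gs://my-bucket/tmp",
    region="us-central1",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/orders/*.csv", skip_header_lines=1)
        | "Parse" >> beam.Map(parse_row)
        | "Write" >> beam.io.WriteToBigQuery(
            "my-gcp-project:analytics.orders",
            schema="order_id:STRING,customer_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
            create_disposition=beam.io.BigQueryDisposition.CREATE_IF_NEEDED,
        )
    )
```

The same pipeline code runs locally on the DirectRunner and at scale on Dataflow, which is the usual development loop for this kind of role.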

Posted 1 day ago

Apply

2.0 years

0 Lacs

Pune, Maharashtra, India

On-site

mthree is seeking a Java Developer to join a highly regarded Multinational Investment Bank and Financial Services Company. Job Description: Role: Java Developer Team: Payment Gateway Location: Pune (Hybrid model with 2-3 days per week in the office) Key Responsibility Develop and Maintain Applications: Design, develop, and maintain server-side applications using Java 8 to ensure high performance and responsiveness to requests from the front-end. • Scalability Solutions: Architect and implement scalable solutions for client risk management, ensuring the system can handle large volumes of transactions and data. • Data Streaming and Caching: Utilize Kafka or Redis for efficient data streaming and caching, ensuring real-time data processing and low-latency access. • Multithreading and Synchronization: Implement multithreading and synchronization techniques to enhance application performance and ensure thread safety. • Microservices Development: Develop and deploy microservices using Spring Boot, ensuring modularity and ease of maintenance. • Design Patterns: Apply design patterns to solve complex software design problems, ensuring code reusability and maintainability. • Linux Optimization: Ensure applications are optimized for Linux environments, including performance tuning and troubleshooting. • Collaboration: Collaborate with cross-functional teams, including front-end developers, QA engineers, and product managers, to define, design, and ship new features. • Troubleshooting: Troubleshoot and resolve production issues, ensuring minimal downtime and optimal performance. Requirements: • Educational Background: Bachelor’s degree in computer science, Engineering, or a related field. • Programming Expertise: Proven experience (c2-5 years) in Java 8+ programming, with a strong understanding of object-oriented principles and design. • Data Technologies: Understanding of Kafka or Redis (or similar Cache), including setup, configuration, and optimization. • Concurrency: Experience with multithreading and synchronization, ensuring efficient and safe execution of concurrent processes. • Frameworks: Proficiency in Spring Boot, including developing RESTful APIs and integrating with other services. • Design Patterns: Familiarity with design patterns and their application in solving software design problems. • Operating Systems: Solid understanding of Linux operating systems, including shell scripting and system administration. • Problem-Solving: Excellent problem-solving skills and attention to detail, with the ability to debug and optimize code. • Communication: Strong communication and teamwork skills, with the ability to work effectively in a collaborative environment. Preferred Qualifications: • Industry Experience: Experience in the financial services industry is a plus. • Additional Skills: Knowledge of other programming languages and technologies, such as Python or Scala. • DevOps Practices: Familiarity with DevOps practices and tools, including CI/CD pipelines, containerization (Docker), and orchestration (Kubernetes). Java Developer
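The role itself is Java/Spring Boot, but the Kafka-plus-Redis pattern it describes (stream payment events, cache the latest state for low-latency reads) can be sketched briefly. Below is an illustrative Python version using kafka-python and redis-py; the topic name, key layout, and TTL are assumptions made for the example.

```python
import json
import redis
from kafka import KafkaProducer

# Hypothetical connection details.
producer = KafkaProducer(
    bootstrap_servers=["localhost:9092"],
    value_serializer=lambda obj: json.dumps(obj).encode("utf-8"),
)
cache = redis.Redis(host="localhost", port=6379, db=0)

def publish_payment_event(event: dict) -> None:
    """Stream a payment event and cache the latest status for low-latency reads."""
    producer.send("payment-events", event)            # hypothetical topic
    cache.setex(f"payment:{event['payment_id']}:status", 300, event["status"])

def get_payment_status(payment_id: str) -> str | None:
    """Serve recent status from Redis; fall back to the system of record on a miss."""
    value = cache.get(f"payment:{payment_id}:status")
    return value.decode("utf-8") if value is not None else None

publish_payment_event({"payment_id": "P-1001", "status": "AUTHORIZED", "amount": 250.0})
print(get_payment_status("P-1001"))
```

The short TTL keeps the cache from serving stale risk or payment state for long, which is the usual trade-off in this pattern.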

Posted 1 day ago

Apply

1.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Job Description About Oracle Analytics & Big Data Service: Oracle Analytics is a complete platform that supports every role within analytics, offering cloud-native services or on-premises solutions without compromising security or governance. Our platform delivers a unified system for managing everything from data collection to decision-making, with seamless integration of AI and machine learning to help businesses accelerate productivity and uncover critical insights. Oracle Big Data Service, a part of Oracle Analytics, is a fully managed, automated cloud service designed to help enterprises create scalable Hadoop-based data lakes. The service's scope encompasses not only tight integration with OCI's native infrastructure (security, cloud, storage, etc.) but also deep integration with other relevant cloud-native services in OCI. It includes cloud-native approaches to service-level patching and upgrades, and maintaining high availability of the service in the face of random failures and planned downtimes in the underlying infrastructure (e.g., patching Linux kernels to address a security vulnerability). Developing systems that monitor the service's runtime characteristics, gather telemetry, and act on that telemetry data is also part of the charter. We are interested in experienced engineers with expertise and passion for solving difficult problems in distributed systems and highly available services to join our Oracle Big Data Service team. In this role, you will be instrumental in building, maintaining, and enhancing our managed, cloud-native Big Data service focused on large-scale data processing and analytics. At Oracle, you can help shape, design, and build innovative new systems from the ground up. These are exciting times in our space - we are growing fast, still at an early stage, and working on ambitious new initiatives. Engineers at any level can have significant technical and business impact. Minimum Qualifications: Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Minimum of 1-2 years of experience in software development, with a focus on large-scale distributed systems, cloud services, or Big Data technologies. US passport holders only, as the position requires access to US Government regions. Expertise in coding in Java and Python, with an emphasis on tuning/optimization. Experience with Linux systems administration, troubleshooting, and security best practices in cloud environments. Experience with open-source software in the Big Data ecosystem. Experience at an organization with an operational/dev-ops culture. Solid understanding of networking, storage, and security components related to cloud infrastructure. Solid foundation in data structures, algorithms, and software design with strong analytical and debugging skills. Preferred Qualifications: Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, YARN), Spark, Kafka, Flink, and other big data technologies. Proven expertise in cloud-native architectures and services, preferably within Oracle Cloud Infrastructure (OCI), AWS, Azure, or GCP. In-depth understanding of Java and JVM mechanics. Good problem-solving skills and the ability to work in a fast-paced, agile environment. Responsibilities Key Responsibilities: Participate in development and maintenance of a scalable and secure Hadoop-based data lake service.
Code, integrate, and operationalize open and closed source data ecosystem components for Oracle cloud service offerings Collaborate with cross-functional teams including DevOps, Security, and Product Management to define and execute product roadmaps, service updates, and feature enhancements. Becoming an active member of the Apache open source community when working on open source components Ensure compliance with security protocols and industry best practices when handling large-scale data processing in the cloud. Qualifications Career Level - IC2 About Us As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
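For a flavor of the Hadoop/Spark work described above, here is a small PySpark sketch that reads raw JSON events from a data-lake path and writes a curated daily aggregate. The HDFS paths and field names are hypothetical, not taken from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("data-lake-aggregation")
    .getOrCreate()
)

# Hypothetical HDFS path and schema; a Big Data Service data lake would expose
# similar paths on HDFS or object storage.
events = spark.read.json("hdfs:///datalake/raw/events/*.json")

daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_ts"))
    .groupBy("event_date", "event_type")
    .count()
    .orderBy("event_date")
)

daily_counts.write.mode("overwrite").parquet("hdfs:///datalake/curated/daily_event_counts")
spark.stop()
```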

Posted 1 day ago

Apply

10.0 - 15.0 years

0 Lacs

New Delhi, Delhi, India

On-site

A large AMC specializing in AIF and AMC is hiring for Senior PMS B2B Sales. Location: #Delhi, #Mumbai, #Hyderabad Key Result Areas: • Achieving sales targets for the firm's PMS & AIF products in B2B channels across Wealth, PCG, Bank and IFA channels • Empaneling regional distributors, RIAs and IFAs in the mapped geography • Ability to understand the company's investment approach and product suite and communicate effectively with RMs/TLs across various distribution channels • Ability to organize and address team huddles and client events across channels for driving sales • Developing and maintaining relationships with channel partners by providing them exceptional sales support and individualized customer service. • Working on the long-term and short-term business directions to ensure maximum profitability in line with organizational objectives. • Analyzing the latest marketing trends, tracking competitors' activities, and providing valuable inputs for fine tuning sales and marketing strategies. • Continuously working towards brand building Experience and Behavioral Competencies required: • 10-15 years in the asset management industry, selling PMS/AIF/Alternate products • Strong knowledge of market participants and existing relationships with RMs of channel partners/distributors, especially from PCG, private wealth, private bank, large RIAs and MFDs • High achievement orientation towards the business targets and goals set • Highly motivated self-starter with an entrepreneurial mindset For further details, please call 8286011441

Posted 1 day ago

Apply

6.0 years

0 Lacs

Delhi, India

On-site

About the Role We’re looking for top-tier AI/ML Engineers with 6+ years of experience to join our fast-paced and innovative team. If you thrive at the intersection of GenAI, Machine Learning, MLOps, and application development, we want to hear from you. You’ll have the opportunity to work on high-impact GenAI applications and build scalable systems that solve real business problems. Key Responsibilities Design, develop, and deploy GenAI applications using techniques like RAG (Retrieval Augmented Generation), prompt engineering, model evaluation, and LLM integration. Architect and build production-grade Python applications using frameworks such as FastAPI or Flask. Implement gRPC services, event-driven systems (Kafka, PubSub), and CI/CD pipelines for scalable deployment. Collaborate with cross-functional teams to frame business problems as ML use-cases — regression, classification, ranking, forecasting, and anomaly detection. Own end-to-end ML pipeline development: data preprocessing, feature engineering, model training/inference, deployment, and monitoring. Work with tools such as Airflow, Dagster, SageMaker, and MLflow to operationalize and orchestrate pipelines. Ensure model evaluation, A/B testing, and hyperparameter tuning is done rigorously for production systems. Must-Have Skills Hands-on experience with GenAI/LLM-based applications – RAG, Evals, vector stores, embeddings. Strong backend engineering using Python, FastAPI/Flask, gRPC, and event-driven architectures. Experience with CI/CD, infrastructure, containerization, and cloud deployment (AWS, GCP, or Azure). Proficient in ML best practices: feature selection, hyperparameter tuning, A/B testing, model explainability. Proven experience in batch data pipelines and training/inference orchestration. Familiarity with tools like Airflow/Dagster, SageMaker, and data pipeline architecture.
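To make the RAG + FastAPI combination above concrete, here is a deliberately minimal sketch: a toy in-memory "vector store", a placeholder embedding function, and a stubbed LLM call behind a FastAPI endpoint. Everything here (the corpus, the hash-based embedding, call_llm) is illustrative rather than a production stack, which would use a real embedding model and vector database.

```python
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Toy corpus; in practice documents would live in a vector store.
DOCUMENTS = [
    "Refunds are processed within 5 business days.",
    "Premium subscribers get 24/7 support.",
    "Invoices are generated on the first day of each month.",
]

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hashed bag of words. A real system would call an
    embedding model (e.g. a sentence-transformer or a hosted embeddings API)."""
    vec = np.zeros(256)
    for token in text.lower().split():
        vec[hash(token) % 256] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

DOC_VECTORS = np.stack([embed(d) for d in DOCUMENTS])

def call_llm(prompt: str) -> str:
    """Stub for an LLM call; swap in the provider SDK of choice."""
    return f"(model answer based on a prompt of {len(prompt)} chars)"

class Question(BaseModel):
    text: str
    top_k: int = 2

@app.post("/ask")
def ask(q: Question) -> dict:
    scores = DOC_VECTORS @ embed(q.text)              # cosine similarity (unit-norm vectors)
    top_idx = np.argsort(scores)[::-1][: q.top_k]
    context = "\n".join(DOCUMENTS[i] for i in top_idx)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {q.text}"
    return {"context": context, "answer": call_llm(prompt)}
```

The retrieve-then-generate shape shown here is the part that scales up: swap the embedding, store, and LLM for production components and the endpoint contract stays the same.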

Posted 1 day ago

Apply

15.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

#HiringAlert Job Role: Principal Java Engineer Location: Hybrid – Kolkata (3–4 days per week in office) Industry: Gaming / Real-time Systems / Technology Employment Type: Full-Time Salary: Based on experience and aligned with industry standards. About the Role We’re a fast-growing start-up in the gaming industry, building high-performance, real-time platforms that power immersive digital experiences. We’re looking for a Principal Java Engineer to lead the design and development of scalable backend systems that support live, data-intensive applications. This is a hybrid role based in Kolkata, ideal for someone who thrives on solving technical challenges, enjoys taking ownership, and wants to build great software in a dynamic, informal, and high-energy environment. Key Responsibilities Design and develop scalable, resilient backend systems using Java (17+) and Spring Boot Architect APIs, microservices, and real-time backend components for gaming platforms Own backend infrastructure, deployment pipelines, monitoring, and system performance Collaborate with product and delivery teams to translate ideas into production-ready features Take full ownership of backend architecture — from planning to delivery and iteration Continuously improve code quality, engineering practices, and overall system design Required Skills & Experience 10–15 years of experience in backend engineering with strong expertise in Java (preferably 17+) and Spring Boot Proven experience building high-performance, distributed systems at scale Hands-on with cloud platforms (AWS, GCP, or Azure), Docker, and Kubernetes Strong understanding of SQL and NoSQL databases, caching (e.g., Redis), and messaging systems (Kafka, RabbitMQ) Solid skills in debugging, performance tuning, and system optimization Ability to work independently, make pragmatic decisions, and collaborate in a hybrid team setup Good to Have Experience in gaming, real-time platforms, or multiplayer systems Familiarity with Web Sockets, telemetry pipelines, or event-driven architecture Exposure to CI/CD pipelines, infrastructure as code, and observability tools Why Join Us? Work in a creative, fast-paced domain that blends engineering depth with product excitement Flat structure and high trust — focus on outcomes, not formalities Visible impact — everything you build will be used by real players in real time Informal, collaborative culture — where we take our work seriously, but not ourselves Flexible hybrid setup — 3 to 4 days a week in-office, with room for focused work and team alignment How to Apply Send your resume or portfolio to : talent@projectpietech.com We’d love to hear from engineers who are passionate about solving hard problems and building something exciting from the ground up. #HiringNow#JavaJobs#BackendEngineer#PrincipalEngineer#JavaDeveloper#SpringBoot #SoftwareEngineering#GamingIndustryJobs#RealTimeSystems#TechJobsIndia#KolkataJobs #EngineeringLeadership#MicroservicesArchitecture#CloudEngineering#JoinOurTeam #StartupJobs#ProjectPieTechnologies#JobAlert#NowHiring#WorkWithUs#CareerOpportunity #HiringEngineersJavaDeveloper#BackendDeveloper#Microservices#SoftwareArchitecture #CloudComputing#DistributedSystems#Kubernetes#Docker#Kafka#AWSJobs#DevOpsEngineering #NoSQL#WebSockets#RealTimeData#LifeAtProjectPie#JoinOurTeam#TechLeadership #InnovationDriven#BuildTheFuture#MakeAnImpact#EngineerTheFuture#TeamCulture#FlatHierarchy
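Redis appears in the requirements above, and a classic real-time gaming use of it is a sorted-set leaderboard. A brief Python sketch follows; the key name and scores are hypothetical.

```python
import redis

r = redis.Redis(host="localhost", port=6379, db=0, decode_responses=True)

LEADERBOARD = "leaderboard:weekly"   # hypothetical key

def record_score(player_id: str, points: float) -> None:
    """Accumulate a player's score; ZINCRBY keeps the set ordered for reads."""
    r.zincrby(LEADERBOARD, points, player_id)

def top_players(n: int = 10) -> list[tuple[str, float]]:
    """Fetch the top-N players with scores, highest first."""
    return r.zrevrange(LEADERBOARD, 0, n - 1, withscores=True)

record_score("player-42", 120)
record_score("player-7", 95)
record_score("player-42", 30)
print(top_players(5))   # e.g. [('player-42', 150.0), ('player-7', 95.0)]
```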

Posted 1 day ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

The Testing, Integration and Design role at Adani in Hyderabad requires a skilled professional with a minimum of 7 years of industry experience. Electro-optics and infrared (EOIR) professionals play a crucial role in designing, developing, testing, and maintaining electro-optical systems. The role also ensures system performance requirements are met, integrates technical parameters, and ensures compatibility across physical, functional, and program interfaces. Key Responsibilities of the Role: Assembly and testing of the Electro-Optical Infrared (EO/IR) system. Development of the complete EOIR system. Sensor bore-sighting. Fine tuning of sensors and lenses. Environmental testing of the EOIR system. Field testing of the system. Hands-on experience with laser test equipment such as power meters, M2 measurement units, beam profilers, and thermal imagers. Qualifications: Bachelor's degree in Electronics/Mechanical/Opto-electronics Engineering. Experience: Work experience on EOIR systems for a minimum of 5 years. Knowledge of Electro-Optical Infrared (EO/IR) assembly. Knowledge of sensor bore-sighting. Knowledge of fine tuning of sensors and lenses. Hands-on experience with laser test equipment such as power meters, M2 measurement units, beam profilers, and thermal imagers.

Posted 1 day ago

Apply

4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Profile Description We’re seeking someone to join our team as a System Operations Support Engineer. The role bridges the gap between Operations and Engineering: the candidate fills the role of a Lead Operations Engineer and needs to perform the tasks required of both operations and engineering. WM_Technology Wealth Management Technology is responsible for the design, development, delivery, and support of the technical solutions behind the products and services used by the Morgan Stanley Wealth Management Business. Practice areas include: Analytics, Intelligence, & Data Technology (AIDT), Client Platforms, Core Technology Services (CTS), Financial Advisor Platforms, Global Banking Technology (GBT), Investment Solutions Technology (IST), Institutional Wealth and Corporate Solutions Technology (IWCST), Technology Delivery Management (TDM), User Experience (UX), and the CAO team. Core Platform Services Core Platform Services is responsible for driving Resiliency, Automation, Performance, Stability, and Efficiency across Wealth Management Technology. Cloud & Infrastructure Engineering This is an Associate position that manages and optimizes technical infrastructure and ensures the seamless operation of IT systems to support business needs effectively. Morgan Stanley is an industry leader in financial services, known for mobilizing capital to help governments, corporations, institutions, and individuals around the world achieve their financial goals. At Morgan Stanley India, we support the Firm’s global businesses, with critical presence across Institutional Securities, Wealth Management, and Investment Management, as well as in the Firm’s infrastructure functions of Technology, Operations, Finance, Risk Management, Legal and Corporate & Enterprise Services. Morgan Stanley has been rooted in India since 1993, with campuses in both Mumbai and Bengaluru. We empower our multi-faceted and talented teams to advance their careers and make a global impact on the business, and there’s ample opportunity to move across the businesses for those who show passion and grit in their work. Interested in joining a team that’s eager to create, innovate and make an impact on the world? Read on… What You’ll Do In The Role Manage the implementation of application and infrastructure servers and ensure processes and tools are put in place to effectively manage and operate them throughout the system lifecycle. Proactively monitor applications and computer systems, taking appropriate action to ensure acceptable performance and availability. Handle batch escalations and address functional and performance issues as needed. Develop automation to improve manual administration and maintenance of Linux systems. Perform tuning of servers at both the operating system and application level for optimal performance. Play a lead role in working with other technology staff members to troubleshoot complex problems as they relate to network systems and interoperability with other middleware/platforms as appropriate. Create and document standard configurations and BCPs to ensure all systems are compliant with both corporate security standards and external audit guidelines. Engage in personal professional development to stay current with the demands of the position. Participate in off-hours support as required. Develop and promote standard operating procedures. Conduct routine hardware and software health checks to ensure compliance with established standards, policies, and configuration guidelines. 
Develop and maintain a comprehensive operating system hardware and software database/library of supporting documentation. Produce reports on activities, make suggestions for improvements in the system, and provide technical expertise to all other members of sysops support staff as well as the user community. Mentor and train junior administration and support resources. What You’ll Bring To The Role At least 4 years’ relevant experience would generally be expected to find the skills required for this role. 5+ years’ experience in supporting and administering Linux/Unix environments. 5+ years of multi-tier application support. Bachelor’s degree in Computer Science or related job experience. Experience with Cloud operations and supporting enterprise applications in a hybrid cloud environment. RHCT (Red Hat Certified Technician). 5+ years of experience in the following areas: complex troubleshooting across a multi-tier application; creating and maintaining complex scripts in Perl and ksh; supporting DNS, DHCP, NFS, application load balancing, and networking; solid technical understanding of web tools including Apache, Java, and Tomcat; excellent UNIX command-line experience. What You Can Expect From Morgan Stanley We are committed to maintaining the first-class service and high standard of excellence that have defined Morgan Stanley for over 89 years. Our values - putting clients first, doing the right thing, leading with exceptional ideas, committing to diversity and inclusion, and giving back - aren’t just beliefs, they guide the decisions we make every day to do what's best for our clients, communities and more than 80,000 employees in 1,200 offices across 42 countries. At Morgan Stanley, you’ll find an opportunity to work alongside the best and the brightest, in an environment where you are supported and empowered. Our teams are relentless collaborators and creative thinkers, fueled by their diverse backgrounds and experiences. We are proud to support our employees and their families at every point along their work-life journey, offering some of the most attractive and comprehensive employee benefits and perks in the industry. There’s also ample opportunity to move about the business for those who show passion and grit in their work. To learn more about our offices across the globe, please copy and paste https://www.morganstanley.com/about-us/global-offices into your browser. Morgan Stanley is an equal opportunities employer. We work to provide a supportive and inclusive environment where all individuals can maximize their full potential. Our skilled and creative workforce is comprised of individuals drawn from a broad cross section of the global communities in which we operate and who reflect a variety of backgrounds, talents, perspectives, and experiences. Our strong commitment to a culture of inclusion is evident through our constant focus on recruiting, developing, and advancing individuals based on their skills and talents.
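Much of the role above is about automating routine Linux health checks. A minimal Python sketch of such a check is below; the thresholds and watched process names are placeholders, and a real version would feed results into the firm's monitoring and alerting tooling rather than printing them.

```python
#!/usr/bin/env python3
"""Minimal health-check sketch: disk, load, and process presence on a Linux host."""
import os
import shutil
import subprocess

DISK_THRESHOLD_PCT = 85                   # hypothetical alerting thresholds
LOAD_THRESHOLD = 8.0
REQUIRED_PROCESSES = ["httpd", "java"]    # e.g. Apache and Tomcat workers

def check_disk(path: str = "/") -> str:
    usage = shutil.disk_usage(path)
    pct = usage.used / usage.total * 100
    status = "OK" if pct < DISK_THRESHOLD_PCT else "ALERT"
    return f"disk {path}: {pct:.1f}% used [{status}]"

def check_load() -> str:
    one_min, _, _ = os.getloadavg()
    status = "OK" if one_min < LOAD_THRESHOLD else "ALERT"
    return f"load(1m): {one_min:.2f} [{status}]"

def check_processes() -> list[str]:
    results = []
    for name in REQUIRED_PROCESSES:
        rc = subprocess.run(["pgrep", "-x", name], capture_output=True).returncode
        results.append(f"process {name}: {'running [OK]' if rc == 0 else 'missing [ALERT]'}")
    return results

if __name__ == "__main__":
    for line in [check_disk(), check_load(), *check_processes()]:
        print(line)
```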

Posted 1 day ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

It's fun to work in a company where people truly BELIEVE in what they're doing! We're committed to bringing passion and customer focus to the business. Job Description This role requires working from our local Hyderabad office 2-3x a week. ABOUT THE ROLE: We are seeking a talented individual to join our team as a Java Backend Developer. The Java Backend Developer is self-driven and has a holistic, big picture mindset in developing enterprise solutions. In this role, he/she will be responsible for designing modern domain-driven, event-driven Microservices architecture to host on public Cloud platforms (AWS) and integration with modern technologies such as Kafka for event management/streaming, Docker & Kubernetes for Containerization. You will also be responsible for developing and supporting applications in Billing, Collections, and Payment Gateway within the commerce and club management Platform include assisting with the support of existing services as well as designing and implementing new business solutions, application deployment utilizing a thorough understanding of applicable technology, tools, and existing designs. The work involves working with product teams, technical leads, business analysts, DBAs, infrastructure, and other cross-department teams to evaluate business needs and provide end-to-end technical solutions. WHAT YOU’LL DO: Acting as a Java Backend Developer in a development team; collaborate with other team members and contribute in all phases of Software Development Life Cycle (SDLC) Applying Domain Driven Design, Object Oriented Design, and proven Design Patterns Hand on coding and development following Secured Coding guidelines and Test-Driven Development Working with QA teams to conduct integrated (application and database) stress testing, performance analysis and tuning Support systems testing and migration of platforms and applications to production Making enhancements to existing web applications built using Java and Spring frameworks Ensure quality, security and compliance requirements are met Act as an escalation point for application support and troubleshooting Have passion for hands-on coding, putting the customer first, and delivering an exceptional and reliable product to ABC Fitness’s customers Taking up tooling, integrating with other applications, piloting new technology Proof of Concepts and leveraging the outcomes in the ongoing solution initiatives Curious to see where technology and the industry is going and constantly strive to keep up through personal projects Strong analytical skills with high attention to detail, accuracy, and expert in debugging issue, and root cause analysis Strong organizational, multi-tasking, and prioritizing skills WHAT YOU’LL NEED: Computer Science degree or equivalent work experience Work experience as a senior developer in a team environment 3+ years of application development and implementation experience 3+ years of Java experience 3+ years of Spring experience Work experience in an Agile development scrum team space Work experience creating or maintaining RESTful or SOAP web services Work Experience creating and maintaining Cloud enabled/cloud native distributed applications Knowledge of API Gateways and integration frameworks, containers, and container orchestration Knowledge and experience with system application troubleshooting, and quality assurance application testing A focus on delivering outcomes to customers, which encompass designing, coding, ensuring quality, and delivering changes to our customers AND IT’S 
GREAT TO HAVE: 2+ years of SQL experience Billing or Payment Processing industry experience Knowledge and understanding of DevOps principles Knowledge and understanding of Cloud computing, PaaS design principles and micro services and containers Knowledge and understanding of application or software security such as: web application penetration testing, secure code review, secure static code analysis Ability to simultaneously lead multiple projects Good verbal, written, and interpersonal communication skills WHAT’S IN IT FOR YOU: Purpose led company with a Values focused culture – Best Life, One Team, Growth Mindset Time Off – competitive PTO plans with 15 Earned accrued leave, 12 days Sick leave, and 12 days Casual leave per year 11 Holidays plus 4 Days of Disconnect – once a quarter, we take a collective breather and enjoy a day off together around the globe. #oneteam Group Mediclaim insurance coverage of INR 500,000 for employee + spouse, 2 kids, and parents or parent-in-laws, and including EAP counseling Life Insurance and Personal Accident Insurance Best Life Perk – we are committed to meeting you wherever you are in your fitness journey with a quarterly reimbursement Premium Calm App – enjoy tranquility with a Calm App subscription for you and up to 4 dependents over the age of 16 Support for working women with financial aid towards crèche facility, ensuring a safe and nurturing environment for their little ones while they focus on their careers. We’re committed to diversity and passion, and encourage you to apply, even if you don’t demonstrate all the listed skillsets! ABC’S COMMITMENT TO DIVERSITY, EQUALITY, BELONGING AND INCLUSION: ABC is an equal opportunity employer. We celebrate diversity and are committed to creating an inclusive environment for all employees. We are intentional about creating an environment where employees, our clients and other stakeholders feel valued and inspired to reach their full potential and make authentic connections. We foster a workplace culture that embraces each person’s diversity, including the extent to which they are similar or different. ABC leaders believe that an equitable and inclusive culture is not only the right thing to do, it is a business imperative. Read more about our commitment to diversity, equality, belonging and inclusion at abcfitness.com ABOUT ABC: ABC Fitness (abcfitness.com) is the premier provider of software and related services for the fitness industry and has built a reputation for excellence in support for clubs and their members. ABC is the trusted provider to boost performance and create a total fitness experience for over 41 million members of clubs of all sizes whether a multi-location chain, franchise or an independent gym. Founded in 1981, ABC helps over 31,000 gyms and health clubs globally perform better and more profitably offering a comprehensive SaaS club management solution that enables club operators to achieve optimal performance. ABC Fitness is a Thoma Bravo portfolio company, a private equity firm focused on investing in software and technology companies (thomabravo.com). If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us!

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

Remote

At Viamagus, we are looking for a Technical Engineering Lead – ML/AI who combines deep machine learning expertise with a hands-on, execution-oriented approach to leadership. You will lead by owning architecture, driving complex AI initiatives, and ensuring your team achieves ambitious goals. We’re seeking someone proactive, curious, and passionate about solving real-world problems using AI , including cutting-edge developments in Generative AI, LLMs, and intelligent agents. Key Responsibilities Drive the design, development, and deployment of models, including both traditional and generative (LLMs, agents, fine-tuning). Lead by example —contribute hands-on to model development, data pipelines, and solution implementation. Build and optimize data training and evaluation pipelines for large-scale, high-volume datasets. Translate complex business problems into scalable AI solutions, collaborating across teams. Use creative problem-solving techniques to address challenging engineering issues. Leverage Generative AI to build intelligent systems using LLMs , including prompt engineering, fine-tuning, and multi-agent systems. Develop and maintain robust data pipelines —covering ingestion, transformation, and storage (e.g., data lakes). Ensure clean, reliable input data through preprocessing, quality analysis, and feature engineering. Write and maintain high-quality Python code in production environments. Monitor, evaluate, and optimize model performance post-deployment. Must-Haves 4–7 years of hands-on experience in machine learning and AI, including leadership in technical execution. Strong proficiency in Python . Proven ability to build end-to-end ML pipelines , including training and deployment at scale. Deep understanding and hands-on experience with LLMs, prompt tuning, fine-tuning, and AI agents. Experience working with large datasets and performance-optimized data pipelines. Strong problem-solving ability and a passion for experimentation and learning. Good to Have Familiarity with ClickHouse or other analytical database systems. Exposure to tools such as FastAPI , MLFlow , Meltano , Prefect , or similar. Experience with MLOps and CI/CD for ML systems. Formal foundational education in ML/AI (e.g., academic coursework, certification, or specialization). Educational background in Mathematics, Statistics , or a related quantitative field. Mode: Full-time Location: Remote
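MLflow is listed among the good-to-have tools for tracking and operationalizing experiments. As a hedged illustration of that pattern, here is a short sketch that logs parameters, a metric, and a model for one training run; the experiment name and hyperparameters are arbitrary.

```python
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic data stands in for a real training set.
X, y = make_classification(n_samples=2_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

mlflow.set_experiment("churn-baseline")      # hypothetical experiment name

with mlflow.start_run():
    params = {"n_estimators": 200, "max_depth": 8}
    model = RandomForestClassifier(**params, random_state=42).fit(X_train, y_train)

    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

Once runs are tracked like this, comparing hyperparameter sweeps and promoting a model to deployment becomes a query against the tracking server rather than a spreadsheet exercise.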

Posted 1 day ago

Apply

7.0 years

0 Lacs

India

On-site

Welcome to Veradigm! Our Mission is to be the most trusted provider of innovative solutions that empower all stakeholders across the healthcare continuum to deliver world-class outcomes. Our Vision is a Connected Community of Health that spans continents and borders. With the largest community of clients in healthcare, Veradigm is able to deliver an integrated platform of clinical, financial, connectivity and information solutions to facilitate enhanced collaboration and exchange of critical patient information. Job Description We are looking for a highly skilled and motivated Senior Software Engineer with 4–7 years of hands-on experience in designing and developing enterprise-grade applications using .NET technologies . The ideal candidate should have strong expertise in C#, .NET Framework, and .NET Core , with a deep understanding of Object-Oriented Programming (OOP) principles. Proficiency in SQL Server for database design, development, and performance tuning is essential. The role requires someone who can contribute to high-quality software delivery in a collaborative and fast-paced environment. Responsibilities .NET & C# Expertise: Strong proficiency in C# and deep understanding of the .NET ecosystem, including .NET Framework and .NET Core. Proven experience in developing robust, high-performance, and scalable applications using object-oriented principles OOP Principles: Strong understanding of Object-Oriented Programming concepts and principles, applying them to design and implement efficient solutions. Algorithms & Data Structures: Strong understanding of core data structures such as arrays, lists, stacks, queues, trees, and hash maps. Familiarity with common algorithms including sorting, searching, recursion, and graph traversal is essential. SQL Server: Hands-on experience in SQL Server, including database design, optimization, and data manipulation. Hands-on with Modern .NET Stack: Must have hands-on experience with recent .NET technologies such as ASP.NET Core, Entity Framework Core, ASP.NET Web APIs, WPF, and .NET Framework 4.x. Agile and SAFe Knowledge: Experience working in Agile and SAFe environments, with a clear understanding of Agile methodologies. Azure Fundamentals: Familiarity with Azure fundamentals, and possession of the AZ-900 certification is considered a plus. Analytical and Problem-Solving Skills: Excellent analytical skills and the ability to solve complex problems efficiently. Communication Skills: Strong written and verbal communication skills to effectively collaborate with cross-functional teams. Industry Knowledge: Knowledge of Healthcare Domain and Revenue Cycle processes would be an added advantage. Qualifications Bachelor's Degree or equivalent Technical / Business experience. 4-7 years of hands-on experience in software development. We are an Equal Opportunity Employer. No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. Veradigm is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce. Thank you for reviewing this opportunity! Does this look like a great match for your skill set? If so, please scroll down and tell us more about yourself!

Posted 1 day ago

Apply

11.0 years

0 Lacs

India

On-site

Company Description 👋🏼 We're Nagarro. We are a Digital Product Engineering company that is scaling in a big way! We build products, services, and experiences that inspire, excite, and delight. We work at scale — across all devices and digital mediums, and our people exist everywhere in the world (17500+ experts across 39 countries, to be exact). Our work culture is dynamic and non-hierarchical. We're looking for great new colleagues. That's where you come in! Job Description REQUIREMENTS: Total experience 11+years. Strong working experience with architecture and development in Java 8 or higher. Experience with front-end frameworks such as React, Redux, Angular, or Vue. Familiarity with Node.js and modern backend stacks. Deep knowledge of AWS, Azure, or GCP platforms and services. Hands-on experience with CI/CD pipelines, containerization (Docker, Kubernetes), and microservices. Deep understanding of design patterns, data structures, and microservices architecture. Strong knowledge of object-oriented programming, data structures, and algorithms. Experience with scalable system design, performance tuning, and application security. Experience integrating with SAP ERP systems, Net Revenue Management platforms, and O9 Familiarity with data integration patterns, middleware, and message brokers (e.g., Kafka, RabbitMQ). A good understanding of UML and design patterns. Excellent communication and stakeholder management skills. RESPONSIBILITIES: Writing and reviewing great quality code Understanding functional requirements thoroughly and analyzing the client’s needs in the context of the project Envisioning the overall solution for defined functional and non-functional requirements, and being able to define technologies, patterns and frameworks to realize it Determining and implementing design methodologies and tool sets Enabling application development by coordinating requirements, schedules, and activities. Being able to lead/support UAT and production roll outs Creating, understanding and validating WBS and estimated effort for given module/task, and being able to justify it Addressing issues promptly, responding positively to setbacks and challenges with a mindset of continuous improvement Giving constructive feedback to the team members and setting clear expectations. Helping the team in troubleshooting and resolving complex bugs Coming up with solutions to any issue that is raised during code/design review and being able to justify the decision taken Carrying out POCs to make sure that suggested design/technologies meet the requirements. Qualifications Bachelor’s or master’s degree in computer science, Information Technology, or a related field.

Posted 1 day ago

Apply

1.0 - 4.0 years

0 Lacs

Trivandrum, Kerala, India

Remote

Brief Description About the Role: We are looking for a Junior Azure API Management (APIM) Specialist with a strong foundation in .NET development and hands-on experience in Azure services. This is a great opportunity for early-career professionals to gain experience in designing, deploying, and managing APIs in the Microsoft Azure ecosystem. Experience Required: 1 to 4 years (Junior-Level Position) Location: Remote (Preference for candidates based in Kerala, able to travel to our Trivandrum office on short notice if required) Key Responsibilities: Develop, manage, and maintain APIs using Azure API Management (APIM) . Contribute to the full API lifecycle—design, implementation, testing, deployment, and versioning. Collaborate with cross-functional teams to integrate APIs with web applications and backend services developed in .NET . Monitor API performance and contribute to tuning and optimization activities. Apply basic API security principles including API key handling, OAuth, and role-based access control. Work with Azure DevOps for code integration and automated deployments. Write clean, maintainable, and well-documented code in C# / .NET Core . Use SQL for interacting with databases and supporting data-driven services. Required Skills and Qualifications: 1–4 years of professional experience in application development. At least 1 year of hands-on experience in Azure API Management (APIM) . Strong programming skills in .NET / .NET Core / C# . Working knowledge of Azure DevOps , CI/CD pipelines, and Git. Good understanding of RESTful API design and backend integration patterns. Proficiency in SQL and understanding of relational database concepts. Good communication skills and a collaborative attitude. Bachelor’s degree in Computer Science, Information Technology, or a related field. Preferred (Good to Have): Exposure to Azure Logic Apps , Functions , and Data Factory . Familiarity with API security concepts such as JWT , OAuth2 , and rate limiting. Experience working in Agile/Scrum environments.
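The core of the role is fronting backend services with Azure API Management. Although the stack is .NET, a short Python snippet shows what a client call through an APIM gateway typically involves: a subscription key plus an OAuth bearer token validated by gateway policy. The gateway URL, path, and credentials below are placeholders.

```python
import requests

# Hypothetical gateway URL, API path, and credentials.
APIM_BASE = "https://contoso-apim.azure-api.net/orders/v1"
SUBSCRIPTION_KEY = "<subscription-key>"
ACCESS_TOKEN = "<oauth2-access-token>"       # e.g. obtained from the identity provider

def get_order(order_id: str) -> dict:
    response = requests.get(
        f"{APIM_BASE}/orders/{order_id}",
        headers={
            # Standard APIM subscription-key header; the bearer token would be
            # checked by a validate-jwt policy on the gateway.
            "Ocp-Apim-Subscription-Key": SUBSCRIPTION_KEY,
            "Authorization": f"Bearer {ACCESS_TOKEN}",
        },
        timeout=10,
    )
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    print(get_order("12345"))
```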

Posted 1 day ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Job Description Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Consultant Specialist. In this role, you will: Design & Build high quality APIs that are scalable and global at the core Build custom policies, frameworks/components, error handling, transaction tracing Setup Exchange catalogue orgs and assets in Any Point Platform. Setup security models and policies for consumers and producers of API and catalog assets Work across various platforms and the associated stakeholders’/business users Design, develop, test and implement technical solutions based on business requirements and strategic direction. Collaborate with other Development teams, Enterprise Architecture and Support teams to design, develop, test and maintain the various platforms and their integration with other systems Communicate with technical and non-technical groups on a regular basis as part of product/project support Responsible to support production releases/support on need basis. Peer Review, CI/CD pipeline implementation and Service monitoring. ITSO Delegate for the application/s. Should have flexible in working hours, ready to work in shift and On call once in a month 24*7 one week on-call production support including weekends. Requirements To be successful in this role, you should meet the following requirements: Person should have more than 8 years of experience in s/w development, design using java/j2ee technologies with hands on experience on complete spring stack and API implementation on Cloud (GCP/AWS) Should have hands on experience on K8 (Kubernetes) / DOCKERS. Experience in MQ, Sonar, API Gateway Experience in developing large-scale integration and API solutions Experience in working with API Management, ARM, Exchange and Access Management modules Experience in understanding and analyzing complex business requirements and carry out the system design accordingly. Extensive knowledge on building REST based APIs. Good Knowledge on API documentation (RAML/Swagger/OAS) Extensive knowledge on micro-services architecture with hands-on experience in implementing the same using Spring-boot. Good knowledge on security, scaling, performance tuning aspects of micro services Good understanding of SQL/NoSQL Databases. Good understanding of Messaging platform like Kafka, PubSub etc. Optional understanding of Cloud platforms. Fair understanding of DevOps concepts Experience in creating custom policies and custom connectors Excellent verbal and written communication skills, both technical and non-technical. Work on POCs Experience to handle the support projects. Spring boot, ORM tool knowledge (e.g. Hibernate), Web Services You’ll achieve more when you join HSBC. www.hsbc.com/careers HSBC is committed to building a culture where all employees are valued, respected and opinions count. 
We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India

Posted 1 day ago

Apply

4.0 - 10.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Key Areas of Responsibilities Develop and maintain web applications using Angular, React, or Vue frameworks. Implement responsive designs for better usability. Ensure cross-browser compatibility for a seamless user experience. Have extensive experience with Core Java, OOP principles, and Data Structures/Algorithms. Possess proficiency with Oracle databases and SQL. Collaborate with cross-functional teams to understand requirements and translate them into functional applications. Work across global teams to ensure alignment on project goals and deliverables. Optimize applications for fast loading times and performance. Identify bottlenecks and develop solutions to enhance performance. Build automated tests for UI components to improve code coverage. Participate in code reviews to maintain high code quality standards. Stay updated on the latest front-end trends and technologies. Recommend and implement improvements to project architecture and workflows. Requirements: 4-10 years of professional experience in JavaScript, Angular/React UI frameworks, and backend technologies such as Core Java, Spring Boot, OOP, and Chronicle Map. Strong foundational knowledge in Data Structures, Design Patterns, Networking, and Operating Systems. Hands-on experience with automated testing tools like Jest, Jasmine, or similar frameworks. GC and performance tuning expertise is a plus; experience with low latency and real-time applications is beneficial. Capable of working efficiently in fast-paced, dynamic environments with minimal supervision; committed to self-learning new technologies and best practices. Proficient with development tools such as Git, Maven, and IntelliJ; experienced in practicing SDLC and managing releases.

Posted 1 day ago

Apply

10.0 years

0 Lacs

Gurgaon, Haryana, India

On-site

Senior Data Specialist Primary Skills About Brillio: Brillio is one of the fastest growing digital technology service providers and a partner of choice for many Fortune 1000 companies seeking to turn disruption into a competitive advantage through innovative digital adoption. Brillio, renowned for its world-class professionals, referred to as "Brillians", distinguishes itself through their capacity to seamlessly integrate cutting-edge digital and design thinking skills with an unwavering dedication to client satisfaction. Brillio takes pride in its status as an employer of choice, consistently attracting the most exceptional and talented individuals due to its unwavering emphasis on contemporary, groundbreaking technologies, and exclusive digital projects. Brillio's relentless commitment to providing an exceptional experience to its Brillians and nurturing their full potential consistently garners them the Great Place to Work® certification year after year. Lead and manage a team of ETL developers and deliver high-quality deliverables Collaborate with business and technical stakeholders to gather requirements and translate them into technical specifications Hands-on expertise in Advanced SQL for data extraction, transformation, and validation tasks Strong understanding of Data Warehousing concepts. Experience working with Snowflake for building and managing cloud-based data platforms Design, develop, and optimize ETL workflows using Informatica Intelligent Cloud Services (IICS) Ensure data quality, performance tuning, and best practices in ETL processes Provide technical guidance, mentoring, and code reviews for the ETL team Manage task prioritization, timelines, and issue resolution in a fast-paced environment Coordinate deployments and support activities across development, QA, and production environments Having Python Knowledge is a plus Specialization ETL Specialization: Data Specialist Job requirements Role – Data Specialist Years of Experience –10+ years Strong hands-on in Informatica Advanced SQL , PowerCenter ,and IICS Strong knowledge on Data model, Data warehouse, Worked in ETL job optimization techniques(mainly Pushdown optimization) Worked on ETL error troubleshooting. Worked on Shell script. Worked on Snowflake. Expertise on SQL / Advanced SQL queries. A good team player/lead and quick learner Python knowledge is added advantage Tech skills – Informatica Power Center, IICS, Snowflake, Advanced SQL, Shell Scripting
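Since the role pairs Snowflake with optional Python, here is a small sketch of the kind of row-count validation an ETL lead might script against a staging table using the Snowflake Python connector. The account, warehouse, database, and table names are hypothetical.

```python
import snowflake.connector

# Hypothetical account/warehouse/database names.
conn = snowflake.connector.connect(
    account="xy12345.ap-south-1",
    user="etl_service",
    password="********",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

ROW_COUNT_CHECK = """
    SELECT load_date, COUNT(*) AS row_cnt
    FROM   STAGING.ORDERS
    GROUP  BY load_date
    ORDER  BY load_date DESC
    LIMIT  7
"""

try:
    cur = conn.cursor()
    # Iterate the last week of loads and flag anything that looks empty.
    for load_date, row_cnt in cur.execute(ROW_COUNT_CHECK):
        print(f"{load_date}: {row_cnt} rows")
finally:
    conn.close()
```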

Posted 1 day ago

Apply

0 years

25 - 32 Lacs

Gurgaon, Haryana, India

On-site

Company: Sun King Website: Visit Website Business Type: Enterprise Company Type: Product & Service Business Model: Others Funding Stage: Series D+ Industry: Renewable Energy Salary Range: ₹ 25-32 Lacs PA Job Description About the role: Sun King is looking for a self-driven Infrastructure Engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What You Would Be Expected To Do Work with engineering, automation, and data teams on various infrastructure requirements. Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Manage AWS services for multiple teams. Manage custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deploy and manage Kubernetes resources. Deploy and manage custom metrics exporters, trace data, and custom application metrics; design dashboards and query metrics from multiple resources as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deploy and manage critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, building observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You Might Be a Strong Candidate If You Have/Are Hands-on experience with Docker or any other container runtime environment, and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). Hands-on scripting and automation experience (Python, Bash), and experience debugging and troubleshooting Linux environments and cloud-native deployments. Experience building CI/CD pipelines, with familiarity with monitoring and alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks. Good networking fundamentals - SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good To Have Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside SecOps engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of OpenTelemetry, central logging, and monitoring systems.
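As a rough sketch of the Python automation plus Prometheus familiarity this posting asks for, the snippet below polls Prometheus's HTTP query API for a 5xx error rate. The Prometheus URL, PromQL expression, and threshold are illustrative assumptions, not Sun King's actual setup.

```python
# Hypothetical SLO-style check against Prometheus's /api/v1/query endpoint.
import requests

PROMETHEUS_URL = "http://prometheus.monitoring.svc:9090"              # placeholder
QUERY = 'sum(rate(http_requests_total{status=~"5.."}[5m]))'           # example PromQL

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query",
    params={"query": QUERY},
    timeout=10,
)
resp.raise_for_status()
result = resp.json()["data"]["result"]

# Flag locally if the 5xx rate exceeds an arbitrary threshold (1 req/s here).
error_rate = float(result[0]["value"][1]) if result else 0.0
if error_rate > 1.0:
    print(f"High 5xx rate detected: {error_rate:.2f} req/s")
else:
    print("Error rate within threshold")
```

A check like this would normally live behind an alerting rule rather than a script, but it shows the query path an incident-response automation might use.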

Posted 1 day ago

Apply

0 years

25 - 30 Lacs

Gurgaon, Haryana, India

On-site

Company: Sun King Website: Visit Website Business Type: Enterprise Company Type: Product & Service Business Model: Others Funding Stage: Series D+ Industry: Renewable Energy Salary Range: ₹ 25-30 Lacs PA Job Description About the role: Sun King is looking for a self-driven Infrastructure Engineer who is comfortable working in a fast-paced startup environment and balancing the needs of multiple development teams and systems. You will work on improving our current IaC, observability stack, and incident response processes. You will work with the data science, analytics, and engineering teams to build optimized CI/CD pipelines, scalable AWS infrastructure, and Kubernetes deployments. What You Would Be Expected To Do Work with engineering, automation, and data teams on various infrastructure requirements. Design modular and efficient GitOps CI/CD pipelines, agnostic to the underlying platform. Manage AWS services for multiple teams. Manage custom data store deployments like sharded MongoDB clusters, Elasticsearch clusters, and upcoming services. Deploy and manage Kubernetes resources. Deploy and manage custom metrics exporters, trace data, and custom application metrics; design dashboards and query metrics from multiple resources as an end-to-end observability stack solution. Set up incident response services and design effective processes. Deploy and manage critical platform services like OPA and Keycloak for IAM. Advocate best practices for high availability and scalability when designing AWS infrastructure, building observability dashboards, implementing IaC, deploying to Kubernetes, and designing GitOps CI/CD pipelines. You Might Be a Strong Candidate If You Have/Are Hands-on experience with Docker or any other container runtime environment, and Linux with the ability to perform basic administrative tasks. Experience working with web servers (nginx, apache) and cloud providers (preferably AWS). Hands-on scripting and automation experience (Python, Bash), and experience debugging and troubleshooting Linux environments and cloud-native deployments. Experience building CI/CD pipelines, with familiarity with monitoring and alerting systems (Grafana, Prometheus, and exporters). Knowledge of web architecture, distributed systems, and single points of failure. Familiarity with cloud-native deployments and concepts like high availability, scalability, and bottlenecks. Good networking fundamentals - SSH, DNS, TCP/IP, HTTP, SSL, load balancing, reverse proxies, and firewalls. Good To Have Experience with backend development and setting up databases and performance tuning using parameter groups. Working experience in Kubernetes cluster administration and Kubernetes deployments. Experience working alongside SecOps engineers. Basic knowledge of Envoy, service mesh (Istio), and SRE concepts like distributed tracing. Setup and usage of OpenTelemetry, central logging, and monitoring systems.
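Complementing the Kubernetes administration mentioned in this listing, here is a minimal sketch using the official kubernetes Python client to flag pods that are not healthy. It assumes a reachable kubeconfig (or in-cluster credentials); everything else is a placeholder.

```python
# Hypothetical cluster hygiene check with the official kubernetes Python client.
from kubernetes import client, config

config.load_kube_config()          # use config.load_incluster_config() inside a pod

v1 = client.CoreV1Api()

# List pods across all namespaces and report any not in a healthy terminal/running phase.
for pod in v1.list_pod_for_all_namespaces(watch=False).items:
    phase = pod.status.phase
    if phase not in ("Running", "Succeeded"):
        print(f"{pod.metadata.namespace}/{pod.metadata.name}: {phase}")
```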

Posted 1 day ago

Apply

6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site

Before you apply, make sure that your resume is accessible; otherwise it will NOT be considered. About the Role We are seeking a Senior AI/ML Engineer to design, fine-tune, and deploy large-scale AI/ML systems. You will work on QLoRA-based fine-tuning, VLLM inference, distributed training, and cloud-native deployments on platforms like AWS SageMaker and GCP Vertex AI Agent Builder. The role will also involve developing applied AI systems, including computer vision solutions using YOLO and OpenCV, and building scalable event-driven pipelines leveraging Cloud Pub/Sub. Key Responsibilities Fine-tune and optimize LLMs using QLoRA, PEFT, and Hugging Face Accelerate. Implement VLLM for efficient large-scale inference. Build distributed and parallel training systems (DeepSpeed, Ray, PyTorch DDP). Develop computer vision models using YOLO and OpenCV for real-world applications. Deploy and manage AI/ML models on AWS SageMaker, GCP Vertex AI, and other cloud MLOps platforms. Design and implement event-driven AI pipelines with Cloud Pub/Sub and other messaging systems. Define and track LLM and CV evaluation metrics (accuracy, factuality, hallucination rate, object detection performance). Integrate graph-based LLM tools for knowledge reasoning and multi-agent AI systems. Requirements 6+ years in AI/ML, with 3+ years in LLM and applied computer vision systems. Expertise in Python, PyTorch, Hugging Face, QLoRA/LoRA, VLLM, YOLO, and OpenCV. Experience with distributed systems, AWS/GCP cloud MLOps, and vector databases (FAISS/Milvus). Familiarity with LangChain/LlamaIndex, agent-based AI frameworks, and event-driven architectures (Cloud Pub/Sub). Strong track record of deploying AI/ML models into production at scale.
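For context on the QLoRA-based fine-tuning this role mentions, a minimal sketch follows: 4-bit quantisation via bitsandbytes plus a PEFT LoRA adapter on a causal LM. The base model id, target modules, and LoRA hyperparameters are illustrative assumptions, not the team's configuration.

```python
# Hypothetical QLoRA-style setup: 4-bit base model + trainable LoRA adapters.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model

base_model = "meta-llama/Llama-2-7b-hf"   # placeholder model id

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(base_model)
model = AutoModelForCausalLM.from_pretrained(
    base_model,
    quantization_config=bnb_config,
    device_map="auto",
)

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],   # depends on the model architecture
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()         # adapters train; 4-bit base weights stay frozen
```

Training would then proceed with a standard Trainer/Accelerate loop over the adapter parameters; serving the merged or adapter-augmented model through VLLM is a separate deployment step.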

Posted 1 day ago

Apply

0 years

0 Lacs

Navi Mumbai, Maharashtra, India

On-site

We deliver the world’s most complex projects Work as part of a collaborative and inclusive team Enjoy a varied & challenging role Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts headquartered in Australia. Right now, we’re bridging two worlds as we accelerate to more sustainable energy sources, while helping our customers provide the energy, chemicals and resources that society needs now. We partner with our customers to deliver projects and create value over the life of their portfolio of assets. We solve complex problems by finding integrated data-centric solutions from the first stages of consulting and engineering to installation and commissioning, to the last stages of decommissioning and remediation. Join us and help drive innovation and sustainability in our projects. The Role As a Senior Cyber Security Analyst with Worley, you will work closely with our existing team to deliver projects for our clients while continuing to develop your skills and experience. We are seeking a Senior Cyber Security Analyst - a Subject Matter Expert (SME) - to join our Information Security team. The ideal candidate will be responsible for designing, implementing, monitoring, responding to, and reporting on information security events within the DLP scope. Additionally, the role includes managing security tools and IT systems with a special focus on DLP enablement. DLP Strategy & Policy Design Serve as a Subject Matter Expert (SME) for DLP solutions, technologies, and best practices. Design, implement, and optimize DLP policies to detect and prevent unauthorized access, sharing, and data exfiltration. Define and maintain DLP governance frameworks, aligning with regulatory requirements. Identify sensitive data requiring protection across endpoints, cloud, email, and network. Implementation & Configuration Deploy and configure DLP controls to monitor, alert, and block potential data leaks. Define and enforce DLP rules for structured & unstructured data, including Personally Identifiable Information (PII), Intellectual Property (IP), and financial data. Integrate DLP solutions with other security tools. Monitoring & Continuous Improvement Monitor and analyze DLP alerts and incidents, identifying trends and areas for improvement. Ensure DLP alerts and incidents get routed to monitoring/response processes in accordance with defined internal procedures. Perform regular tuning and updates to enhance detection accuracy and reduce false positives. Develop automated response actions to mitigate risks and ensure business continuity. Compliance & Stakeholder Collaboration Ensure compliance with data protection regulations and industry security standards. Collaborate with cross-functional teams to resolve complex technical issues and to align DLP policies with business needs. Provide guidance and training to employees on DLP policies, security best practices, and insider threat awareness. Reporting & Documentation Define and generate DLP metrics supporting the reporting needs across the organization. Document DLP configurations, policies, and operational procedures. Provide technical recommendations to enhance data security strategies.
About You To be considered for this role it is envisaged you will possess the following attributes: Ability to balance security measures with business needs. A proactive approach to identifying and mitigating data loss risks before they become security incidents. Proven experience with DLP solutions (e.g., Microsoft Purview, Symantec, Forcepoint, McAfee/Trellix, Digital Guardian, Zscaler). Strong knowledge of DLP policies, rules, content inspection techniques, and data classification models. Experience working with cloud-based DLP (e.g., CASB, SaaS security, O365 DLP, Google Workspace DLP). Understanding of network security, endpoint security, and encryption techniques. Familiarity with SIEM, SOC workflows, and incident response processes. Moving forward together We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. We’re building a diverse, inclusive and respectful workplace. Creating a space where everyone feels they belong, can be themselves, and are heard. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Worley takes personal data protection seriously and respects EU and local data protection laws. You can read our full Recruitment Privacy Notice here. Please note: if you are being represented by a recruitment agency you will not be considered; to be considered, you will need to apply directly to Worley. Company Worley Primary Location IND-MM-Mumbai Other Locations IND-KR-Bangalore, IND-AP-Hyderabad, IND-MM-Pune, IND-MM-Navi Mumbai Job Cyber Security Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 14, 2025 Unposting Date Aug 13, 2025 Reporting Manager Title Manager
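To illustrate the content inspection techniques referenced in this posting, here is a deliberately simplified, hypothetical sketch of regex-based pattern matching of the kind a DLP rule might apply before a commercial engine (Purview, Forcepoint, etc.) handles classification and policy enforcement. The patterns are illustrative only and far cruder than production detectors.

```python
# Toy content-inspection sketch: flag strings that look like sensitive data.
import re

PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "pan_card": re.compile(r"\b[A-Z]{5}[0-9]{4}[A-Z]\b"),      # Indian PAN format
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),      # naive card-number shape
}

def inspect(text: str) -> dict:
    """Return the sensitive-data categories matched in a piece of content."""
    return {name: pat.findall(text) for name, pat in PATTERNS.items() if pat.search(text)}

sample = "Invoice to j.doe@example.com, PAN ABCDE1234F, card 4111 1111 1111 1111"
print(inspect(sample))   # e.g. {'email': [...], 'pan_card': [...], 'credit_card': [...]}
```

Real DLP engines combine such pattern matching with exact data matching, fingerprinting, classification labels, and policy actions (alert, block, encrypt), which is where the tuning and false-positive work described above comes in.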

Posted 1 day ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now. At Worley, our Digital team collaborates closely with the business to deliver efficient, technology-enabled sustainable solutions that will be transformational for Worley. This team, aptly named Worley Digital, is currently seeking talented individuals who would be working on a wide range of the latest technologies, including solutions based on Automation and Generative AI. What drives us at Worley Digital? It’s our shared passion for pushing the boundaries of technological innovation, embracing best practices, and propelling Worley to the forefront of industry advancements. If you’re naturally curious, open-minded, and a self-motivated learner - one who’s ready to invest time and effort to stay future-ready - then Worley could be your ideal workplace. Position Title (Global): Sr ML Engineer MAJOR ACCOUNTABILITIES OF POSITION: Understand business objectives and develop models that help to achieve them, along with metrics to track their progress. Utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI applications that scale from multi-user to enterprise class. Manage the Data Science project life cycle from exploratory data analysis to productization (Alpha/Beta release). Manage a small team collaborating with the Architecture, Data Warehouse, and Data Governance teams to provide analytics as a service. Mentor team members in AI/ML development. Verify data quality, and/or ensure it via data cleaning. Supervise the data acquisition process if more data is needed. Find available datasets online that could be used for training. Define validation strategies. Define the preprocessing or feature engineering to be done on a given dataset. Define data augmentation pipelines. Train models and tune their hyperparameters. Analyze the errors of the model and design strategies to overcome them. Deploy models to production. Develop the ML algorithms that could be used to solve a given problem and rank them by their success probability. Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world. Knowledge / Experience / Competencies Required IT Skills & Experience (Priority wise): Proven experience as a Data Scientist – AI/ML or in a similar role. Ability to write robust code in Python. Experience with Generative AI components such as LLMs, LangChain, LlamaIndex, OpenAI, Mistral, Llama, etc. Experience in supervised/semi-supervised and unsupervised machine learning algorithms. Experience using cognitive APIs and machine learning studios on the cloud. Up to speed on NLP (summarization, translation models, Named Entity Recognition). Hands-on knowledge of image processing with deep learning (CNN, RNN, LSTM, GAN). Understanding of the complete AI/ML project life cycle. Understanding of data structures, data modelling and software architecture.
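As an illustration of the validation-strategy and hyperparameter-tuning accountabilities listed above, the sketch below runs a cross-validated grid search with scikit-learn on a toy dataset. The estimator, parameter grid, and scoring metric are assumptions chosen for demonstration, not Worley's actual pipeline.

```python
# Hypothetical validation strategy + hyperparameter search on a toy dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, StratifiedKFold

X, y = load_breast_cancer(return_X_y=True)

param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [None, 10, 20],
}

search = GridSearchCV(
    RandomForestClassifier(random_state=42),
    param_grid=param_grid,
    cv=StratifiedKFold(n_splits=5, shuffle=True, random_state=42),  # validation strategy
    scoring="f1",
    n_jobs=-1,
)
search.fit(X, y)

print("best params:", search.best_params_)
print("best CV F1:", round(search.best_score_, 4))
```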
People Skills: Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture. Ability to work independently and/or as part of a large cross-domain team. Professional and open communication with all internal and external interfaces. Accurately report to management in a timely and effective manner. Other Skills: Outstanding analytical and problem-solving skills. Taking ownership of the tasks at hand and being accountable for deliverables. Education – Qualifications, Accreditation, Training: Minimum 4–7 years’ experience as a Data Scientist on AI and ML projects. Master’s in Information Technology / Big Data / Data Science / AI / Computer Science or a related field. Moving forward together We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata Job Digital Platforms & Data Science Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Senior Manager

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now. Worley Digital At Worley, our Digital team collaborates closely with the business to deliver efficient, technology-enabled sustainable solutions that will be transformational for Worley. This team, aptly named Worley Digital, is currently seeking talented individuals who would be working on a wide range of the latest technologies, including solutions based on Automation and Generative AI. What drives us at Worley Digital? It’s our shared passion for pushing the boundaries of technological innovation, embracing best practices, and propelling Worley to the forefront of industry advancements. If you’re naturally curious, open-minded, and a self-motivated learner - one who’s ready to invest time and effort to stay future-ready - then Worley could be your ideal workplace. Position Title (Global): Data Scientist II MAJOR ACCOUNTABILITIES OF POSITION: Understand business objectives and develop models that help to achieve them, along with metrics to track their progress. Utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI applications that scale from multi-user to enterprise class. Manage the Data Science project life cycle from exploratory data analysis to productization (Alpha/Beta release). Manage a small team collaborating with the Architecture, Data Warehouse, and Data Governance teams to provide analytics as a service. Mentor team members in AI/ML development. Verify data quality, and/or ensure it via data cleaning. Supervise the data acquisition process if more data is needed. Find available datasets online that could be used for training. Define validation strategies. Define the preprocessing or feature engineering to be done on a given dataset. Define data augmentation pipelines. Train models and tune their hyperparameters. Analyze the errors of the model and design strategies to overcome them. Deploy models to production. Develop the ML algorithms that could be used to solve a given problem and rank them by their success probability. Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world. Knowledge / Experience / Competencies Required IT Skills & Experience (Priority wise): Proven experience as a Data Scientist – AI/ML or in a similar role. Ability to write robust code in Python. Experience with Generative AI components such as LLMs, LangChain, LlamaIndex, OpenAI, Mistral, Llama, etc. Experience in supervised/semi-supervised and unsupervised machine learning algorithms. Experience using cognitive APIs and machine learning studios on the cloud. Up to speed on NLP (summarization, translation models, Named Entity Recognition). Hands-on knowledge of image processing with deep learning (CNN, RNN, LSTM, GAN). Understanding of the complete AI/ML project life cycle. Understanding of data structures, data modelling and software architecture.
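For the NLP skills listed in this posting (summarization, Named Entity Recognition), a minimal Hugging Face pipeline sketch follows. The default checkpoints the pipelines download are placeholders rather than Worley's model choices.

```python
# Hypothetical NLP quick-start with Hugging Face pipelines (default checkpoints).
from transformers import pipeline

ner = pipeline("ner", aggregation_strategy="simple")
summarizer = pipeline("summarization")

text = (
    "Worley is a professional services company headquartered in Australia, "
    "with digital teams in Pune, Chennai and Navi Mumbai."
)

print(ner(text))                                                    # entities with labels and scores
print(summarizer(text, max_length=30, min_length=10)[0]["summary_text"])
```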
People Skills: Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture. Ability to work independently and/or as part of a large cross-domain team. Professional and open communication with all internal and external interfaces. Accurately report to management in a timely and effective manner. Other Skills: Outstanding analytical and problem-solving skills. Taking ownership of the tasks at hand and being accountable for deliverables. Education – Qualifications, Accreditation, Training: Minimum 4–7 years’ experience as a Data Scientist on AI and ML projects. Master’s in Information Technology / Big Data / Data Science / AI / Computer Science or a related field. Moving forward together We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata Job Digital Platforms & Data Science Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Senior Manager

Posted 1 day ago

Apply

7.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now. At Worley, our Digital team collaborates closely with the business to deliver efficient, technology-enabled sustainable solutions that will be transformational for Worley. This team, aptly named Worley Digital, is currently seeking talented individuals who would be working on a wide range of the latest technologies, including solutions based on Automation and Generative AI. What drives us at Worley Digital? It’s our shared passion for pushing the boundaries of technological innovation, embracing best practices, and propelling Worley to the forefront of industry advancements. If you’re naturally curious, open-minded, and a self-motivated learner - one who’s ready to invest time and effort to stay future-ready - then Worley could be your ideal workplace. Position Title (Global): Sr ML Engineer MAJOR ACCOUNTABILITIES OF POSITION: Understand business objectives and develop models that help to achieve them, along with metrics to track their progress. Utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI applications that scale from multi-user to enterprise class. Manage the Data Science project life cycle from exploratory data analysis to productization (Alpha/Beta release). Manage a small team collaborating with the Architecture, Data Warehouse, and Data Governance teams to provide analytics as a service. Mentor team members in AI/ML development. Verify data quality, and/or ensure it via data cleaning. Supervise the data acquisition process if more data is needed. Find available datasets online that could be used for training. Define validation strategies. Define the preprocessing or feature engineering to be done on a given dataset. Define data augmentation pipelines. Train models and tune their hyperparameters. Analyze the errors of the model and design strategies to overcome them. Deploy models to production. Develop the ML algorithms that could be used to solve a given problem and rank them by their success probability. Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world. Knowledge / Experience / Competencies Required IT Skills & Experience (Priority wise): Proven experience as a Data Scientist – AI/ML or in a similar role. Ability to write robust code in Python. Experience with Generative AI components such as LLMs, LangChain, LlamaIndex, OpenAI, Mistral, Llama, etc. Experience in supervised/semi-supervised and unsupervised machine learning algorithms. Experience using cognitive APIs and machine learning studios on the cloud. Up to speed on NLP (summarization, translation models, Named Entity Recognition). Hands-on knowledge of image processing with deep learning (CNN, RNN, LSTM, GAN). Understanding of the complete AI/ML project life cycle. Understanding of data structures, data modelling and software architecture.
People Skills: Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture. Ability to work independently and/or as part of a large cross-domain team. Professional and open communication with all internal and external interfaces. Accurately report to management in a timely and effective manner. Other Skills: Outstanding analytical and problem-solving skills. Taking ownership of the tasks at hand and being accountable for deliverables. Education – Qualifications, Accreditation, Training: Minimum 4–7 years’ experience as a Data Scientist on AI and ML projects. Master’s in Information Technology / Big Data / Data Science / AI / Computer Science or a related field. Moving forward together We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change. Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata Job Digital Platforms & Data Science Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Senior Manager

Posted 1 day ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Building on our past. Ready for the future Worley is a global professional services company of energy, chemicals and resources experts. We partner with customers to deliver projects and create value over the life of their assets. We’re bridging two worlds, moving towards more sustainable energy sources, while helping to provide the energy, chemicals and resources needed now. Worley Digital At Worley, our Digital team collaborates closely with the business to deliver efficient, technology-enabled sustainable solutions that will be transformational for Worley. This team, aptly named Worley Digital, is currently seeking talented individuals who would be working on a wide range of the latest technologies, including solutions based on Automation and Generative AI. What drives us at Worley Digital? It’s our shared passion for pushing the boundaries of technological innovation, embracing best practices, and propelling Worley to the forefront of industry advancements. If you’re naturally curious, open-minded, and a self-motivated learner - one who’s ready to invest time and effort to stay future-ready - then Worley could be your ideal workplace. Position Title (Global): Data Scientist II MAJOR ACCOUNTABILITIES OF POSITION: Understand business objectives and develop models that help to achieve them, along with metrics to track their progress. Utilize existing frameworks, standards, and patterns to create the architectural foundation and services necessary for AI applications that scale from multi-user to enterprise class. Manage the Data Science project life cycle from exploratory data analysis to productization (Alpha/Beta release). Manage a small team collaborating with the Architecture, Data Warehouse, and Data Governance teams to provide analytics as a service. Mentor team members in AI/ML development. Verify data quality, and/or ensure it via data cleaning. Supervise the data acquisition process if more data is needed. Find available datasets online that could be used for training. Define validation strategies. Define the preprocessing or feature engineering to be done on a given dataset. Define data augmentation pipelines. Train models and tune their hyperparameters. Analyze the errors of the model and design strategies to overcome them. Deploy models to production. Develop the ML algorithms that could be used to solve a given problem and rank them by their success probability. Explore and visualize data to gain an understanding of it, then identify differences in data distribution that could affect performance when deploying the model in the real world. Knowledge / Experience / Competencies Required IT Skills & Experience (Priority wise): Proven experience as a Data Scientist – AI/ML or in a similar role. Ability to write robust code in Python. Experience with Generative AI components such as LLMs, LangChain, LlamaIndex, OpenAI, Mistral, Llama, etc. Experience in supervised/semi-supervised and unsupervised machine learning algorithms. Experience using cognitive APIs and machine learning studios on the cloud. Up to speed on NLP (summarization, translation models, Named Entity Recognition). Hands-on knowledge of image processing with deep learning (CNN, RNN, LSTM, GAN). Understanding of the complete AI/ML project life cycle. Understanding of data structures, data modelling and software architecture.
People Skills: Ability to communicate clearly and concisely, and a flexible mindset to handle a quickly changing culture. Ability to work independently and/or as part of a large cross-domain team. Professional and open communication with all internal and external interfaces. Accurately report to management in a timely and effective manner. Other Skills: Outstanding analytical and problem-solving skills. Taking ownership of the tasks at hand and being accountable for deliverables. Education – Qualifications, Accreditation, Training: Minimum 4–7 years’ experience as a Data Scientist on AI and ML projects. Master’s in Information Technology / Big Data / Data Science / AI / Computer Science or a related field. Moving forward together We’re committed to building a diverse, inclusive and respectful workplace where everyone feels they belong, can bring themselves, and are heard. We provide equal employment opportunities to all qualified applicants and employees without regard to age, race, creed, color, religion, sex, national origin, ancestry, disability status, veteran status, sexual orientation, gender identity or expression, genetic information, marital status, citizenship status or any other basis as protected by law. We want our people to be energized and empowered to drive sustainable impact. So, our focus is on a values-inspired culture that unlocks brilliance through belonging, connection and innovation. And we're not just talking about it; we're doing it. We're reskilling our people, leveraging transferable skills, and supporting the transition of our workforce to become experts in today's low carbon energy infrastructure and technology. Whatever your ambition, there’s a path for you here. And there’s no barrier to your potential career success. Join us to broaden your horizons, explore diverse opportunities, and be part of delivering sustainable change.
Company Worley Primary Location IND-MM-Navi Mumbai Other Locations IND-KR-Bangalore, IND-MM-Mumbai, IND-MM-Pune, IND-TN-Chennai, IND-GJ-Vadodara, IND-AP-Hyderabad, IND-WB-Kolkata Job Digital Platforms & Data Science Schedule Full-time Employment Type Employee Job Level Experienced Job Posting Jul 4, 2025 Unposting Date Aug 3, 2025 Reporting Manager Title Senior Manager

Posted 1 day ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies