0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
We’re now looking for a Senior DevOps Engineer to join our fast-growing, remote-first team. If you're passionate about automation, scalable cloud systems, and supporting high-impact AI workloads, we’d love to connect.

What You'll Do (Responsibilities):
- Design, implement, and manage scalable, secure, and high-performance cloud-native infrastructure across Azure.
- Build and maintain Infrastructure as Code (IaC) using Terraform or CloudFormation.
- Develop event-driven and serverless architectures using AWS Lambda, SQS, and SAM.
- Architect and manage containerized applications using Docker, Kubernetes, ECR, ECS, or AKS.
- Establish and optimize CI/CD pipelines using GitHub Actions, Jenkins, AWS CodeBuild, and CodePipeline.
- Set up and manage monitoring, logging, and alerting using Prometheus + Grafana, Datadog, and centralized logging systems.
- Collaborate with ML Engineers and Data Engineers to support MLOps pipelines (Airflow, ML Pipelines) and Bedrock with TensorFlow or PyTorch.
- Implement and optimize ETL/data streaming pipelines using Kafka, EventBridge, and Event Hubs.
- Automate operations and system tasks using Python and Bash, along with cloud CLIs and SDKs.
- Secure infrastructure using IAM/RBAC and follow best practices in secrets management and access control.
- Manage DNS and networking configurations using Cloudflare, VPC, and PrivateLink.
- Lead architecture implementation for scalable and secure systems, aligning with business and AI solution needs.
- Conduct cost optimization through budgeting, alerts, tagging, right-sizing resources, and leveraging spot instances.
- Contribute to backend development in Python (web frameworks), REST/Socket and gRPC design, and testing (unit/integration).
- Participate in incident response, performance tuning, and continuous system improvement.
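A core pattern behind the Lambda + SQS responsibility above is an idempotent message handler: queues deliver at-least-once, so the handler must tolerate duplicates. Here is a minimal, hedged sketch in plain Python; the function and field names are illustrative assumptions, and a real deployment would keep the seen-ID set in a durable store such as DynamoDB rather than in memory.

```python
import json

# Illustrative stand-in for an AWS Lambda handler fed by SQS.
# In production the dedupe set would live in DynamoDB or similar;
# a module-level set is used here only to keep the sketch runnable.
SEEN_IDS = set()

def handle_message(message: dict) -> str:
    """Process one queue message exactly once, skipping duplicate deliveries."""
    msg_id = message["id"]
    if msg_id in SEEN_IDS:
        return "skipped"  # at-least-once delivery: same message arrived again
    SEEN_IDS.add(msg_id)
    payload = json.loads(message["body"])
    # ... real work would happen here (e.g. write to a datastore) ...
    return f"processed:{payload['task']}"

if __name__ == "__main__":
    msg = {"id": "m-1", "body": json.dumps({"task": "resize"})}
    print(handle_message(msg))  # processed:resize
    print(handle_message(msg))  # skipped
```

The same shape applies whether the trigger is SQS, EventBridge, or Event Hubs: dedupe first, then do the work.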
Good to Have:
- Hands-on experience with ML lifecycle tools like MLflow and Kubeflow
- Previous involvement in production-grade AI/ML projects or data-intensive systems
- Startup or high-growth tech company experience

Qualifications:
- Bachelor’s degree in Computer Science, Information Technology, or a related field.
- 5+ years of hands-on experience in a DevOps, SRE, or Cloud Infrastructure role.
- Proven expertise in multi-cloud environments (AWS, Azure, GCP) and modern DevOps tooling.
- Strong communication and collaboration skills to work across engineering, data science, and product teams.

Benefits:
- Competitive salary 💸
- Support for continual learning (free books and online courses) 📚
- Leveling-up opportunities 🌱
- Diverse team environment 🌍
Posted 1 month ago
5 - 8 years
0 Lacs
Pune, Maharashtra, India
Role Overview
We are looking for an experienced MLOps Engineer to join our growing AI/ML team. You will be responsible for automating, monitoring, and managing machine learning workflows and infrastructure in production environments. This role is key to ensuring our AI solutions are scalable, reliable, and continuously improving.

Key Responsibilities
- Design, build, and manage end-to-end ML pipelines, including model training, validation, deployment, and monitoring.
- Collaborate with data scientists, software engineers, and DevOps teams to integrate ML models into production systems.
- Develop and manage scalable infrastructure using AWS, particularly AWS SageMaker.
- Automate ML workflows using CI/CD best practices and tools.
- Ensure model reproducibility, governance, and performance tracking.
- Monitor deployed models for data drift, model decay, and performance metrics.
- Implement robust versioning and model registry systems.
- Apply security, performance, and compliance best practices across ML systems.
- Contribute to documentation, knowledge sharing, and continuous improvement of our MLOps capabilities.

Required Skills & Qualifications
- 4+ years of experience in Software Engineering or MLOps, preferably in a production environment.
- Proven experience with AWS services, especially AWS SageMaker for model development and deployment.
- Working knowledge of AWS DataZone (preferred).
- Strong programming skills in Python, with exposure to R, Scala, or Apache Spark.
- Experience with ML model lifecycle management, version control, containerization (Docker), and orchestration tools (e.g., Kubernetes).
- Familiarity with MLflow, Airflow, or similar pipeline/orchestration tools.
- Experience integrating ML systems into CI/CD workflows using tools like Jenkins, GitHub Actions, or AWS CodePipeline.
- Solid understanding of DevOps and cloud-native infrastructure practices.
- Excellent problem-solving skills and the ability to work collaboratively across teams. (ref:hirist.tech)
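The "monitor deployed models for data drift" responsibility can be sketched with a simple mean-shift test: compare a live feature sample against its training baseline and flag drift when the live mean moves more than a few standard errors away. This is a minimal illustration only; the threshold and function names are assumptions, not a specific SageMaker Model Monitor API.

```python
import statistics

def drifted(baseline: list, live: list, z_threshold: float = 3.0) -> bool:
    """Flag drift when the live sample mean departs from the baseline mean
    by more than z_threshold standard errors of the baseline."""
    mu = statistics.mean(baseline)
    sigma = statistics.stdev(baseline)
    live_mu = statistics.mean(live)
    standard_error = sigma / len(live) ** 0.5
    return abs(live_mu - mu) > z_threshold * standard_error

if __name__ == "__main__":
    train = [10.0, 10.5, 9.8, 10.2, 10.1, 9.9, 10.3, 10.0]
    print(drifted(train, [10.1, 9.9, 10.0, 10.2]))   # False: same distribution
    print(drifted(train, [14.8, 15.2, 15.1, 14.9]))  # True: mean has shifted
```

Production monitors typically use richer statistics (PSI, KS tests) per feature, but the alert-on-threshold shape is the same.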
Posted 1 month ago
5 - 10 years
10 - 20 Lacs
Pune
Hybrid
This AI Ops Engineer role focuses on deploying, monitoring, and scaling AI/GenAI models using MLOps, CI/CD, cloud platforms (AWS/Azure/GCP), Python, Kubernetes, MLflow, security, and automation.
Posted 1 month ago
8 - 12 years
25 - 30 Lacs
Hyderabad
Work from Office
Roles and Responsibilities
- Design, develop, and deploy advanced AI models with a focus on generative AI, including transformer architectures (e.g., GPT, BERT, T5) and other deep learning models used for text, image, or multimodal generation.
- Work with extensive and complex datasets, performing tasks such as cleaning, preprocessing, and transforming data to meet quality and relevance standards for generative model training.
- Collaborate with cross-functional teams (e.g., product, engineering, data science) to identify project objectives and create solutions using generative AI tailored to business needs.
- Implement, fine-tune, and scale generative AI models in production environments, ensuring robust model performance and efficient resource utilization.
- Develop pipelines and frameworks for efficient data ingestion, model training, evaluation, and deployment, including A/B testing and monitoring of generative models in production.
- Stay informed about the latest advancements in generative AI research, techniques, and tools, applying new findings to improve model performance, usability, and scalability.
- Document and communicate technical specifications, algorithms, and project outcomes to technical and non-technical stakeholders, with an emphasis on explainability and responsible AI practices.

Qualifications Required
- Educational Background: Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. A relevant Ph.D. or research experience in generative AI is a plus.
- Experience: 8-12 years of experience in machine learning, with 2+ years designing and implementing generative AI models or working specifically with transformer-based models.
Skills and Experience Required
- Generative AI: Transformer Models, GANs, VAEs, Text Generation, Image Generation
- Machine Learning: Algorithms, Deep Learning, Neural Networks
- Programming: Python, SQL; familiarity with libraries such as Hugging Face Transformers, PyTorch, TensorFlow
- MLOps: Docker, Kubernetes, MLflow, Cloud Platforms (AWS, GCP, Azure)
- Data Engineering: Data Preprocessing, Feature Engineering, Data Cleaning
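The A/B testing duty mentioned in this posting reduces to a small comparison loop: route traffic between two model variants, record a success signal for each response, and pick the variant with the better rate. The sketch below is a hedged illustration; the variant labels and the notion of "success" (e.g., user accepted the generated text) are assumptions.

```python
def ab_winner(results_a: list, results_b: list) -> str:
    """Compare success rates of two model variants; return the winner or 'tie'."""
    rate_a = sum(results_a) / len(results_a)
    rate_b = sum(results_b) / len(results_b)
    if rate_a == rate_b:
        return "tie"
    return "A" if rate_a > rate_b else "B"

if __name__ == "__main__":
    # True = user accepted the generated output
    a = [True, True, False, True]   # 75% success
    b = [True, False, False, True]  # 50% success
    print(ab_winner(a, b))  # A
```

A real rollout would add a significance test before switching traffic, so a lucky streak does not promote the weaker model.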
Posted 1 month ago
7 - 11 years
50 - 60 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
Role: Resident Solution Architect
Location: Remote

The Solution Architect at Koantek builds secure, highly scalable big data solutions to achieve tangible, data-driven outcomes, all while keeping simplicity and operational effectiveness in mind. This role collaborates with teammates, product teams, and cross-functional project teams to lead the adoption and integration of the Databricks Lakehouse Platform into the enterprise ecosystem and AWS/Azure/GCP architecture. This role is responsible for implementing securely architected big data solutions that are operationally reliable, performant, and deliver on strategic initiatives.

Specific requirements for the role include:
- Expert-level knowledge of data frameworks, data lakes, and open-source projects such as Apache Spark, MLflow, and Delta Lake
- Expert-level hands-on coding experience in Python, SQL, Spark/Scala, or PySpark
- In-depth understanding of Spark architecture, including Spark Core, Spark SQL, DataFrames, Spark Streaming, RDD caching, and Spark MLlib
- IoT/event-driven/microservices in the cloud: experience with private and public cloud architectures, pros/cons, and migration considerations
- Extensive hands-on experience implementing data migration and data processing using AWS/Azure/GCP services
- Extensive hands-on experience with the industry technology stack for data management, ingestion, capture, processing, and curation: Kafka, StreamSets, Attunity, GoldenGate, MapReduce, Hadoop, Hive, HBase, Cassandra, Spark, Flume, Impala, etc.
- Experience using Azure DevOps and CI/CD, as well as Agile tools and processes including Git, Jenkins, Jira, and Confluence
- Experience creating tables, partitioning, bucketing, and loading and aggregating data using Spark SQL/Scala
- Ability to build ingestion to ADLS and enable a BI layer for analytics, with a strong understanding of data modeling and defining conceptual, logical, and physical data models
- Proficient experience with architecture design, build, and optimization of big data collection, ingestion, storage, processing, and visualization

Responsibilities:
- Work closely with team members to lead and drive enterprise solutions, advising on key decision points, trade-offs, best practices, and risk mitigation
- Guide customers in transforming big data projects, including development and deployment of big data and AI applications
- Promote, emphasize, and leverage big data solutions to deploy performant systems that appropriately auto-scale, are highly available, fault-tolerant, self-monitoring, and serviceable
- Use a defense-in-depth approach in designing data solutions and AWS/Azure/GCP infrastructure
- Assist and advise data engineers in the preparation and delivery of raw data for prescriptive and predictive modeling
- Aid developers in identifying, designing, and implementing process improvements with automation tools to optimize data delivery
- Implement processes and systems to monitor data quality and security, ensuring production data is accurate and available for key stakeholders and the business processes that depend on it
- Employ change-management best practices to ensure that data remains readily accessible to the business
- Implement reusable design templates and solutions to integrate, automate, and orchestrate cloud operational needs, with experience in MDM using data governance solutions

Qualifications:
- Overall experience of 12+ years in the IT field
- Hands-on experience designing and implementing multi-tenant solutions using Azure Databricks for data governance, data pipelines for near-real-time data warehouses, and machine learning solutions
- Design and development experience with scalable and cost-effective Microsoft Azure/AWS/GCP data architecture and related solutions
- Experience in software development, data engineering, or data analytics using Python, Scala, Spark, Java, or equivalent technologies
- Bachelor's or Master's degree in Big Data, Computer Science, Engineering, Mathematics, or a similar area of study, or equivalent work experience

Good to have advanced technical certifications:
- Azure Solutions Architect Expert
- AWS Certified Data Analytics; DASCA Big Data Engineering and Analytics
- AWS Certified Cloud Practitioner, Solutions Architect
- Professional Google Cloud Certified

Location: Mumbai, Delhi / NCR, Bengaluru, Kolkata, Chennai, Hyderabad, Ahmedabad, Pune, Remote
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka
Work from Office
KEY ACCOUNTABILITIES
- Collaborate with cross-functional teams (e.g., data scientists, software engineers, product managers) to define ML problems and objectives.
- Research, design, and implement machine learning algorithms and models (e.g., supervised, unsupervised, deep learning, reinforcement learning).
- Analyse and preprocess large-scale datasets for training and evaluation.
- Train, test, and optimize ML models for accuracy, scalability, and performance.
- Deploy ML models in production using cloud platforms and/or MLOps best practices.
- Monitor and evaluate model performance over time, ensuring reliability and robustness.
- Document findings, methodologies, and results to share insights with stakeholders.

QUALIFICATIONS, EXPERIENCE AND SKILLS
- Bachelor’s or Master’s degree in Computer Science, Data Science, Statistics, Mathematics, or a related field (graduation within the last 12 months or upcoming).
- Proficiency in Python or a similar language, with experience in frameworks like TensorFlow, PyTorch, or Scikit-learn.
- Strong foundation in linear algebra, probability, statistics, and optimization techniques.
- Familiarity with machine learning algorithms (e.g., decision trees, SVMs, neural networks) and concepts like feature engineering, overfitting, and regularization.
- Hands-on experience working with structured and unstructured data using tools like Pandas, SQL, or Spark.
- Ability to think critically and apply your knowledge to solve complex ML problems.
- Strong communication and collaboration skills to work effectively in diverse teams.

Additional Skills (Good to have)
- Experience with cloud platforms (e.g., AWS, Azure, GCP) and MLOps tools (e.g., MLflow, Kubeflow).
- Knowledge of distributed computing or big data technologies (e.g., Hadoop, Apache Spark).
- Previous internships, academic research, or projects showcasing your ML skills.
- Familiarity with deployment frameworks like Docker and Kubernetes.
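The regularization concept named in the qualifications can be made concrete with the smallest possible case: one-feature ridge regression, where an L2 penalty λ shrinks the fitted weight toward zero to curb overfitting. The closed form is w = Σxy / (Σx² + λ); the sketch below is illustrative, not tied to any particular framework.

```python
def ridge_weight(xs: list, ys: list, lam: float) -> float:
    """One-feature ridge regression weight: w = sum(x*y) / (sum(x^2) + lam).
    lam = 0 recovers ordinary least squares; larger lam shrinks w toward 0."""
    sxy = sum(x * y for x, y in zip(xs, ys))
    sxx = sum(x * x for x in xs)
    return sxy / (sxx + lam)

if __name__ == "__main__":
    xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]  # true slope is 2
    print(ridge_weight(xs, ys, 0.0))   # 2.0 (no penalty: exact fit)
    print(ridge_weight(xs, ys, 14.0))  # 1.0 (penalty shrinks the weight)
```

The same trade-off — fit versus weight magnitude — is what regularization terms in TensorFlow or PyTorch loss functions control in higher dimensions.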
Posted 1 month ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Position: Account Executive

About MESH Works
Our mission is to connect buyers & suppliers around the world. MESH is a B2B SaaS product helping sourcing, procurement & supply chain teams find global suppliers faster, reduce sourcing costs & streamline supplier discovery. Our audited database of contract manufacturers includes suppliers from 40+ countries & 35+ industries. We're looking for an AI/ML Engineer to join our AI team, which works closely with product & tech teams on bringing AI to real-world problems & use cases for our customers. You can find more about us at www.meshworks.com

Key Responsibilities
- Develop & fine-tune AI models for procurement, engineering, and supplier intelligence
- Design multi-agent AI systems that interact & automate decision-making
- Integrate LLMs, retrieval-augmented generation (RAG), and vector databases into our platform
- Work with Azure AI services to build scalable and secure AI solutions
- Optimize AI model performance for accuracy, efficiency, and low latency
- Collaborate with software engineers to deploy models into production
- Monitor and improve AI performance using MLOps and feedback loops

Requirements
- 2+ years of experience in AI/ML engineering (startup or SaaS experience is a plus)
- Deep expertise in:
  - Large Language Models (LLMs) and Natural Language Processing
  - Deep learning frameworks (PyTorch, TensorFlow)
  - Vector databases (Pinecone, Weaviate, FAISS)
  - Python and AI libraries (Hugging Face, LangChain)
- Strong understanding of Azure & OpenAI API integration
- Multi-agent systems & AI orchestration
- MLOps tools (MLflow, Kubeflow, Airflow)
- Proven ability to work with both structured & unstructured data
- Experience with cloud-based AI deployment
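The retrieval step of the RAG integration described above can be sketched without any external service: embed documents as vectors, then fetch the nearest ones by cosine similarity before handing them to an LLM. The tiny hand-made "embeddings" and file names below are assumptions standing in for a real vector database such as Pinecone, Weaviate, or FAISS.

```python
import math

def cosine(a: list, b: list) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def retrieve(query_vec: list, index: dict, k: int = 1) -> list:
    """Return the k document keys whose vectors are closest to the query."""
    ranked = sorted(index, key=lambda doc: cosine(query_vec, index[doc]), reverse=True)
    return ranked[:k]

if __name__ == "__main__":
    # toy "vector database": document name -> embedding
    index = {
        "supplier_casting.txt": [0.9, 0.1, 0.0],
        "supplier_textiles.txt": [0.1, 0.9, 0.0],
    }
    print(retrieve([0.8, 0.2, 0.0], index))  # ['supplier_casting.txt']
```

A production system swaps the dictionary for an approximate-nearest-neighbor index and the hand-made vectors for embeddings from a model, but the ranking logic is the same.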
Posted 1 month ago
8 years
0 Lacs
Pune, Maharashtra
Work from Office
Basic Information
Country: India
State: Maharashtra
City: Pune
Date Published: 05-May-2025
Job ID: 44647
Travel: You may occasionally be required to travel for business
Secondary locations: IND Bangalore - Prestige Summit

Description and Requirements
"At BMC trust is not just a word - it's a way of life!" We are an award-winning, equal opportunity, culturally diverse, fun place to be. Giving back to the community drives us to be better every single day. Our work environment allows you to balance your priorities, because we know you will bring your best every day. We will champion your wins and shout them from the rooftops. Your peers will inspire, drive, support you, and make you laugh out loud! We help our customers free up time and space to become an Autonomous Digital Enterprise that conquers the opportunities ahead, and we are relentless in the pursuit of innovation!

The DSOM product line includes BMC’s industry-leading Digital Services and Operation Management products. We have many interesting SaaS products in the fields of predictive IT service management, automatic discovery of inventories, intelligent operations management, and more! We continuously grow by adding and implementing the most cutting-edge technologies and investing in innovation! Our team is a global and versatile group of professionals, and we LOVE to hear our employees’ innovative ideas. So, if innovation is close to your heart, this is the place for you!

BMC is looking for an experienced Data Science Engineer with hands-on experience with classical ML, deep learning networks, and Large Language Models to join us and design, develop, and implement microservice-based edge applications using the latest technologies. In this role, you will be responsible for end-to-end design and execution of BMC data science tasks, while acting as a focal point and expert for our data science activities. You will research and interpret business needs, develop predictive models, and deploy completed solutions. You will provide expertise and recommendations for plans, programs, advanced analysis, strategies, and policies.

Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Ideate, design, implement, and maintain an enterprise business software platform for edge and cloud, with a focus on machine learning and generative AI capabilities, using mainly Python.
- Work with a globally distributed development team to perform requirements analysis, write design documents, and design, develop, and test software development projects.
- Understand real-world deployment and usage scenarios from customers and product managers, and translate them into AI/ML features that drive the value of the product.
- Work closely with product managers and architects to understand requirements, present options, and design solutions.
- Work closely with customers and partners to analyze time-series data and suggest the right approaches to drive adoption.
- Analyze and clearly communicate, both verbally and in written form, the status of projects or issues, along with risks and options, to stakeholders.

To ensure you’re set up for success, you will bring the following skillset & experience:
- You have 8+ years of hands-on experience in data science or machine learning roles.
- You have experience working with sensor data, time-series analysis, predictive maintenance, anomaly detection, or similar IoT-specific domains.
- You have a strong understanding of the entire ML lifecycle: data collection, preprocessing, model training, deployment, monitoring, and continuous improvement.
- You have proven experience designing and deploying AI/ML models in real-world IoT or edge computing environments.
- You have strong knowledge of machine learning frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost).

Whilst these are nice to have, our team can help you develop the following skills:
- Experience with digital twins, real-time analytics, or streaming data systems.
- Contributions to open-source ML/AI/IoT projects or relevant publications.
- Experience with Agile development methodology and best practices in unit testing.
- Experience with Kubernetes (kubectl, helm) will be an advantage.
- Experience with cloud platforms (AWS, Azure, GCP) and tools for ML deployment (SageMaker, Vertex AI, MLflow, etc.).

Our commitment to you!
BMC’s culture is built around its people. We have 6000+ brilliant minds working together across the globe. You won’t be known just by your employee number, but for your true authentic self. BMC lets you be YOU! If after reading the above you’re unsure whether you meet the qualifications of this role but are deeply excited about BMC and this team, we still encourage you to apply! We want to attract talent from diverse backgrounds and experience to ensure we face the world together with the best ideas!

BMC is committed to equal opportunity employment regardless of race, age, sex, creed, color, religion, citizenship status, sexual orientation, gender, gender expression, gender identity, national origin, disability, marital status, pregnancy, disabled veteran or status as a protected veteran. If you need a reasonable accommodation for any part of the application and hiring process, visit the accommodation request page.
Posted 1 month ago
0.0 - 3.0 years
0 Lacs
Gwalior, Madhya Pradesh
On-site
Job Title: Full Stack AI Engineer (3 Years Experience)
Location: Gwalior
Job Type: Full-time / Permanent
Department: Software Development
Mode: Work from Office (Mandatory)
Salary: 2 LPA - 5.5 LPA

About Us: Synram Software Services Pvt. Ltd., a subsidiary of the renowned FG International GmbH, Germany, is a premier IT solutions provider specializing in ERP systems, e-commerce platforms, mobile applications, and digital marketing. We are committed to delivering tailored solutions that drive success across various industries.

Job Description: We are seeking a highly skilled Full Stack AI Software Engineer with 3 years of experience to join our innovative team. The ideal candidate will have a strong foundation in full stack web development as well as hands-on experience in designing, developing, and deploying AI/ML models. You will play a critical role in building intelligent applications, integrating AI capabilities into end-to-end software solutions, and delivering scalable systems.

Key Responsibilities:
- Design and develop full stack applications using modern frameworks (e.g., MERN/MEAN/Python-Django + React).
- Build and deploy machine learning models into production environments.
- Integrate AI capabilities (e.g., NLP, computer vision, recommendation systems) into web or mobile applications.
- Collaborate with data scientists, product managers, and UI/UX teams to design seamless AI-powered user experiences.
- Manage and optimize backend infrastructure (e.g., REST APIs, GraphQL, databases, cloud services).
- Write reusable, testable, and efficient code in both backend and frontend environments.
- Maintain model pipelines, monitor performance, and retrain models as needed.
- Leverage cloud platforms (AWS, GCP, Azure) for scalable AI and application deployment.
- Implement CI/CD pipelines for continuous integration and delivery.

Required Skills & Qualifications:
- Bachelor’s or Master’s in Computer Science, AI/ML, or a related field.
- 3+ years of experience in full stack development (Node.js, React, Angular, or similar).
- Strong understanding of machine learning fundamentals and frameworks (TensorFlow, PyTorch, Scikit-learn).
- Experience with LLMs (e.g., Mistral, LLaMA, or comparable open-source models).
- LangChain, CrewAI, or similar frameworks for agent orchestration.
- JSON modeling of structured data (job requirements, profiles, etc.).
- REST API design & integration.
- MongoDB or PostgreSQL.
- Understanding of data protection (GDPR) in AI-driven applications.
- Proficient in Python and JavaScript/TypeScript.
- Familiarity with cloud platforms (AWS, GCP, Azure).
- Experience with databases (SQL & NoSQL), Docker, and Git.
- Excellent problem-solving and communication skills.

Nice-to-Have (Extra Weightage):
- Experience with vector databases (e.g., Qdrant, Weaviate).
- Prompt engineering & few-shot learning strategies; Mistral AI models.
- Experience deploying LLMs on AWS GPU instances (EC2).
- Basic understanding of Angular-to-API communication.
- Speech-to-Text (STT) and Text-to-Speech (TTS) processing tools (e.g., OpenAI Whisper, Coqui TTS).

Please prioritize candidates who have previously integrated AI models into operational platforms, especially in HRTech or ERPTech environments.

Preferred Qualifications:
- Experience with LLMs, generative AI, or AI agents.
- Exposure to MLOps tools like MLflow, SageMaker, or Kubeflow.
- Contributions to AI open-source projects or research publications.
- Understanding of data engineering pipelines and ETL processes.

Why Join Us:
- Work on cutting-edge AI solutions with real-world impact.
- Flexible work environment and supportive team culture.
- Opportunities for career growth in a fast-paced, AI-first company.

If you're ready to take on new challenges and join a team that values innovation and creativity, apply now!
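The "JSON modeling of structured data (job requirements, profiles, etc.)" requirement can be sketched as a typed record round-tripped through JSON, the shape an LLM or REST API would consume. The field names below are illustrative assumptions, not a schema from the posting.

```python
import json
from dataclasses import dataclass, asdict, field

@dataclass
class JobRequirement:
    """Hypothetical structured model of a job requirement."""
    title: str
    skills: list = field(default_factory=list)
    min_years: int = 0

def to_json(req: JobRequirement) -> str:
    """Serialize a JobRequirement to a stable JSON string."""
    return json.dumps(asdict(req), sort_keys=True)

def from_json(payload: str) -> JobRequirement:
    """Rebuild a JobRequirement from its JSON form."""
    return JobRequirement(**json.loads(payload))

if __name__ == "__main__":
    req = JobRequirement("Full Stack AI Engineer", ["Python", "React"], 3)
    assert from_json(to_json(req)) == req  # lossless round trip
    print(to_json(req))
```

Keeping the model in one typed definition means the same shape can validate API payloads and constrain LLM output (e.g., as a JSON schema in a prompt).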
Mail us: career@synram.co or call us on +91-9111381555. Last Date: 10/05/2025

Job Types: Full-time, Permanent
Pay: ₹20,747.74 - ₹32,000.70 per month
Benefits: Flexible schedule, health insurance
Schedule: Fixed shift, morning shift, weekend availability
Supplemental Pay: Overtime pay, performance bonus, yearly bonus
Ability to commute/relocate: Gwalior, Madhya Pradesh: Reliably commute or plan to relocate before starting work (Preferred)
Language: English (Preferred)
Work Location: In person
Application Deadline: 10/05/2025
Expected Start Date: 02/05/2025
Posted 2 months ago
0.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
Lead, Software Engineering
Hyderabad, India | Information Technology | Job ID 313260

Job Description
About The Role: Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; it’s the essential code you’ll write that provides the data and analytics required for crucial, daily decisions in the capital and commodities markets.

What’s in it for you:
- Build a career with a global company.
- Work on code that fuels the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
- Solve problems, and analyze and isolate issues.
- Provide technical guidance and mentoring to the team and help them adopt change as new processes are introduced.
- Champion best practices and serve as a subject matter authority.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages, and tools.
- Produce system design documents and lead technical walkthroughs.
- Produce high-quality code.
- Collaborate effectively with technical and non-technical partners.
- As a team member, continuously improve the architecture.

Basic Qualifications:
- 9-12 years of experience designing/building data-intensive solutions using distributed computing.
- Proven experience implementing and maintaining enterprise search solutions in large-scale environments.
- Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient programming skills in high-level languages: Java, Scala, Python.
- Solid knowledge of at least one machine learning research framework.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- 5+ years’ experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
- 4+ years’ experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow.
- Prior experience operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions.
- Good to have: experience contributing to GitHub and open-source initiatives, to research projects, and/or participation in Kaggle competitions.
- Ability to quickly, efficiently, and effectively define and prototype solutions with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
- Search technologies: query and indexing content for Apache Solr, Elasticsearch, etc.
- Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval.
- Experience with machine learning models and NLP techniques for search relevance and ranking.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Experience with relevance tuning using A/B testing frameworks.
- Big data technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
- Data science search technologies: personalization and recommendation models, Learning to Rank (LTR).
- Preferred languages: Python, Java.
- Database technologies: MS SQL Server platform; stored procedure programming experience using Transact-SQL.
- Ability to lead, train, and mentor.
About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction. For more information, visit www.spglobal.com/marketintelligence.

What’s In It For You?
Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology — the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence®, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People: We're more than 35,000 strong worldwide, so we're able to understand nuances while having a broad perspective. Our team is driven by curiosity and a shared belief that Essential Intelligence can help build a more prosperous future for us all: from finding new ways to measure sustainability, to analyzing energy transition across the supply chain, to building workflow solutions that make it easy to tap into insight and apply it. We are changing the way people see things and empowering them to make an impact on the world we live in. We’re committed to a more equitable future and to helping our customers find new, sustainable ways of doing business. We’re constantly seeking new solutions that have progress in mind. Join us and help create the critical insights that truly make a difference.

Our Values: Integrity, Discovery, Partnership. At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits: We take care of you, so you can take care of business. We care about our people. That’s why we provide everything you — and your career — need to thrive at S&P Global. Our benefits include:
- Health & Wellness: Health care coverage designed for the mind and body.
- Flexible Downtime: Generous time off helps keep you energized for your time on.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It’s not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit: https://spgbenefits.com/benefit-summaries

Inclusive Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering an inclusive workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and equal opportunity, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Equal Opportunity Employer: S&P Global is an equal opportunity employer, and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person.

US Candidates Only: The EEO is the Law Poster (http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf) describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision: https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf

20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority – Ratings - (Strategic Workforce Planning)

Job ID: 313260 Posted On: 2025-04-28 Location: Hyderabad, Telangana, India
Posted 2 months ago
0.0 - 12.0 years
0 Lacs
Hyderabad, Telangana
On-site
Lead, Software Engineering
Hyderabad, India | Information Technology | 313257

Job Description

About The Role: Grade Level (for internal use): 11

The Team: Our team is responsible for the design, architecture, and development of our client-facing applications, using a variety of tools that are regularly updated as new technologies emerge. You will have the opportunity every day to work with people from a wide variety of backgrounds and will be able to develop a close team dynamic with coworkers from around the globe.

The Impact: The work you do will be used every single day; the essential code you write provides the data and analytics required for crucial daily decisions in the capital and commodities markets.

What's in it for you:
- Build a career with a global company.
- Work on code that fuels the global financial markets.
- Grow and improve your skills by working on enterprise-level products and new technologies.

Responsibilities:
- Solve problems; analyze and isolate issues.
- Provide technical guidance and mentoring to the team, and help them adopt change as new processes are introduced.
- Champion best practices and serve as a subject matter authority.
- Develop solutions to support key business needs.
- Engineer components and common services based on standard development models, languages, and tools.
- Produce system design documents and lead technical walkthroughs.
- Produce high-quality code.
- Collaborate effectively with technical and non-technical partners.
- As a team member, continuously improve the architecture.

Basic Qualifications:
- 10-12 years of experience designing/building data-intensive solutions using distributed computing.
- Proven experience implementing and maintaining enterprise search solutions in large-scale environments.
- Experience working with business stakeholders and users, providing research direction and solution design, and writing robust, maintainable architectures and APIs.
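The enterprise-search qualifications above center on indexing and retrieval. As a minimal, self-contained illustration of those concepts (a toy sketch, not S&P Global's actual stack; the document IDs and text are invented for the example), here is an in-memory inverted index with AND-semantics lookup in Python:

```python
from collections import defaultdict

# Toy corpus; IDs and text are hypothetical, for illustration only.
DOCS = {
    1: "credit risk analytics for capital markets",
    2: "commodities market data and analytics",
    3: "search relevance ranking with machine learning",
}

def build_index(docs):
    """Map each term to the set of document IDs containing it."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def search(index, query):
    """AND semantics: return IDs of docs containing every query term."""
    term_sets = [index.get(term, set()) for term in query.lower().split()]
    return sorted(set.intersection(*term_sets)) if term_sets else []

index = build_index(DOCS)
print(search(index, "market data"))  # [2]
```

Production engines such as Apache Solr and Elasticsearch build on the same inverted-index idea, adding tokenization/analysis pipelines, relevance scoring, and distributed sharding on top.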
- Experience developing and deploying search solutions in a public cloud such as AWS.
- Proficient programming skills in high-level languages such as Java, Scala, and Python.
- Solid knowledge of at least one machine learning research framework.
- Familiarity with containerization, scripting, cloud platforms, and CI/CD.
- 5+ years' experience with Python, Java, Kubernetes, and data and workflow orchestration tools.
- 4+ years' experience with Elasticsearch, SQL, NoSQL, Apache Spark, Flink, Databricks, and MLflow.
- Prior experience operationalizing data-driven pipelines for large-scale batch and stream processing analytics solutions.
- Good to have: experience contributing to GitHub and open-source initiatives or research projects, and/or participation in Kaggle competitions.
- Ability to define and prototype solutions quickly, efficiently, and effectively, with continual iteration within aggressive product deadlines.
- Strong communication and documentation skills for both technical and non-technical audiences.

Preferred Qualifications:
- Search Technologies: querying and indexing content with Apache Solr, Elasticsearch, etc. Proficiency in search query languages (e.g., Lucene query syntax) and experience with data indexing and retrieval.
- Experience with machine learning models and NLP techniques for search relevance and ranking.
- Familiarity with vector search techniques and embedding models (e.g., BERT, Word2Vec).
- Experience with relevance tuning using A/B testing frameworks.
- Big Data Technologies: Apache Spark, Spark SQL, Hadoop, Hive, Airflow.
- Data Science Search Technologies: personalization and recommendation models, Learning to Rank (LTR).
- Preferred Languages: Python, Java.
- Database Technologies: MS SQL Server platform; stored procedure programming experience using Transact-SQL.
- Ability to lead, train, and mentor.
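The preferred qualifications mention vector search with embedding models. As a hedged sketch of the core idea (the three-dimensional "embeddings" and document names below are invented stand-ins; real systems use model-generated vectors, e.g. from BERT or Word2Vec, and approximate nearest-neighbor indexes), here is exact cosine-similarity ranking in pure Python:

```python
import math

# Tiny stand-in embeddings; in practice these come from an embedding model.
DOC_VECTORS = {
    "doc_credit_risk": [0.9, 0.1, 0.0],
    "doc_commodities": [0.1, 0.8, 0.3],
    "doc_esg_scores":  [0.0, 0.2, 0.9],
}

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def vector_search(query_vec, k=2):
    """Rank all documents by cosine similarity and return the top k IDs."""
    scored = [(cosine(query_vec, vec), doc_id)
              for doc_id, vec in DOC_VECTORS.items()]
    scored.sort(reverse=True)
    return [doc_id for _, doc_id in scored[:k]]

print(vector_search([1.0, 0.0, 0.1]))
```

At production scale, the brute-force scan here is replaced by an approximate nearest-neighbor structure (e.g. HNSW, as used by Elasticsearch's dense-vector search), trading a little recall for large speedups.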
Recruitment Fraud Alert: If you receive an email from a spglobalind.com domain or any other regionally based domain, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, "pre-employment training" or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here.

Job ID: 313257 Posted On: 2025-06-17 Location: Hyderabad, Telangana, India