0.0 - 3.0 years
4 - 8 Lacs
Surat
Work from Office
Gatisofttech is offering a golden opportunity for postgraduate students specializing in Artificial Intelligence and Machine Learning to gain hands-on industry experience through a well-structured internship program. As an intern, you will collaborate with our core development and innovation team on real-world projects involving AI, ML, and data-driven solutions.

Key Responsibilities
- Work on machine learning model development, testing, and evaluation.
- Assist in data collection, preprocessing, and feature engineering.
- Apply statistical techniques and machine learning algorithms to solve business problems.
- Collaborate with software engineers to integrate AI models into web/mobile applications.
- Support the building of automated tools, chatbots, and predictive analytics.
- Participate in team discussions, R&D activities, and documentation of findings.

Who Can Apply
- Postgraduate students (MSc/MTech/MCA/ME) in their final semester, pursuing a specialization in AI/ML, Data Science, or Computer Science.
- Strong understanding of machine learning algorithms, Python, and libraries like Pandas, NumPy, scikit-learn, TensorFlow, or PyTorch.
- Familiarity with data visualization, statistics, and model evaluation metrics.
- Knowledge of SQL/MySQL and basic web technologies is a plus.
- Good communication skills and a willingness to learn and explore real-world AI applications.

What You'll Gain
- Opportunity to work on industry-grade projects and AI solutions.
- Exposure to real-time challenges, datasets, and deployment strategies.
- Mentorship from experienced developers and project leads.
- Certificate of completion and a possible job offer based on performance.
- A collaborative and supportive work environment to grow your skills.

Why Gatisofttech
- Work with an innovation-driven team.
- Friendly work culture and career-building exposure.
- Festival celebrations, outings, and other activities.
- Convenient work location in Nanpura, Surat.
Posted 1 month ago
1.0 - 6.0 years
15 - 20 Lacs
Hyderabad
Work from Office
Job Overview
Are you a knowledgeable ML/AI professional or instructor with a creative flair and a passion for content design? Join us to create impactful, learner-centered content that blends real-world expertise with engaging storytelling. As a Content Contributor, you will work with the curriculum team to create engaging, instructionally sound learning experiences in the Machine Learning and AI domain. Working with subject matter experts (SMEs), you'll translate complex frameworks into clear, outcome-focused content across digital formats. This role demands instructional design expertise, a deep understanding of learner needs, and the ability to creatively script and plan high-impact learning assets, from video courses to assessments.

Job Responsibilities
- Design creative and effective learning experiences grounded in instructional design principles, addressing diverse learner personas and real-world scenarios.
- Author and script engaging digital content, including on-demand videos, interactive walkthroughs/lessons, assessments, and job aids.
- Collaborate with visual designers, editors, and technical experts to bring content to life in a compelling and accessible format.
- Utilize generative AI tools to accelerate and enhance content ideation, scripting, and personalization.
- Ensure instructional consistency, voice, and quality across all course deliverables and formats.

Skills Required
- Excellent scripting, writing, and communication skills; able to distil complex concepts into concise, engaging narratives.
- Strong creativity and storytelling ability, with an understanding of how to structure content for different learning styles.
- Fluency with and experience in programming languages such as Python and SQL.
- Fluency and experience with AI/ML libraries such as NumPy, Pandas, scikit-learn, Hugging Face, and LangChain.
- Experience working with AI/ML technology and topics such as agents, LLMs, OpenAI, Claude, Gemini, Copilot, and deep learning.
Preferred/Additional Skills:
- Relevant certifications in AI.
- Familiarity with generative AI tools like ChatGPT, Claude, or similar for content creation and enhancement.
- Understanding of instructional design models such as ADDIE, SAM, or Bloom's Taxonomy.
Posted 1 month ago
2.0 - 6.0 years
4 - 8 Lacs
Mumbai, Bengaluru, Delhi / NCR
Work from Office
Expected Notice Period: 15 days
Shift: (GMT+01:00) Europe/London (BST)
Opportunity Type: Remote
Placement Type: Full-time contract for 6 months (40 hrs a week / 160 hrs a month)
(Note: This is a requirement for one of Uplers' clients - UK's leading AgriTech company.)

What do you need for this opportunity?
Must-have skills: AgriTech industry, Large Language Models, NVIDIA Jetson, Raspberry Pi, Blender, Computer Vision, OpenCV, Python, PyTorch/TensorFlow, segmentation, extraction, regression

UK's leading AgriTech company is looking for:
Location: Remote
Type: 6-month contract
Experience Level: 3-5 years
Industry: Agritech | Sustainability | AI for Renewables

About Us
We're an AI-first company transforming the renewable and sustainable agriculture space. Our mission is to harness advanced computer vision and machine learning to enable smart, data-driven decisions in the livestock and agricultural ecosystem. We focus on practical applications such as automated weight estimation of cattle, livestock monitoring, and resource optimization to drive a more sustainable food system.

Role Overview
We are hiring a Computer Vision Engineer to develop intelligent image-based systems for livestock management, focusing on cattle weight estimation from images and video feeds. You will be responsible for building scalable vision pipelines, working with deep learning models, and bringing AI to production in real-world farm settings.

Key Responsibilities
- Design and develop vision-based models to predict cattle weight from 2D/3D images, video, or depth data.
- Build image acquisition and preprocessing pipelines using multi-angle camera data.
- Implement classical and deep-learning-based feature extraction techniques (e.g., body measurements, volume estimation).
- Conduct camera calibration, multi-view geometry analysis, and photogrammetry for size inference.
- Apply deep learning architectures (e.g., CNNs, ResNet, U-Net, Mask R-CNN) for object detection, segmentation, and keypoint localization.
- Build 3D reconstruction pipelines using stereo imaging, depth sensors, or photogrammetry.
- Optimize and deploy models for edge devices (e.g., NVIDIA Jetson) or cloud environments.
- Collaborate with data scientists and product teams to analyze livestock datasets, refine prediction models, and validate outputs.
- Develop tools for automated annotation, model training pipelines, and continuous performance tracking.

Required Qualifications & Skills
- Computer vision: object detection, keypoint estimation, semantic/instance segmentation, stereo imaging, and structure-from-motion.
- Weight estimation techniques: experience in livestock monitoring, body condition scoring, and volumetric analysis from images/videos.
- Image processing: noise reduction, image normalization, contour extraction, 3D reconstruction, and camera calibration.
- Data analysis & modeling: statistical modeling, regression techniques, and feature engineering for biological data.

Technical Stack
- Programming languages: Python (mandatory)
- Libraries & frameworks: OpenCV, PyTorch, TensorFlow, Keras, scikit-learn
- 3D processing: Open3D, PCL (Point Cloud Library), Blender (optional)
- Data handling: NumPy, Pandas, DVC
- Annotation tools: LabelImg, CVAT, Roboflow
- Cloud & DevOps: AWS/GCP, Docker, Git, CI/CD pipelines
- Deployment tools: ONNX, TensorRT, FastAPI, Flask (for model serving)

Preferred Qualifications
- Prior experience working in agritech, animal husbandry, or precision livestock farming.
- Familiarity with Large Language Models (LLMs) and integrating vision + language models for domain-specific insights.
- Knowledge of edge computing for on-farm device deployment (e.g., NVIDIA Jetson, Raspberry Pi).
- Contributions to open-source computer vision projects or relevant publications in CVPR, ECCV, or similar conferences.

Soft Skills
- Strong problem-solving and critical thinking skills
- Clear communication and documentation practices
- Ability to work independently and collaborate in a remote, cross-functional team

Why Join Us?
- Work at the intersection of AI and sustainability
- Be part of a dynamic and mission-driven team
- Opportunity to lead innovation in an emerging field of agritech
- Flexible remote work environment
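To give a feel for the core task the role describes (segment the animal, measure its projected body area, regress to a weight), here is a deliberately tiny, standard-library-only sketch. The synthetic elliptical mask stands in for a real segmentation output, and the pixel scale and regression coefficients are invented for illustration; a production pipeline would use OpenCV/PyTorch models and calibrated cameras as listed in the stack above.

```python
import math

# Illustrative sketch only: a synthetic side-view segmentation mask (an ellipse
# standing in for a cow's body profile) and a toy area-to-weight linear model.
# The pixel scale and regression coefficients are hypothetical.

def make_side_view_mask(h=240, w=320, a=120, b=60):
    """Boolean mask of an axis-aligned ellipse centered in an h x w frame."""
    cy, cx = h // 2, w // 2
    return [[((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0
             for x in range(w)] for y in range(h)]

def estimate_weight_kg(area_px, cm_per_px=0.5, coef=0.08, intercept=40.0):
    """Linear model from projected body area (cm^2) to weight (kg)."""
    area_cm2 = area_px * cm_per_px ** 2  # pixel area -> physical area via camera scale
    return coef * area_cm2 + intercept

mask = make_side_view_mask()
area_px = sum(sum(row) for row in mask)  # True counts as 1 pixel inside the body
weight = estimate_weight_kg(area_px)
```

The pixel count of the ellipse tracks its analytic area (pi * a * b), which is the property a real volumetric or body-measurement feature would be built on.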
Posted 1 month ago
5.0 - 10.0 years
15 - 25 Lacs
Hyderabad
Work from Office
Job Description:
We are looking for an experienced Python senior developer to create dynamic software applications for our clients. To be successful in this role, the candidate is expected to have good knowledge of Python programming, object mapping, and the implementation of data-intensive server-side logic.

Responsibilities:
- Coordinate with business stakeholders to determine application requirements
- Maintain and update existing applications; troubleshoot and debug complex applications
- Design and implement effective, scalable, clean code
- Develop server-side backend components
- Adhere to engineering best practices in development and take ownership of undertaken tasks and features

Qualifications & Desired Skills:
- Professional software development experience as a Python developer
- Expert knowledge of object-oriented concepts; working knowledge of SDLC best practices such as Agile, Git version control, and continuous integration
- Proficient in NumPy, Pandas, and SQL; exceptional SQL skills with Oracle RDBMS experience are essential
- Working knowledge of AWS cloud; Java experience is a nice-to-have
- Bachelor's degree in engineering
Posted 1 month ago
2.0 - 7.0 years
4 - 8 Lacs
Vadodara
Work from Office
The Opportunity:
We are seeking a skilled Python full stack developer with hands-on experience in CCTV monitoring, surveillance tools, and face recognition systems. The ideal candidate will have experience integrating Python-based machine learning libraries into frontend systems and is comfortable working with FastAPI for backend interaction.

Accountability
As a part of our dynamic development team, you will contribute to designing and developing cutting-edge applications that deliver seamless user experiences, while also leveraging cloud infrastructure and machine learning capabilities.
- Develop and maintain frontend components for CCTV monitoring and surveillance dashboards.
- Integrate face recognition features using Python-based libraries into frontend views.
- Collaborate with backend engineers working on FastAPI to build seamless, responsive interfaces.
- Visualize real-time video feeds, alert systems, and facial recognition results effectively for users.
- Optimize frontend performance for real-time surveillance applications.
- Work closely with internal teams to understand business requirements, and develop, maintain, and enhance applications as per business needs.
- Establish a data-driven culture across the organization.

Scope
- Work closely with cross-functional teams, including backend developers, data scientists, product managers, and designers, to deliver high-quality products.
- Participate in code reviews, debugging, and performance optimization.
- Contribute to continuous improvement of the development lifecycle, implementing best practices for both frontend development and cloud-based systems.
- Stay updated with the latest trends in machine learning and cloud technologies.

Outcomes
- Design and develop responsive and interactive web and mobile applications (iOS and Android).
- Ensure high performance, cross-browser compatibility, and a consistent user experience.
- Write clean, maintainable, and efficient code using frontend technologies (HTML5, CSS3, JavaScript, TypeScript, React, React Native, Angular, or Vue.js).
- Collaborate closely with UI/UX designers to create visually appealing and user-friendly interfaces.
- Optimize applications for maximum speed and scalability across multiple platforms.

Qualifications:
Education: Bachelor's degree in Computer Science or a related field.
Experience: Working with CCTV monitoring and surveillance platforms.
Skills & Competencies:
- Familiarity with face recognition libraries (e.g., face_recognition, OpenCV, CMake, dlib) and how to interface them with frontend applications.
- Experience working with FastAPI or similar Python web frameworks.
- Experience with video streaming protocols (e.g., RTSP, WebRTC).
- Strong knowledge of frontend frameworks such as React.js, Vue.js, or Angular.
- Background in security or smart surveillance systems.
- Experience with Python-based machine learning libraries (e.g., TensorFlow, Keras, PyTorch, scikit-learn, Pandas, NumPy, dlib, face_recognition) to build and integrate machine learning models into applications.
- UI/UX design sensibility for control panels and dashboards.

Personal Attributes:
- An analytical mindset with the ability to see the big picture while diving deep into the details.
- Strong leadership and people-management skills with a passion for mentoring.
- Results-driven, with a focus on measurable outcomes and long-term growth.
- Curiosity and a commitment to staying up to date with industry trends and data analytics best practices.
Posted 1 month ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad
Work from Office
- Python, with hands-on experience performing ETL and applying data engineering concepts (PySpark, NumPy, Pandas, AWS Glue, and Airflow)
- SQL, exclusively Oracle: hands-on work experience, including SQL profilers / query analyzers
- AWS cloud services (S3, RDS, Redshift)
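The role centers on ETL: extract records from a feed, validate and transform them, and load them into a SQL target. At scale this would be PySpark or AWS Glue jobs orchestrated by Airflow; the toy sketch below shows the same extract-transform-load shape using only the standard library, with an invented CSV feed and table schema.

```python
import csv
import io
import sqlite3

# Toy ETL pass, standard library only. The feed, table, and column names are
# invented for illustration; a real pipeline would target Oracle/Redshift.
raw = "order_id,amount\n1,100.5\n2,not_a_number\n3,250.0\n"

# Extract: parse CSV rows from the raw feed.
rows = list(csv.DictReader(io.StringIO(raw)))

# Transform: cast fields, dropping rows that fail validation.
def clean(row):
    try:
        return int(row["order_id"]), float(row["amount"])
    except ValueError:
        return None  # e.g. the "not_a_number" record is rejected

cleaned = [r for r in (clean(row) for row in rows) if r is not None]

# Load: write validated rows into a SQL target (in-memory here) and verify.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (order_id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO orders VALUES (?, ?)", cleaned)
total = con.execute("SELECT SUM(amount) FROM orders").fetchone()[0]
```

The validate-then-load step mirrors what a profiler or query analyzer would later confirm on the target database.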
Posted 1 month ago
3.0 - 5.0 years
4 - 8 Lacs
Chennai
Work from Office
Job Title: QA Engineer

About Straive:
Straive is a trusted leader in building and operationalizing enterprise data and AI solutions for top global brands and corporations. Our key differentiators include extensive domain expertise across multiple industry verticals, coupled with cutting-edge data analytics and AI capabilities that drive measurable ROI for our clients. With a global presence spanning 30+ countries and a team of 18,000+ professionals, we integrate industry knowledge with scalable technology solutions and AI accelerators to deliver data-driven solutions to the world's leading organizations. Our headquarters are based in Singapore, with offices across the U.S., UK, Canada, India, the Philippines, Vietnam, and Nicaragua. Website: https://www.straive.com/

Objective:
We are looking for a proactive and detail-oriented QA Engineer to join our team for a data-intensive backend automation project. This role is centered around backend data quality checks, ensuring accuracy between document inputs and system-generated outputs using automation and AI technologies.

Education & Experience:
- Bachelor's degree in Computer Science, Information Technology, or a related field
- 3 to 4 years' total IT experience (minimum 2 years in automation testing)

Knowledge, Skills and Abilities:
- Strong experience in Python, with a focus on data handling using Pandas.
- Familiarity with Selenium (basic UI automation).
- Hands-on with RegEx for text extraction and validation.
- Exposure to LLMs (Large Language Models) like GPT, and prompt engineering for data extraction tasks.
- Expert in QA automation, with a strong focus on integration testing and end-to-end validation.
- Proficient in API testing using Postman.
- Skilled in writing SQL queries for SQL Server (SSMS) and MySQL.
- Working knowledge of Maven, Jenkins, and AWS in a CI/CD environment.
- Experienced in using JIRA for test case management and defect tracking.
- Strong functional testing skills, including test planning, execution, and regression testing.
- Proficient in using GitHub for version control and team collaboration.
- Excellent communication skills and the ability to work with cross-functional teams across time zones (India & USA).
- Well-versed in STLC, BLC, and Agile/Scrum methodologies.
- Proactive in reporting status, raising issues, and working in dynamic environments.
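The core of the project, checking document inputs against system-generated outputs with RegEx, can be sketched in a few lines. The invoice format, field names, and patterns below are hypothetical, purely to show the extract-and-compare shape such a backend data-quality check takes.

```python
import re

# Minimal regex-based field check between a source document and a
# system-generated record. The invoice layout and field names are invented.
document_text = "Invoice No: INV-2024-0042  Total Due: Rs. 12,500.00"
system_record = {"invoice_no": "INV-2024-0042", "total": 12500.00}

def extract_fields(text):
    """Pull the fields under test out of the raw document text."""
    inv = re.search(r"Invoice No:\s*(INV-\d{4}-\d{4})", text)
    amt = re.search(r"Total Due:\s*Rs\.\s*([\d,]+\.\d{2})", text)
    return {
        "invoice_no": inv.group(1) if inv else None,
        "total": float(amt.group(1).replace(",", "")) if amt else None,
    }

extracted = extract_fields(document_text)

# Any key where the extracted value disagrees with the system output is a defect.
mismatches = {k: (extracted[k], system_record[k])
              for k in system_record if extracted[k] != system_record[k]}
```

In an automation suite, each non-empty `mismatches` dict would become a failed assertion and a JIRA defect.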
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Hyderabad, Bengaluru
Work from Office
Job Summary
Synechron is seeking an experienced big data developer with strong expertise in Spark, Scala, and Python to lead and contribute to large-scale data projects. The role involves designing, developing, and implementing robust data solutions that leverage emerging technologies to enhance business insights and operational efficiency. The successful candidate will play a key role in driving innovation, mentoring team members, and ensuring the delivery of high-quality data products aligned with organizational objectives.

Software Requirements
Required:
- Apache Spark (latest stable version)
- Scala (version 2.12 or higher)
- Python (version 3.6 or higher)
- Big data tools and frameworks supporting Spark and Scala
Preferred:
- Cloud platforms such as AWS, Azure, or GCP for data deployment
- Data processing or orchestration tools like Kafka, Hadoop, or Airflow
- Data visualization tools for data insights

Overall Responsibilities
- Lead the development and implementation of data pipelines and solutions using Spark, Scala, and Python
- Collaborate with business and technology teams to understand data requirements and translate them into scalable solutions
- Mentor and guide junior team members on best practices in big data development
- Evaluate and recommend new technologies and tools to improve data processing and quality
- Stay informed about industry trends and emerging technologies relevant to big data and analytics
- Ensure timely delivery of data projects with high standards of quality, performance, and security
- Lead technical reviews and code reviews, and provide input to improve overall development standards and practices
- Contribute to architecture design discussions and assist in establishing data governance standards

Technical Skills (by Category)
Programming Languages:
- Essential: Spark (Scala), Python
- Preferred: Knowledge of Java or other JVM languages
Data Management & Databases:
- Experience with distributed data storage solutions (HDFS, S3, etc.)
- Familiarity with NoSQL databases (e.g., Cassandra, HBase) and relational databases for data integration
Cloud Technologies:
- Preferred: Cloud platforms (AWS, Azure, GCP) for data processing, storage, and deployment
Frameworks & Libraries:
- Spark MLlib, Spark SQL, Spark Streaming
- Data processing libraries in Python (Pandas, PySpark)
Development Tools & Methodologies:
- Version control (Git, Bitbucket)
- Agile methodologies (Scrum, Kanban)
- Data pipeline orchestration tools (Apache Airflow, NiFi)
Security & Compliance:
- Understanding of data security best practices and data privacy regulations

Experience Requirements
- 5 to 10 years of hands-on experience in big data development and architecture
- Proven experience in designing and developing large-scale data pipelines using Spark, Scala, and Python
- Demonstrated ability to lead technical projects and mentor team members
- Experience working with cross-functional teams including data analysts, data scientists, and business stakeholders
- Track record of delivering scalable, efficient, and secure data solutions in complex environments

Day-to-Day Activities
- Develop, test, and optimize scalable data pipelines using Spark, Scala, and Python
- Collaborate with data engineers, analysts, and stakeholders to gather requirements and translate them into technical solutions
- Lead code reviews, mentor junior team members, and enforce coding standards
- Participate in architecture design and recommend best practices in big data development
- Monitor data workflow performance and troubleshoot issues to ensure data quality and reliability
- Stay updated with industry trends and evaluate new tools and frameworks for potential implementation
- Document technical designs, data flows, and implementation procedures
- Contribute to continuous improvement initiatives to optimize data processing workflows

Qualifications
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field
- Relevant certifications in cloud platforms, big data, or programming languages are advantageous
- Continuous learning on innovative data technologies and frameworks

Professional Competencies
- Strong analytical and problem-solving skills with a focus on scalable data solutions
- Leadership qualities with the ability to guide and mentor team members
- Excellent communication skills to articulate technical concepts to diverse audiences
- Ability to work collaboratively in cross-functional teams and fast-paced environments
- Adaptability to evolving technologies and industry trends
- Strong organizational skills for managing multiple projects and priorities
Posted 1 month ago
1.0 - 6.0 years
8 - 12 Lacs
Chennai
Hybrid
- Minimum 1-3 years of experience in data science, NLP, and Python
- Experience with PyTorch, scikit-learn, and NLP libraries (NLTK, SpaCy, Hugging Face)
- Help deploy AI/ML solutions on AWS, GCP, or Azure
- Experience in SQL for data manipulation and analysis
- Experience in big data processing: Spark, Pandas, Dask
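As a flavor of the NLP fundamentals this role leans on, here is a standard-library-only toy: bag-of-words vectors and cosine similarity, the primitive that libraries like scikit-learn and the NLP toolkits above wrap in richer form (tokenizers, vectorizers, embeddings). The example sentences are invented.

```python
import math
import re
from collections import Counter

# Stdlib-only sketch of bag-of-words text similarity; real pipelines would use
# NLTK/SpaCy tokenizers or transformer embeddings instead of this toy tokenizer.

def tokens(text):
    """Lowercase word tokenizer (a crude stand-in for a real one)."""
    return re.findall(r"[a-z']+", text.lower())

def cosine(a, b):
    """Cosine similarity between the word-count vectors of two strings."""
    ca, cb = Counter(tokens(a)), Counter(tokens(b))
    dot = sum(ca[t] * cb[t] for t in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

sim_close = cosine("deploy the ml model", "deploy the trained ml model")
sim_far = cosine("deploy the ml model", "quarterly revenue report")
```

Sentences sharing vocabulary score near 1, unrelated ones near 0, which is the basic signal behind retrieval and deduplication tasks.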
Posted 1 month ago
6.0 - 11.0 years
8 - 13 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role:
Grade Level (for internal use): 11

The Team
As a member of the EDO, Collection Platforms & AI Cognitive Engineering team, you will work on building GenAI-driven and ML-powered products and capabilities to power natural language understanding, data extraction, information retrieval, and data sourcing solutions for S&P Global. You will define AI strategy, mentor others, and drive production-ready AI products and pipelines while leading by example in a highly engaging work environment. You will work in a (truly) global team and be encouraged toward thoughtful risk-taking and self-initiative.

What's in it for you:
- Be a part of a global company and build solutions at enterprise scale
- Lead and grow a highly skilled, hands-on technical team (including mentoring junior data scientists)
- Contribute to solving high-complexity, high-impact problems end-to-end
- Architect and oversee production-ready pipelines from ideation to deployment

Responsibilities:
- Define the AI roadmap, tooling choices, and best practices for model building, prompt engineering, fine-tuning, and vector retrieval systems
- Architect, develop, and deploy large-scale ML and GenAI-powered products and pipelines
- Own all stages of the data science project lifecycle, including: identification and scoping of high-value data science and AI opportunities; partnering with business leaders, domain experts, and end-users to gather requirements and align on success metrics; evaluation, interpretation, and communication of results to executive stakeholders
- Lead exploratory data analysis, proofs of concept, model benchmarking, and validation experiments for both ML and GenAI approaches
- Establish and enforce coding standards, perform code reviews, and optimize data science workflows
- Drive deployment, monitoring, and scaling strategies for models in production (including both ML and GenAI services)
- Mentor and guide junior data scientists; foster a culture of continuous learning and innovation
- Manage stakeholders across functions to ensure alignment and timely delivery

Technical:
- Hands-on experience with large language models (e.g., OpenAI, Anthropic, Llama), prompt engineering, fine-tuning/customization, and embedding-based retrieval
- Expert proficiency in Python (NumPy, Pandas, SpaCy, scikit-learn, PyTorch/TF 2, Hugging Face Transformers)
- Deep understanding of ML and deep learning models, including architectures for NLP (e.g., transformers), GNNs, and multimodal systems
- Strong grasp of statistics, probability, and the mathematics underpinning modern AI
- Ability to track and synthesize current AI/ML research, with a track record of applying new methods in production
- Proven experience on at least one end-to-end GenAI or advanced NLP project: custom NER, table extraction via LLMs, Q&A systems, summarization pipelines, OCR integrations, or GNN solutions
- Familiarity with orchestration and deployment tools: Docker, Airflow, Kubernetes, Redis, Flask/Django/FastAPI, PySpark, SQL, R-Shiny/Dash/Streamlit
- Openness to evaluating and adopting emerging technologies and programming languages as needed

Good to have:
- Master's or Ph.D. in Computer Science, Statistics, Mathematics, or a related field (minimum Bachelor's)
- 6+ years of relevant experience in Data Science/AI, with at least 2 years in a leadership or technical lead role
- Prior experience in the economics/financial industry, especially with market-intelligence or risk analytics products
- Public contributions or demos on GitHub, Kaggle, StackOverflow, technical blogs, or publications

What's In It For You
Our Purpose:
Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technology: the right combination can unlock possibility and change the world. Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow.
At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

Our People:
Our Values: Integrity, Discovery, Partnership
At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

Benefits:
We take care of you, so you can take care of business. We care about our people. That's why we provide everything you, and your career, need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills.
- Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
- Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference.
For more information on benefits by country visit https://spgbenefits.com/benefit-summaries

Global Hiring and Opportunity at S&P Global:
At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets.
Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- 20 - Professional (EEO-2 Job Categories-United States of America), IFTECH202.2 - Middle Professional Tier II (EEO Job Group), SWP Priority Ratings - (Strategic Workforce Planning)
Posted 1 month ago
8.0 - 13.0 years
10 - 15 Lacs
Hyderabad, Ahmedabad, Gurugram
Work from Office
About the Role:
Grade Level (for internal use): 12

The Team
As a leader in the EDO, Collection Platforms & AI Cognitive Engineering team, you will own the vision and delivery of enterprise-scale Data Science and GenAI solutions that power natural language understanding, data extraction, information retrieval, data sourcing, and speech-to-text services for S&P Global. You will define the data science strategy, champion best practices, mentor across functions, and drive production-ready AI/ML products from ideation through deployment. You'll work in a (truly) global team that values thoughtful risk-taking and self-initiative.

What's in it for you:
- Lead data science strategy and build solutions at enterprise scale
- Grow and mentor a world-class team of data scientists and ML engineers
- Solve high-complexity, high-impact problems end-to-end
- Define roadmaps for emerging AI capabilities (GenAI, ASR, speaker analytics)

Responsibilities:
- Shape and execute the data science roadmap: tooling, methodologies, and best practices for ML, GenAI, and ASR
- Architect, develop, and deploy large-scale ML and GenAI pipelines, including Voice Activity Detection (VAD), speaker diarization, and ASR models, for automated transcription services
- Own the full data science lifecycle: opportunity identification, requirement gathering, modeling, evaluation, validation, deployment, monitoring, and optimization
- Lead exploratory data analysis, proofs of concept, benchmarking, and production model validation for both text- and speech-based AI solutions
- Establish and enforce coding standards, perform technical reviews, and optimize workflows for reproducibility and scalability
- Drive MLOps practices: CI/CD for models, feature stores, monitoring, alerting, and automated rollback strategies
- Mentor and develop junior and mid-level data scientists; foster a culture of continuous learning, innovation, and collaboration
- Partner with cross-functional stakeholders (Engineering, Product, IT, Compliance) to align on project
goals, timelines, and SLAs Technical : 8+ years of hands-on experience in Data Science/AI, with at least 3 years in a senior or leadership role with strong hands on skills. Proven expertise in developing and deploying ASR systems: Training or fine-tuning end-to-end ASR models (e.g., Whisper, QuartzNet) Designing VAD pipelines for robust speech segmentation Implementing Speaker Diarization (e.g., Pyannote, UISR) and handling multi-speaker audio Optimizing transcription accuracy across accents, languages, and noisy environments Deep knowledge of large language models (OpenAI, Anthropic, Llama), prompt engineering, fine-tuning, and embedding-based retrieval Expert proficiency in Python (NumPy, Pandas, SpaCy, scikit-learn, PyTorch/TF 2, Hugging Face Transformers) Strong understanding of ML & deep learning architecturesNLP transformers, GNNs, multimodal systems Solid grasp of statistics, probability, and the mathematics underpinning modern AI Track record of synthesizing cutting-edge research into production solutions Experience with orchestration and deployment toolsDocker, Kubernetes, Airflow, Redis, Flask/Django/FastAPI, PySpark, SQL, and cloud services (AWS/GCP/Azure) Openness to evaluate and adopt emerging technologies and languages as needed Good to have: Master's or Ph.D. in Computer Science, Electrical Engineering, Statistics, Mathematics, or related field Prior experience in the Economics/Financial industry, especially market-intelligence or risk analytics Public contributions on GitHub, Kaggle, StackOverflow, technical blogs, or publications Familiarity with speaker embedding techniques, speech enhancement, and noise-robust modeling Whats In It For You Our Purpose: Progress is not a self-starter. It requires a catalyst to be set in motion. Information, imagination, people, technologythe right combination can unlock possibility and change the world.Our world is in transition and getting more complex by the day. 
We push past expected observations and seek out new levels of understanding so that we can help companies, governments and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress. Our People: Our Values: Integrity, Discovery, Partnership At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals. Benefits: We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global. Health & Wellness: Health care coverage designed for the mind and body. Continuous Learning: Access a wealth of resources to grow your career and learn valuable new skills. Invest in Your Future: Secure your financial future through competitive pay, retirement planning, a continuing education program with a company-matched student loan contribution, and financial wellness programs. Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families. Beyond the Basics: From retail discounts to referral incentive awards, small perks can make a big difference. For more information on benefits by country visit https://spgbenefits.com/benefit-summaries Global Hiring and Opportunity at S&P Global: At S&P Global, we are committed to fostering a connected and engaged workplace where all individuals have access to opportunities based on their skills, experience, and contributions. 
Our hiring practices emphasize fairness, transparency, and merit, ensuring that we attract and retain top talent. By valuing different perspectives and promoting a culture of respect and collaboration, we drive innovation and power global markets. Recruitment Fraud Alert If you receive an email from a spglobalind.com domain or any other regionally based domains, it is a scam and should be reported to reportfraud@spglobal.com. S&P Global never requires any candidate to pay money for job applications, interviews, offer letters, pre-employment training or for equipment/delivery of equipment. Stay informed and protect yourself from recruitment fraud by reviewing our guidelines, fraudulent domains, and how to report suspicious activity here. ----------------------------------------------------------- Equal Opportunity Employer S&P Global is an equal opportunity employer and all qualified candidates will receive consideration for employment without regard to race/ethnicity, color, religion, sex, sexual orientation, gender identity, national origin, age, disability, marital status, military veteran status, unemployment status, or any other status protected by law. Only electronic job submissions will be considered for employment. If you need an accommodation during the application process due to a disability, please send an email to EEO.Compliance@spglobal.com and your request will be forwarded to the appropriate person. US Candidates Only The EEO is the Law Poster http://www.dol.gov/ofccp/regs/compliance/posters/pdf/eeopost.pdf describes discrimination protections under federal law. Pay Transparency Nondiscrimination Provision - https://www.dol.gov/sites/dolgov/files/ofccp/pdf/pay-transp_%20English_formattedESQA508c.pdf ----------------------------------------------------------- IFTECH103.2 - Middle Management Tier II (EEO Job Group)
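The ASR responsibilities above include designing VAD pipelines for speech segmentation. As a rough, dependency-free illustration of the underlying idea only (production systems use trained models such as WebRTC VAD or neural VADs; the frame length and energy threshold below are arbitrary assumptions):

```python
# Minimal energy-based voice activity detection (VAD) sketch.
# Illustrative only: real pipelines use trained VAD models; the
# frame length and threshold here are arbitrary assumptions.

def frame_energies(samples, frame_len=160):
    """Split a 1-D sample sequence into frames and compute mean energy per frame."""
    return [
        sum(s * s for s in samples[i:i + frame_len]) / frame_len
        for i in range(0, len(samples) - frame_len + 1, frame_len)
    ]

def detect_speech(samples, frame_len=160, threshold=0.01):
    """Return one boolean per frame: True where energy exceeds the threshold."""
    return [e > threshold for e in frame_energies(samples, frame_len)]

# 160 silent samples followed by 160 loud ones → [False, True]
flags = detect_speech([0.0] * 160 + [0.5] * 160)
```

A neural VAD replaces the per-frame energy rule with a learned classifier, but the frame-then-decide structure is the same.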
Posted 1 month ago
2.0 - 7.0 years
4 - 9 Lacs
Noida
Work from Office
Backend Developer. Database: PostgreSQL with PostGIS. PostgreSQL adapters: psycopg2 and asyncpg. Python geospatial: GDAL/OGR via the osgeo module. Tabular data manipulation: Pandas, Polars, NumPy. AWS: S3/RDS/EC2. Web: Django and Django REST Framework. Geospatial formats: GeoJSON, Shapefile, GeoTIFF.
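The stack above pairs PostGIS with the psycopg2/asyncpg adapters. A hedged sketch of composing a parameterized proximity query in that style (the `assets` table and `geom` column are hypothetical names, and the connect/execute step is omitted):

```python
# Sketch: building a parameterized PostGIS proximity query for psycopg2.
# Table ("assets") and column ("geom") names are hypothetical; executing
# against a live database is omitted.

def nearby_assets_query(lon: float, lat: float, radius_m: float):
    """Return (sql, params) selecting rows within radius_m metres of a point."""
    sql = (
        "SELECT id, name FROM assets "
        "WHERE ST_DWithin(geom::geography, "
        "ST_SetSRID(ST_MakePoint(%s, %s), 4326)::geography, %s)"
    )
    # psycopg2-style placeholders: values travel separately and are never
    # interpolated into the SQL string, avoiding injection.
    return sql, (lon, lat, radius_m)

sql, params = nearby_assets_query(72.83, 21.17, 500.0)
```

With psycopg2 this would be passed as `cursor.execute(sql, params)`; asyncpg uses `$1`-style placeholders instead of `%s`.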
Posted 1 month ago
1.0 - 3.0 years
2 - 5 Lacs
Gurugram
Work from Office
Role Objective We are currently seeking an Associate - BI Analytics to contribute to the Analytics & BI Development process. This is a hands-on technical and individual contributor role. This position requires that the candidate have knowledge of Advanced Excel, SQL, and Python, and support Reporting and Analytics applications. The candidate needs to have strong technical acumen and be very structured and analytical in his/her approach. Essential Duties and Responsibilities Primary responsibility involves advanced SQL and advanced Excel report design and development. Publishing and scheduling SSIS/SSRS reports as per the business requirements. Will be responsible for end-to-end BI & visualization solutions development and project delivery across multiple clients. Drive the development and analysis of data, reporting automation, dashboarding, and business intelligence programs. Would be supporting consulting engagements and should be able to articulate and architect the solution effectively to bring in the value which data analytics & visualization solutions can deliver. Good understanding of database management systems and ETL (Extract, Transform, Load) frameworks. Connecting to data sources, importing data and transforming data for Business Intelligence. Experience in using advanced-level calculations on the data set. Responsible for design methodology and project documentation. Able to properly understand the business requirements and develop data models accordingly by taking care of the resources. Should have knowledge and experience in prototyping, designing, and requirement analysis. Qualifications Graduate (BE/BTech Computer Science/BCA/MCA/MSc Computers) or an equivalent academic qualification Good communication skills (both written & verbal) Skill Set: Good to have: Academic exposure to designing and developing relational database models/schemas, query performance tuning, and writing ad-hoc SQL queries. 
Academic exposure to Snowflake would be an added advantage Academic exposure to advanced query design, stored procedures, views, and functions. Ability to communicate with technical and business resources at many levels in a manner that supports progress and success. Understanding of Python and different libraries - Pandas, NumPy, etc. Exposure to cloud computing such as Microsoft Azure Good knowledge/expertise of different versions of Microsoft SQL Server (2008, 2012, 2016, 2017, 2019)
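The ad-hoc SQL and view-design skills asked for above can be sketched end to end using Python's stdlib sqlite3 as a stand-in engine (T-SQL syntax on SQL Server differs in places, so treat this as the pattern, not the product):

```python
import sqlite3

# Ad-hoc SQL sketch: create a table, load rows, define a reusable view,
# then query it. sqlite3 stands in for SQL Server here for runnability.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO sales VALUES (?, ?)",
    [("North", 120.0), ("North", 80.0), ("South", 50.0)],
)
conn.execute(
    """CREATE VIEW region_totals AS
       SELECT region, SUM(amount) AS total
       FROM sales GROUP BY region"""
)
rows = conn.execute(
    "SELECT region, total FROM region_totals ORDER BY region"
).fetchall()
# rows → [('North', 200.0), ('South', 50.0)]
```

The view encapsulates the aggregation so downstream reports query `region_totals` instead of repeating the GROUP BY.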
Posted 1 month ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Title Technical Specialist Department Enterprise Engineering - Data Management Team Location Bangalore Level Grade 4 Introduction We're proud to have been helping our clients build better financial futures for over 50 years. How have we achieved this? By working together - and supporting each other - all over the world. So, join our data management platform team in Enterprise Engineering and feel like you're part of something bigger. About your team The Enterprise Data Management Team has been formed to execute FIL's data strategy and make it a data-driven organization. The team is responsible for providing standards and policies and managing central data projects, working with the data programmes of various business functions across the organization in a hub-and-spoke model. The capabilities of this team include data cataloguing and data quality tooling. The team also ensures the adoption of tooling, enforces the standards and delivers on foundational capabilities. About your role The successful candidate is expected to be a part of the Enterprise Engineering team and work on the data management platform. We are looking for a skilled Technical Specialist to join our dynamic team to build and deliver capabilities for the data management platform to realise the organisation's data strategy. About you Key Responsibilities Create scalable solutions for data management, ensuring seamless integration with data sources, consistent metadata management, and reusable data quality rules and frameworks. Develop robust APIs to facilitate the efficient retrieval and manipulation of data from a range of internal and external data sources. Integrate with diverse systems and platforms, ensuring data flows smoothly and securely between sources and our data management ecosystem. Design and implement self-service workflows to empower data role holders, enhancing accessibility and usability of the data management platform. 
Collaborate with the product owner to understand requirements and translate them into technical solutions that promote data management and operational excellence. Work with data engineers within the team, guiding them with technical direction and establishing coding best practices. Mentor junior team members, fostering a culture of continuous improvement and technical excellence. Work to implement DevOps pipelines and ensure smooth, automated deployment of data management solutions Monitor performance and reliability, proactively addressing issues and optimizing system performance. Stay up-to-date with emerging technologies, especially in GenAI, and incorporate advanced technologies to enhance existing frameworks and workflows Experience and Qualifications Required B.E./B.Tech. or M.C.A. in Computer Science from a reputed University 7+ years of relevant industry experience Experience of the complete SDLC Experience of working with multi-cultural and geographically disparate teams Essential Skills (Technical) Strong proficiency in Python, with a good understanding of its ecosystems. Experience with Python libraries and frameworks such as Pandas, Requests, Flask, FastAPI, and web development concepts. Experience with RESTful APIs and microservices architecture. Deep understanding of AWS cloud services such as EC2, S3, Lambda, RDS, and experience in deploying and managing applications on AWS. Understanding of software development principles and design patterns. Candidates should have experience with Jenkins pipelines and hands-on experience in writing testable code and unit testing Stay up to date with the latest releases and features to optimize system performance. 
Desirable Skills & Experience Experience with database systems like Oracle, AWS RDS, DynamoDB Ability to implement test-driven development Understanding of Data Management concepts and their implementation using Python Good knowledge of Unix scripting and the Windows platform Optimize data workflows for performance and efficiency. Ability to analyse complex problems in a structured manner and demonstrate multitasking capabilities. Personal Characteristics Excellent interpersonal and communication skills Self-starter with the ability to handle multiple tasks and priorities Maintain a positive attitude that promotes teamwork within the company and a favourable image of the team Must have an eye for detail and analyse/relate to the business problem in hand Ability to develop and maintain good relationships with stakeholders Flexible and positive attitude, openness to change Self-motivation is essential; should demonstrate commitment to high-quality solutions Ability to discuss both business and related technology/systems at various levels
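The "reusable data quality rules and framework" capability this role calls for could take many shapes; one minimal, hypothetical sketch in Python (the rule names and structure below are illustrative assumptions, not a specific product's API):

```python
# Hypothetical reusable data-quality rule framework sketch: each rule is
# a (name, predicate) pair applied per record; run_rules reports which
# row indices fail each rule.

def not_null(field):
    return (f"{field}_not_null", lambda row: row.get(field) is not None)

def in_range(field, lo, hi):
    return (
        f"{field}_in_range",
        lambda row: row.get(field) is not None and lo <= row[field] <= hi,
    )

def run_rules(rows, rules):
    """Return {rule_name: [indices of failing rows]}."""
    failures = {name: [] for name, _ in rules}
    for i, row in enumerate(rows):
        for name, check in rules:
            if not check(row):
                failures[name].append(i)
    return failures

report = run_rules(
    [{"age": 30}, {"age": None}, {"age": 200}],
    [not_null("age"), in_range("age", 0, 120)],
)
```

Because rules are plain values, they can be composed, catalogued, and reused across datasets, which is the point of a hub-and-spoke quality framework.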
Posted 1 month ago
8.0 - 12.0 years
27 - 42 Lacs
Pune
Work from Office
Role – Gen AI Engineer - Python Location - Hyderabad Mode of Interview - Virtual Interview Date - 5th June 2025 (Saturday) Job Description: Collect and prepare data for training and evaluating multimodal foundation models. This may involve cleaning and processing text data or creating synthetic data. Develop and optimize large-scale generative models such as GANs (Generative Adversarial Networks) and VAEs (Variational Autoencoders) Work on tasks involving language modeling, text generation, understanding, and contextual comprehension. Regularly review and fine-tune large language models to ensure maximum accuracy and relevance for custom datasets. Build and deploy AI applications on cloud platforms - any hyperscaler: Azure, GCP or AWS. Integrate AI models with our company's data to enhance and augment existing applications. Role & Responsibility Handle data preprocessing, augmentation, and generation of synthetic data. Design and develop backend services using Python or .NET to support OpenAI-powered solutions (or any other LLM solution) Develop and maintain AI pipelines Work with custom datasets, utilizing techniques like chunking and embeddings, to train and fine-tune models. Integrate Azure Cognitive Services (or equivalent platform services) to extend functionality and improve AI solutions Collaborate with cross-functional teams to ensure smooth deployment and integration of AI solutions. Ensure the robustness, efficiency, and scalability of AI systems. Stay updated with the latest advancements in AI and machine learning technologies. Skills & Experience Strong foundation in machine learning, deep learning, and computer science. Expertise in generative AI models and techniques (e.g., GANs, VAEs, Transformers). Experience with natural language processing (NLP) and computer vision is a plus. Ability to work independently and as part of a team. Knowledge of advanced programming in Python, and especially AI-centric libraries like TensorFlow, PyTorch, and Keras. 
This includes the ability to implement and manipulate complex algorithms fundamental to developing generative AI models. Knowledge of natural language processing (NLP) for text generation projects like text parsing, sentiment analysis, and the use of transformers like GPT (generative pre-trained transformer) models. Experience in data management, including data pre-processing, augmentation, and generation of synthetic data. This involves cleaning, labeling, and augmenting data to train and improve AI models. Experience in developing and deploying AI models in production environments. Knowledge of cloud services (AWS, Azure, GCP) and understanding of containerization technologies like Docker and orchestration tools like Kubernetes for deploying, managing and scaling AI solutions Should be able to bring new ideas and innovative solutions to our clients
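The "chunking and embeddings" technique mentioned above can be illustrated with a minimal fixed-size chunker. This sketch splits on whitespace words for simplicity; real LLM pipelines usually count tokens from the model's tokenizer instead:

```python
# Fixed-size text chunking with overlap, a common preparation step
# before computing embeddings. Word-based splitting is a simplifying
# assumption; production code would use the model tokenizer.

def chunk_words(text, size=200, overlap=50):
    """Split text into word chunks of `size`, repeating `overlap` words
    between consecutive chunks (assumes size > overlap)."""
    words = text.split()
    step = size - overlap
    return [
        " ".join(words[i:i + size])
        for i in range(0, max(len(words) - overlap, 1), step)
    ]

chunks = chunk_words(" ".join(str(i) for i in range(10)), size=4, overlap=2)
# → ['0 1 2 3', '2 3 4 5', '4 5 6 7', '6 7 8 9']
```

The overlap keeps sentences that straddle a chunk boundary retrievable from either neighbouring chunk.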
Posted 1 month ago
4.0 - 5.0 years
3 - 5 Lacs
Vadodara
Work from Office
PRIMARY RESPONSIBILITIES: Understanding business objectives and developing models that help to achieve them, along with metrics to track their progress Managing available resources such as hardware, data, and personnel so that deadlines are met Analysing the ML algorithms that could be used to solve a given problem and ranking them by their success probability Exploring and visualizing data to gain an understanding of it, then identifying differences in data distribution that could affect performance when deploying the model in the real world Verifying data quality, and/or ensuring it via data cleaning Supervising the data acquisition process if more data is needed Finding available datasets online that could be used for training Defining validation strategies Defining the pre-processing or feature engineering to be done on a given dataset Defining data augmentation pipelines Training models and tuning their hyperparameters Analysing the errors of the model and designing strategies to overcome them Deploying models to production Key SKILLS: Proficiency with a deep learning framework such as TensorFlow Proficiency with Python and basic libraries for machine learning such as scikit-learn, pandas and PyTorch Expertise in visualizing and manipulating big datasets Deep knowledge of Math (Linear Algebra), CNNs and algorithms Running models in cloud environments such as Google Cloud or Azure EXPERIENCE: 4-5 years of experience in machine learning related projects. EDUCATION AND EXPERIENCE REQUIREMENTS: Bachelor's or Master's degree in a computer-related field LOCATION: Vadodara JOB TIMING: UK timings, 2:30pm to 11:30pm (March-October), 3:30pm to 00:30am (October-March)
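"Defining validation strategies" above typically starts with k-fold splits. A stdlib-only sketch of generating fold indices (scikit-learn's `KFold` is the usual practical choice; this just shows the mechanics):

```python
# Plain k-fold index generator: each of the k folds serves once as the
# validation set while the remaining samples form the training set.

def k_fold_indices(n, k):
    """Yield (train_idx, val_idx) pairs for k folds over n samples."""
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        val = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, val
        start += size

folds = list(k_fold_indices(5, 2))
# → [([3, 4], [0, 1, 2]), ([0, 1, 2], [3, 4])]
```

For time-series data a chronological split would replace this shuffle-free round-robin, since future samples must not leak into training.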
Posted 1 month ago
5.0 - 9.0 years
0 - 1 Lacs
Pune, Chennai
Hybrid
Python developer 5 to 8 yrs Loc: Pune & Chennai Notice period: immediate to 30 days
Posted 1 month ago
8.0 - 13.0 years
25 - 40 Lacs
Pune
Hybrid
Role & responsibilities Lead the development of machine learning PoCs and demos using structured/tabular data for use cases such as forecasting, risk scoring, churn prediction, and optimization. Collaborate with sales engineering teams to understand client needs and present ML solutions during pre-sales calls and technical workshops. Build ML workflows using tools such as SageMaker, Azure ML, or Databricks ML and manage training, tuning, evaluation, and model packaging. Apply supervised, unsupervised, and semi-supervised techniques such as XGBoost, CatBoost, k-Means, PCA, time-series models, and more. Work with data engineering teams to define data ingestion, preprocessing, and feature engineering pipelines using Python, Spark, and cloud-native tools. Package and document ML assets so they can be scaled or transitioned into delivery teams post-demo. Stay current with best practices in ML explainability, model performance monitoring, and MLOps. Participate in internal knowledge sharing, tooling evaluation, and continuous improvement of lab processes. Preferred candidate profile 8+ years of experience developing and deploying classical machine learning models in production or PoC environments. Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries such as XGBoost, CatBoost, LightGBM, etc. Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks. Solid understanding of feature engineering, model tuning, cross-validation, and error analysis. Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques. Comfortable presenting models and insights to technical and non-technical stakeholders during pre-sales engagements. Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection.
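Among the techniques listed above, k-Means is simple enough to sketch from first principles. A deliberately reduced one-dimensional version for illustration (in practice one would use scikit-learn's `KMeans`; the starting centers here are arbitrary assumptions):

```python
# Toy 1-D k-means: alternate between assigning points to their nearest
# center and recomputing each center as its cluster mean.

def kmeans_1d(xs, centers, iters=20):
    """Return the final center positions after `iters` Lloyd iterations."""
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for x in xs:
            nearest = min(range(len(centers)), key=lambda j: abs(x - centers[j]))
            clusters[nearest].append(x)
        # Empty clusters keep their previous center.
        centers = [
            sum(c) / len(c) if c else centers[j]
            for j, c in enumerate(clusters)
        ]
    return centers

final = kmeans_1d([1, 2, 3, 10, 11, 12], [0.0, 9.0])
# → [2.0, 11.0]
```

The same assign-then-average loop generalizes to higher dimensions by swapping `abs(x - c)` for Euclidean distance.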
Posted 1 month ago
7.0 - 12.0 years
20 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Analyst Location: Bangalore Experience: 6+ years Role Overview We are seeking a skilled Data Analyst to support our platform powering operational intelligence across airports and similar sectors. The ideal candidate will have experience working with time-series datasets and operational information to uncover trends, anomalies, and actionable insights. This role will work closely with data engineers, ML teams, and domain experts to turn raw data into meaningful intelligence for business and operations stakeholders. Key Responsibilities Analyze time-series and sensor data from various sources Develop and maintain dashboards, reports, and visualizations to communicate key metrics and trends. Correlate data from multiple systems (vision, weather, flight schedules, etc.) to provide holistic insights. Collaborate with AI/ML teams to support model validation and interpret AI-driven alerts (e.g., anomalies, intrusion detection). Prepare and clean datasets for analysis and modeling; ensure data quality and consistency. Work with stakeholders to understand reporting needs and deliver business-oriented outputs. Qualifications & Required Skills Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Engineering, or a related field. 5+ years of experience in a data analyst role, ideally in a technical/industrial domain. Strong SQL skills and proficiency with BI/reporting tools (e.g., Power BI, Tableau, Grafana). Hands-on experience analyzing structured and semi-structured data (JSON, CSV, time-series). Proficiency in Python or R for data manipulation and exploratory analysis. Understanding of time-series databases or streaming data (e.g., InfluxDB, Kafka, Kinesis). Solid grasp of statistical analysis and anomaly detection methods. Experience working with data from industrial systems or large-scale physical infrastructure. Good-to-Have Skills Domain experience in airports, smart infrastructure, transportation, or logistics. 
Familiarity with data platforms (Snowflake, BigQuery, Custom-built using open-source). Exposure to tools like Airflow, Jupyter Notebooks and data quality frameworks. Basic understanding of AI/ML workflows and data preparation requirements.
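The anomaly detection methods mentioned for this role include simple rolling-statistics baselines. A stdlib-only rolling z-score sketch over a time series (the window size and threshold are arbitrary assumptions, and this is a baseline rather than a complete detector):

```python
import statistics

# Rolling z-score anomaly detection: flag a point when it deviates more
# than `threshold` standard deviations from the preceding window's mean.

def rolling_zscore_anomalies(series, window=5, threshold=3.0):
    """Return indices of points that break from their trailing window."""
    anomalies = []
    for i in range(window, len(series)):
        prev = series[i - window:i]
        mean = statistics.fmean(prev)
        stdev = statistics.pstdev(prev)
        if stdev > 0 and abs(series[i] - mean) / stdev > threshold:
            anomalies.append(i)
    return anomalies

spikes = rolling_zscore_anomalies([10, 10, 11, 10, 11, 10, 50, 10])
# → [6]  (the 50 is far outside the trailing window's distribution)
```

Seasonal sensor data usually needs a seasonality-aware method on top of this, since a z-score against a short trailing window will flag ordinary daily peaks.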
Posted 1 month ago
6.0 - 8.0 years
15 - 30 Lacs
Pune
Hybrid
6+ years of experience developing and deploying classical machine learning models in production or PoC environments. Strong hands-on experience with Python, pandas, scikit-learn, and ML libraries such as XGBoost, CatBoost, LightGBM, etc. Familiarity with cloud-based ML environments such as AWS SageMaker, Azure ML, or Databricks. Solid understanding of feature engineering, model tuning, cross-validation, and error analysis. Experience with unsupervised learning, clustering, anomaly detection, and dimensionality reduction techniques. Comfortable presenting models and insights to technical and non-technical stakeholders during pre-sales engagements. Working knowledge of MLOps concepts, including model versioning, deployment automation, and drift detection.
Posted 1 month ago
6.0 - 8.0 years
8 - 10 Lacs
Noida
Work from Office
Should have 6 to 8 years of industry experience. Strong with libraries like Pandas, NumPy, SciPy, etc., or should have good exposure to file handling in Python Strong experience querying data from databases (e.g., Oracle) using database query languages like SQL and MySQL Strong basic Python concepts (lists, tuples, dictionaries, etc.) and object-oriented programming in Python (classes, etc.) Should have knowledge of at least one Python framework (e.g., Flask) Experience with a Linux/Unix environment along with knowledge of basic Linux commands (e.g., cut, sort, uniq) Mandatory Competencies Python - NumPy Python - Pandas Python - Python UI - Angular 2+ Others - Microservices Java - Core Java Database - SQL Database - MySQL Java - Unix Java - Linux Beh - Communication and collaboration
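This posting pairs Python file handling with Linux commands like cut, sort, and uniq. A stdlib sketch of the shell pipeline `cut -d, -f2 | sort | uniq -c` applied to in-memory CSV-like lines:

```python
from collections import Counter

# Stdlib equivalent of `cut -d, -f2 | sort | uniq -c`: extract one
# delimited field per line, then count occurrences of each value.

def field_counts(lines, field=1, delim=","):
    """Count values of the given (0-based) field, sorted by value."""
    values = [line.split(delim)[field] for line in lines if line.strip()]
    return sorted(Counter(values).items())

counts = field_counts(["a,x", "b,y", "c,x"])
# → [('x', 2), ('y', 1)]
```

For a real file, the same function works on `open(path)` directly, since a file object iterates line by line.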
Posted 1 month ago
4.0 - 9.0 years
15 - 22 Lacs
Pune, Mumbai (All Areas)
Hybrid
Job Description: 4+ years of total experience Strong technical, analytical, troubleshooting, and communication skills Extensive experience with Python, Pandas, NumPy and OOP Proficiency with object-oriented design principles Working experience with APIs is a must Experience with Protobuf implementation in APIs is a plus
Posted 1 month ago
6.0 - 11.0 years
15 - 30 Lacs
Gurugram
Work from Office
Summary: Translate business requirements into specifications to implement workflow testing from potentially multiple data sources using Python solutions. Develop understanding of data structures including large volume & complex databases and implement automation solutions leveraging Python for quick data comparisons and exception analysis. Work in finance processes and systems. Skills Required: Degree qualification in information technology Minimum 4-5 years of post-qualification experience Strong knowledge of Python and other testing tools Comprehensive knowledge of automation testing Knowledge of Git Proficiency with various frameworks and understanding of API features Knowledge of various databases Familiarity with the architecture and features of different types of applications Understanding of CI/CD Intake call notes: The position involves contributing to the development of a fully automated product, with responsibilities centered around coding, testing automation, and architecture rather than manual testing. The ideal candidate will have strong expertise in Python, with hands-on experience in both backend development and testing frameworks. Proficiency with Django and API development is required, along with a solid understanding of building data-heavy applications and UI integration. This is a senior-level role suited for technically strong individuals with at least 8 years of experience (though we are open to less tenured but highly skilled candidates). We're seeking someone with a strong engineering background who can take ownership, contribute to architectural decisions, and collaborate closely with the development team. Candidates with prior managerial experience are welcome, provided they remain technically hands-on. Domain background is flexible, as long as the candidate brings a solid foundation in software engineering and a product-building mindset. If interested, please share your resume to sunidhi.manhas@portraypeople.com
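The automated data-comparison and exception-analysis work described above can be sketched as a keyed record diff (a minimal illustration; real pipelines would add numeric tolerances, column mapping, and formatted exception reports):

```python
# Keyed comparison of two record sets, the core of automated data
# comparison for workflow testing: report keys missing from the target,
# extra in the target, and present in both but with differing values.

def compare_records(source, target, key):
    """Compare two lists of dicts on `key`; return an exception summary."""
    src = {row[key]: row for row in source}
    tgt = {row[key]: row for row in target}
    return {
        "missing_in_target": sorted(src.keys() - tgt.keys()),
        "extra_in_target": sorted(tgt.keys() - src.keys()),
        "mismatched": sorted(k for k in src.keys() & tgt.keys()
                             if src[k] != tgt[k]),
    }

diff = compare_records(
    [{"id": 1, "amt": 10}, {"id": 2, "amt": 20}],
    [{"id": 2, "amt": 25}, {"id": 3, "amt": 5}],
    key="id",
)
```

In a test-automation context each non-empty bucket would become an exception record, and an empty summary asserts that the two sources agree.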
Posted 1 month ago
3.0 - 7.0 years
9 - 13 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. At Optum AI, we leverage data and resources to make a significant impact on the healthcare system. Our solutions have the potential to improve healthcare for everyone. We work on cutting-edge projects involving ML, NLP, and LLM techniques, continuously developing and improving generative AI methods for structured and unstructured healthcare data. Our team collaborates with world-class experts and top universities to develop innovative AI/ML solutions, often leading to patents and published papers. 
Primary Responsibilities: Assist in developing and implementing machine learning models for healthcare applications Work with structured and unstructured data to build predictive and analytical models Collaborate with cross-functional teams including data engineers, product managers, and domain experts Support the development and maintenance of data pipelines for model training and inference Contribute to the deployment and monitoring of ML models in production environments Apply standard ML and NLP techniques such as classification, entity recognition, and text summarization Use tools and frameworks like TensorFlow, PyTorch, Scikit-learn, and Hugging Face for model development Write clean, efficient code in Python and use version control tools like Git Document model development processes and results clearly for both technical and non-technical stakeholders Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's degree in Computer Science, Engineering, Data Science, or a related field 2+ years of experience in data science, machine learning, or analytics Hands-on experience with ML and NLP techniques such as supervised learning, deep learning, and text classification Proficiency in Python and familiarity with libraries like Pandas, NumPy, Scikit-learn, and TensorFlow or PyTorch Familiarity with data processing tools such as PySpark or SQL Proven problem-solving skills and ability to work in a collaborative environment Preferred Qualifications: Experience working with cloud platforms (e.g., AWS, Azure, or GCP) Understanding of APIs and basic web frameworks like Flask or FastAPI At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission. #njp #SStech #SSF&A
Posted 1 month ago
5.0 - 8.0 years
8 - 12 Lacs
Chennai
Work from Office
Position: Senior Python Developer Experience: 5-8 Years Location: Chennai About the Role: We are looking for someone who is passionate about writing clean, high-performance code and is excited by data-driven applications. The ideal candidate will bring strong expertise in Python and modern data processing libraries, with a proven track record of building and scaling backend systems. This role is perfect for someone who is proactive, quick to learn, and thrives in a fast-paced environment. Key Responsibilities: Design, develop, and maintain backend systems and APIs using Python and FastAPI. Work with data-centric libraries like pandas, polars, and numpy to build scalable data pipelines and services. Implement and manage asynchronous task scheduling using FastAPI schedulers or similar tools. Contribute to architectural decisions and mentor junior developers. Participate in code reviews, testing, and troubleshooting. Work closely with data teams and business stakeholders to understand requirements and deliver effective solutions. Continuously learn and adapt to new technologies and best practices. Must-Have Qualifications: 5-8 years of professional experience with Python. Proficiency in pandas, polars, and numpy. Strong experience building APIs with FastAPI or similar frameworks. Hands-on experience with task scheduling and background job management. Excellent problem-solving and communication skills. A passion for learning and the ability to pick up new technologies quickly. Good to Have: Experience with C# in enterprise environments. Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, SQL Server). Familiarity with Docker, data versioning, job orchestration tools, or cloud platforms.
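The task-scheduling responsibility above can be illustrated framework-neutrally with the stdlib `sched` module (in an actual FastAPI service one would more likely use a startup background task or a dedicated scheduler library; the job names and delays here are illustrative):

```python
import sched
import time

# Framework-neutral background scheduling sketch using stdlib `sched`:
# queue two delayed jobs and run them in order.

results = []

def record(value):
    results.append(value)

scheduler = sched.scheduler(time.monotonic, time.sleep)
scheduler.enter(0.01, 1, record, argument=("first",))
scheduler.enter(0.02, 1, record, argument=("second",))
scheduler.run()  # blocks until both queued jobs have fired
```

`sched` runs jobs on the calling thread; a long-lived service would instead run the scheduler on a worker thread or use an async scheduler so request handling is not blocked.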
Posted 1 month ago