1.0 - 3.0 years
4 - 5 Lacs
Ahmedabad
Work from Office
About Us: Founded in 2008, Red & White is Gujarat's leading NSDC- and ISO-certified institute, focused on industry-relevant education and global employability.
Role Overview: We're hiring a faculty member to teach AI, Machine Learning, and Data Science. The role includes delivering lectures, guiding projects, mentoring students, and staying updated with tech trends.
Key Responsibilities:
- Deliver high-quality lectures on AI, Machine Learning, and Data Science.
- Design and update course materials, assignments, and projects.
- Guide students on hands-on projects, real-world applications, and research work.
- Provide mentorship and support for student learning and career development.
- Stay updated with the latest trends and advancements in AI/ML and Data Science.
- Conduct assessments, evaluate student progress, and provide feedback.
- Participate in curriculum development and improvements.
Skills & Tools:
- Core Skills: ML, Deep Learning, NLP, Computer Vision, Business Intelligence, AI Model Development, Business Analysis.
- Programming: Python, SQL (Must), Pandas, NumPy, Excel.
- ML & AI Tools: scikit-learn (Must), XGBoost, LightGBM, TensorFlow, PyTorch (Must), Keras, Hugging Face.
- Data Visualization: Tableau, Power BI (Must), Matplotlib, Seaborn, Plotly.
- NLP & CV: Transformers, BERT, GPT, OpenCV, YOLO, Detectron2.
- Advanced AI: Transfer Learning, Generative AI, Business Case Studies.
Education & Experience Requirements:
- Bachelor's/Master's/Ph.D. in Computer Science, AI, Data Science, or a related field.
- Minimum of 1 year of teaching or industry experience in AI/ML and Data Science.
- Hands-on experience with Python, SQL, TensorFlow, PyTorch, and other AI/ML tools.
- Practical exposure to real-world AI applications, model deployment, and business analytics.
For further information, please contact us at 7862813693 or via email at career@rnwmultimedia.edu.in.
Posted 1 month ago
2.0 - 4.0 years
5 - 12 Lacs
Bengaluru
Hybrid
Primary Function:
- Hands-on experience with the Python scripting language; a deep understanding of data structures is a must.
- Basic Java knowledge is required.
- Strong analytical and troubleshooting skills.
- Good communication skills and team-player qualities.
- Passionate, self-motivated, and willing to learn and improve rapidly.
Good to have:
- Familiarity with server-side templating languages, including Jinja2 and Mako.
- Good with testing tools and familiarity with TDD (Test-Driven Development).
- Knowledge of a framework such as Selenium, Requestium, Django, or Flask is a plus.
- Knowledge of AWS, Docker, GitLab, NumPy, Pandas, MySQL, MongoDB, or Elasticsearch is a plus.
- Domain experience in fintech is preferred, but not mandatory.
- Familiarity with Go and MongoDB.
Qualifications & Competency:
- Bachelor's degree in computer science, computer engineering, or a related field.
- 2.5 to 4 years of experience as a Python developer.
Reporting to: Technical Lead
Posted 1 month ago
5.0 - 10.0 years
30 - 40 Lacs
Gurugram
Work from Office
Python Developer
Location: Gurgaon
Work Mode: Work from Office
Duration: Full-time
Requirements:
- 5-7 years of backend development experience.
- Refactoring legacy applications.
- Strong ANSI SQL and DB interaction.
- Experience with Git, CI/CD, and Agile methodologies.
- Python 3.x, Pandas, SQLAlchemy, PyODBC.
- Data validation, integration, and test automation scripting.
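As a rough illustration of the Pandas-plus-SQL validation work this role describes, here is a minimal sketch. It uses the stdlib sqlite3 driver and an invented `trades` table so it runs anywhere; a production setup would connect to the real database via SQLAlchemy or PyODBC instead.

```python
import sqlite3

import pandas as pd

# Throwaway in-memory table standing in for a real source database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE trades (id INTEGER, qty INTEGER, price REAL)")
conn.executemany("INSERT INTO trades VALUES (?, ?, ?)",
                 [(1, 100, 9.5), (2, -5, 10.0), (3, 40, None)])
conn.commit()

df = pd.read_sql("SELECT * FROM trades", conn)

# Hypothetical validation rules: quantities must be positive, prices non-null.
bad_qty = df[df["qty"] <= 0]
bad_price = df[df["price"].isna()]
print(len(bad_qty), len(bad_price))  # one bad quantity, one missing price
```

The same pattern (pull a frame with `read_sql`, flag rows violating business rules) scales to scheduled integrity checks in a test-automation script.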
Posted 1 month ago
2.0 - 3.0 years
10 - 18 Lacs
Chennai
Work from Office
Roles and Responsibilities:
- Collect and curate data based on specific project requirements.
- Perform data cleaning, preprocessing, and transformation for model readiness.
- Select and implement appropriate data models for various applications.
- Continuously improve model accuracy through iterative learning and feedback loops.
- Fine-tune large language models (LLMs) for applications such as code generation and data handling.
- Apply geometric deep learning techniques using PyTorch or TensorFlow.
Essential Requirements:
- Strong proficiency in Python, with experience in writing efficient and clean code.
- Ability to process and transform natural language data for NLP applications.
- Solid understanding of modern NLP techniques such as Transformers, Word2Vec, BERT, etc.
- Strong foundation in mathematics and statistics relevant to machine learning and deep learning.
- Hands-on experience with Python libraries including NumPy, Pandas, SciPy, scikit-learn, NLTK, etc.
- Experience with various data visualization techniques using Python or other tools.
- Working knowledge of DBMS and fundamental data structures.
- Familiarity with a variety of ML and optimization algorithms.
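To give a flavour of the "process and transform natural language data" requirement above, here is a minimal stdlib sketch: tokenization plus bag-of-words cosine similarity. Real work at this level would use NLTK, Word2Vec, or Transformer embeddings; the toy sentences are invented for the example.

```python
import math
import re
from collections import Counter

def tokenize(text: str) -> list[str]:
    # Lowercase and keep alphabetic tokens only -- a minimal preprocessing pass.
    return re.findall(r"[a-z]+", text.lower())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two bag-of-words vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

doc1 = Counter(tokenize("Transformers changed NLP."))
doc2 = Counter(tokenize("NLP was changed by transformers"))
doc3 = Counter(tokenize("Stock prices fell sharply"))
print(round(cosine(doc1, doc2), 2), round(cosine(doc1, doc3), 2))
```

Word-order-insensitive similarity like this is exactly the weakness that Word2Vec and BERT-style contextual embeddings were designed to overcome.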
Posted 1 month ago
4.0 - 8.0 years
12 - 22 Lacs
Jaipur
Remote
Hi folks, I hope you are all doing well! We are hiring for one of the leading IT companies worldwide for a Sr. Data/Python Engineer role, where we are looking for someone with expertise in Python, Pandas/Streamlit, SQL, and PySpark.
Job Description:
Job Title: Sr. Data/Python Engineer
Location: Pan India (Remote)
Job Type: Full Time
Job Summary: We are seeking a skilled and collaborative Sr. Data/Python Engineer with experience developing production Python-based applications (e.g., Pandas, NumPy, Django, Flask, FastAPI on AWS) to support our data platform initiatives and application development. This role will initially focus on building and optimizing Streamlit application development frameworks and CI/CD pipelines, ensuring code reliability through automated testing with Pytest, and enabling team members to deliver updates via CI/CD pipelines. Once the deployment framework is implemented, the Sr. Engineer will own and drive data transformation pipelines in dbt and implement a data quality framework.
Key Responsibilities:
- Lead application testing and productionalization of applications built on top of Snowflake. This includes implementation and execution of unit and integration testing; automated test suites use Pytest and Streamlit App Tests to ensure code quality, data accuracy, and system reliability.
- Develop and integrate CI/CD pipelines (e.g., GitHub Actions, Azure DevOps, or GitLab CI) for consistent deployments across dev, staging, and production environments.
- Develop and test AWS-based pipelines: AWS Glue, Airflow (MWAA), S3.
- Design, develop, and optimize data models and transformation pipelines in Snowflake using SQL and Python.
- Build Streamlit-based applications to enable internal stakeholders to explore and interact with data and models.
- Collaborate with team members and application developers to align requirements and ensure secure, scalable solutions.
- Monitor data pipelines and application performance, optimizing for speed, cost, and user experience.
- Create end-user technical documentation and contribute to knowledge sharing across engineering and analytics teams.
- Work CST hours and collaborate with onshore and offshore teams.
Required Skills and Experience:
- 4+ years of experience in data engineering or Python-based application development on AWS (Pandas, Flask, Django, FastAPI, Streamlit). Experience building data-intensive applications in Python, as well as data pipelines on AWS, is a must.
- Strong in Python and Pandas; proficient in SQL and Python for data manipulation and automation tasks.
- Experience developing and productionalizing applications built on Python frameworks such as FastAPI, Django, Flask, or Streamlit.
- Experience with application frameworks such as Streamlit, Angular, or React for rapid data app deployment.
- Solid understanding of software testing principles and experience using Pytest or similar Python frameworks.
- Experience configuring and maintaining CI/CD pipelines for automated testing and deployment.
- Familiarity with version control systems such as GitLab.
- Knowledge of data governance, security best practices, and role-based access control (RBAC) in Snowflake.
Preferred Qualifications:
- Experience with dbt (data build tool) for transformation modeling.
- Knowledge of Snowflake's advanced features (e.g., masking policies, external functions, Snowpark).
- Exposure to cloud platforms (e.g., AWS, Azure, GCP).
- Strong communication and documentation skills.
Interested candidates can share their resume at sweta@talentvidas.com.
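The Pytest-based testing this posting emphasizes can be sketched in a few lines. The transformation function and its rules are invented for illustration; the point is the shape: a small pure function plus a `test_*` function with plain asserts that Pytest would discover automatically.

```python
# A hypothetical transformation that might live in a data pipeline:
def normalize_amounts(rows):
    """Drop malformed rows and coerce amounts to float."""
    cleaned = []
    for row in rows:
        try:
            cleaned.append({"id": row["id"], "amount": float(row["amount"])})
        except (KeyError, TypeError, ValueError):
            continue  # malformed row: skip rather than crash the pipeline
    return cleaned

# Pytest discovers `test_*` functions automatically; plain asserts double
# as the checks, so the same file runs under `pytest` or bare `python`.
def test_normalize_amounts():
    rows = [{"id": 1, "amount": "10.5"}, {"id": 2, "amount": None}, {"id": 3}]
    assert normalize_amounts(rows) == [{"id": 1, "amount": 10.5}]

test_normalize_amounts()
```

In a CI/CD pipeline of the kind described above, a step like `pytest -q` would run such suites on every push before deployment is allowed to proceed.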
Posted 1 month ago
4.0 - 8.0 years
5 - 11 Lacs
Hyderabad
Hybrid
We are seeking a skilled and detail-oriented Python + SQL Consultant to join our team. The ideal candidate will have strong expertise in data manipulation, analysis, and automation using Python and SQL. You will work closely with data engineers, analysts, and business stakeholders to deliver high-quality data solutions and insights.
Key Responsibilities:
- Design, develop, and maintain data pipelines using Python and SQL.
- Write efficient, optimized SQL queries for data extraction, transformation, and reporting.
- Automate data workflows and integrate APIs or third-party services.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Perform data validation, cleansing, and quality checks.
- Develop dashboards or reports using BI tools (optional, if applicable).
- Document processes, code, and data models for future reference.
Required Skills & Qualifications:
- Strong proficiency in Python (Pandas, NumPy, etc.).
- Advanced knowledge of SQL (joins, subqueries, CTEs, window functions).
- Experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server).
- Familiarity with version control systems like Git.
- Strong problem-solving and analytical skills.
- Excellent communication and collaboration abilities.
Preferred Qualifications:
- Experience with cloud platforms (AWS, Azure, GCP).
- Familiarity with data visualization tools (e.g., Power BI, Tableau).
- Knowledge of ETL tools or frameworks (e.g., Airflow, dbt).
- Background in data warehousing or big data technologies.
Education: Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field.
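The "CTEs, window functions" requirement above can be demonstrated with a tiny self-contained example, run here through Python's stdlib sqlite3 (the `sales` table is invented; SQLite has supported window functions since 3.25):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE sales (region TEXT, amount INTEGER);
INSERT INTO sales VALUES ('N', 10), ('N', 30), ('S', 20), ('S', 5);
""")

# A CTE plus a window function: rank each sale within its region by amount,
# then keep only the top sale per region.
query = """
WITH ranked AS (
    SELECT region, amount,
           RANK() OVER (PARTITION BY region ORDER BY amount DESC) AS rnk
    FROM sales
)
SELECT region, amount FROM ranked WHERE rnk = 1 ORDER BY region;
"""
print(conn.execute(query).fetchall())  # top sale per region
```

The same query shape ports directly to PostgreSQL, MySQL 8+, or SQL Server; only the connection layer changes.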
Posted 1 month ago
4.0 - 6.0 years
3 - 7 Lacs
Kolkata, Pune, Chennai
Work from Office
Company: Kiya.ai
Client: TCS
Work Mode: WFO (5 days)
Work Location: Kolkata, WB || Chennai, TN || Pune, MH
Interested candidates, drop your resume to saarumathi.r@kiya.ai or reach me at (+91) 8946064544.
Role & responsibilities
Job Title: Developer
Skill Required: Digital: Python ~ Digital: Amazon Web Services (AWS) Cloud Computing
Experience Range in Required Skills: 4-6 Years
Job Description / Essential Skills:
- Language: Python
- Cloud Platform: AWS (Lambda, EC2, S3)
- DevOps Tools: GitLab
- Data / ML: NumPy, Pandas
- 3+ years of hands-on experience as a developer
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Hyderabad
Work from Office
Job Title: Data Scientist
Job Type: Full-time, Contractor
Job Summary: We are seeking an innovative and highly skilled Data Scientist to join our customer's team. In this dynamic role, you will design and deploy advanced Generative AI and LLM solutions, driving impactful real-world applications. Your expertise will directly shape our customer's AI products, working in a collaborative, forward-thinking environment.
Key Responsibilities:
- Develop, fine-tune, and deploy Large Language Models (LLMs) and Generative AI applications for production environments.
- Engineer robust RAG (Retrieval-Augmented Generation) pipelines utilizing vector stores such as FAISS, Pinecone, or Weaviate.
- Implement and optimize prompt engineering strategies for enhanced performance of commercial and open-source LLMs (OpenAI, Anthropic, Hugging Face, etc.).
- Integrate NLP techniques and transformer architectures to boost the customer's product capabilities.
- Collaborate cross-functionally to define AI-driven product features and recommend technical enhancements.
- Write clean, efficient Python code while adhering to best practices in code quality and documentation.
- Communicate technical concepts effectively in both written and verbal forms with stakeholders and team members.
Required Skills and Qualifications:
- Strong programming proficiency in Python for AI/ML projects.
- Hands-on experience with LLM frameworks such as LangChain or Haystack.
- Proficiency in designing and deploying RAG pipelines, including leveraging vector database technologies (e.g., Redis, FAISS, Pinecone, Weaviate).
- Deep understanding of NLP fundamentals and transformer models.
- Demonstrated experience with both commercial and open-source LLMs.
- Expertise in prompt engineering and embeddings.
- Exceptional written and verbal communication abilities, with an emphasis on clarity and collaboration.
Preferred Qualifications:
- Experience deploying GenAI applications at scale in production environments.
- Background in designing customer-facing AI solutions.
- Prior experience collaborating in hybrid or distributed teams.
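The retrieval core of the RAG pipelines this posting describes can be sketched in stdlib Python. The three-dimensional "embeddings" and document names below are invented for the example; in production the vectors come from an embedding model and live in a vector store such as FAISS, Pinecone, or Weaviate rather than a dict.

```python
import math

# Toy "embeddings" keyed by document title (all values invented).
docs = {
    "refund policy": [0.9, 0.1, 0.0],
    "shipping times": [0.1, 0.9, 0.1],
    "api rate limits": [0.0, 0.2, 0.9],
}

def cosine(a, b):
    # Cosine similarity between two dense vectors.
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def retrieve(query_vec, k=1):
    # Core of a RAG pipeline: rank documents by similarity to the query,
    # then pass the top hits to the LLM as grounding context.
    scored = sorted(docs, key=lambda d: cosine(query_vec, docs[d]), reverse=True)
    return scored[:k]

print(retrieve([0.85, 0.15, 0.05]))
```

A vector store replaces the `sorted` scan with an approximate-nearest-neighbour index, which is what makes the same idea work at millions of documents.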
Posted 1 month ago
8.0 - 13.0 years
25 - 30 Lacs
Bengaluru
Work from Office
Job Title: Data Scientist – OpenCV
Experience: 2–3 Years
Location: Bangalore
Notice Period: Immediate Joiners Only
Job Overview: We are looking for a passionate and driven Data Scientist with a strong foundation in computer vision, image processing, and OpenCV. This role is ideal for professionals with 2–3 years of experience who are excited about working on real-world visual data problems and eager to contribute to impactful projects in a collaborative environment.
Key Responsibilities:
- Develop and implement computer vision solutions using OpenCV and Python.
- Work on tasks including object detection, recognition, tracking, and image/video enhancement.
- Clean, preprocess, and analyze large image and video datasets to extract actionable insights.
- Collaborate with senior data scientists and engineers to deploy models into production pipelines.
- Contribute to research and proof-of-concept projects in the field of computer vision and machine learning.
- Prepare clear documentation for models, experiments, and technical processes.
Required Skills:
- Proficient in OpenCV and image/video processing techniques.
- Strong coding skills in Python, with familiarity in libraries such as NumPy, Pandas, and Matplotlib.
- Solid understanding of basic machine learning and deep learning concepts.
- Hands-on experience with Jupyter Notebooks; exposure to TensorFlow or PyTorch is a plus.
- Excellent analytical, problem-solving, and debugging skills.
- Effective communication and collaboration abilities.
Preferred Qualifications:
- Bachelor's degree in Computer Science, Data Science, Electrical Engineering, or a related field.
- Practical exposure through internships or academic projects in computer vision or image analysis.
- Familiarity with cloud platforms (AWS, GCP, Azure) is an added advantage.
What We Offer:
- A dynamic and innovation-driven work culture.
- Guidance and mentorship from experienced data science professionals.
- The chance to work on impactful, cutting-edge projects in computer vision.
- Competitive compensation and employee benefits.
Posted 1 month ago
5.0 - 9.0 years
9 - 13 Lacs
Hyderabad
Work from Office
Job Summary: ServCrust is a rapidly growing technology startup with the vision to revolutionize India's infrastructure by integrating digitization and technology throughout the lifecycle of infrastructure projects.
About The Role: As a Data Science Engineer, you will lead data-driven decision-making across the organization. Your responsibilities will include designing and implementing advanced machine learning models, analyzing complex datasets, and delivering actionable insights to various stakeholders. You will work closely with cross-functional teams to tackle challenging business problems and drive innovation using advanced analytics techniques.
Responsibilities:
- Collaborate with strategy, data engineering, and marketing teams to understand and address business requirements through advanced machine learning and statistical models.
- Analyze large spatiotemporal datasets to identify patterns and trends, providing insights for business decision-making.
- Design and implement algorithms for predictive and causal modeling.
- Evaluate and fine-tune model performance.
- Communicate recommendations based on insights to both technical and non-technical stakeholders.
Requirements:
- A Ph.D. in computer science, statistics, or a related field.
- 5+ years of experience in data science; experience in geospatial data science is an added advantage.
- Proficiency in Python (Pandas, NumPy, scikit-learn, PyTorch, statsmodels, Matplotlib, and Seaborn); experience with GeoPandas and Shapely is an added advantage.
- Strong communication and presentation skills.
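As a small taste of the geospatial feature engineering this role touches on, here is a stdlib haversine sketch (a standard building block; libraries like GeoPandas and Shapely layer richer geometry operations on top of primitives like this):

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    # Great-circle distance between two (lat, lon) points in kilometres,
    # assuming a spherical Earth of radius 6371 km.
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * r * math.asin(math.sqrt(a))

# Hyderabad to Bengaluru, roughly 500 km as the crow flies.
dist = haversine_km(17.385, 78.4867, 12.9716, 77.5946)
print(round(dist))
```

Distances like this become model features (e.g., distance from a project site to the nearest depot) when analyzing spatiotemporal datasets.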
Posted 1 month ago
1.0 - 5.0 years
10 - 14 Lacs
Mumbai
Work from Office
We are seeking an experienced and motivated Data Scraper / Lead Generator to join our fast-growing team in Mumbai. The ideal candidate will have a strong background in generating leads through web scraping and online research, specifically targeting the Europe, UK, USA, and other international markets.
Key Responsibilities:
- Conduct in-depth online research to identify potential leads in targeted geographies.
- Use advanced web scraping tools and techniques to extract accurate contact and business data from various sources.
- Validate and verify collected data to ensure quality and relevance.
- Maintain and manage a structured database of leads for outreach and tracking.
- Collaborate closely with the sales and marketing teams to deliver a steady pipeline of high-quality leads.
- Stay up to date with industry trends, tools, and best practices in data scraping and lead generation.
Requirements:
- Proven experience in data scraping and lead generation, especially in international markets (UK preferred).
- Proficiency in web scraping tools and methods (e.g., Python/BeautifulSoup, Scrapy, Octoparse, or similar).
- Strong attention to detail, organizational skills, and data accuracy.
- Ability to manage time efficiently and handle multiple tasks.
- Excellent communication and coordination skills.
Preferred: Immediate availability or a short notice period.
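The extraction step of the scraping work described above can be sketched with the stdlib `html.parser` (the posting names BeautifulSoup and Scrapy, which offer far more convenience; the HTML snippet and URLs below are invented for the example):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    # Minimal lead-scraping step: pull href targets out of raw HTML.
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self.links.extend(v for k, v in attrs if k == "href")

html = ('<ul><li><a href="https://example.co.uk/firm-a">Firm A</a></li>'
        '<li><a href="https://example.com/firm-b">Firm B</a></li></ul>')
parser = LinkExtractor()
parser.feed(html)
print(parser.links)
```

A real pipeline would fetch pages politely (rate limits, robots.txt), then validate and deduplicate the extracted contacts before they enter the leads database.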
Posted 1 month ago
2.0 - 4.0 years
8 - 12 Lacs
Pune
Work from Office
FlexTrade Systems is a provider of customized multi-asset execution and order management trading solutions for buy- and sell-side financial institutions. Through deep client partnerships with some of the world's largest, most complex and demanding capital markets firms, we develop the flexible tools, technology and innovation that deliver our clients a competitive edge. Our globally distributed engineering teams focus on adaptable technology and open architecture to develop highly sophisticated trading solutions that can automate and scale with your business strategies.
At FlexTrade, we hold our values close to heart, with pride and gratitude, as they guide us in everything we do. We are dedicated to giving our clients a competitive edge, taking ownership of our responsibilities, being flexible enough to adapt to an ever-changing environment and technology, and bringing integrity to every interaction, while continuing to improve and grow together as one team. All of this, while having fun, truly makes FlexTrade a wonderful place to work.
About you: Data Engineer (Python / SQL) – FlexTCA. Join a dynamic FlexTCA (FlexTrade Transaction Cost Analysis) development team. FlexTrade is looking for a Data Engineer to be part of a rapidly evolving technology group supplying top-quality solutions to our growing top-tier global client base. This role offers the opportunity to work closely with data scientists and data engineers within FlexTrade's Quantitative Solutions team, as well as exposure to cutting-edge analytics and machine learning technologies.
The Product: FlexTCA is our post- and pre-trade transaction cost analysis and execution quality management solution offering historical and real-time analytics for trading portfolios and single securities across global equities, FX, futures, and fixed income. FlexTCA is used by investment managers and brokerages to analyze, evaluate, and improve trader, algo, broker, and venue performance. The product includes an intuitive and flexible web interface for data visualization, exploration, and analysis.
CORE RESPONSIBILITIES:
- Design, implementation, and maintenance of FlexTCA's data processing, data management, and BI tooling.
- Contribute to original research on FlexTrade's proprietary cost models.
- The product exposure offers the opportunity for in-depth learning as it relates to both trading and analytics technologies, as well as the associated development life cycle.
Key Skills:
- Able to write queries to facilitate data warehouse integrity checks.
- Contribute to the design, development, and management of order, execution, and market data, along with bucketed timeseries ticks for various asset classes.
- Create and maintain Extract, Transform, and Load (ETL) workflows for a range of datasets.
- Write and maintain API interfaces with third-party data vendors.
- Responsible for all production, QA, and dev ETL and data integrity checks in a Linux environment.
- Provide automation support and backup to the DBA team.
- Help automate Business Intelligence environment maintenance and visualization roll-out using a native Python wrapper.
- Bachelor's degree in Computer Science, or equivalent industry experience; 2+ years of experience required.
- Excellent SQL skills are required.
- Strong experience with Python and UNIX/shell scripting.
- 2+ years of experience with any business intelligence tool (ideally Sisense) strongly preferred.
- 2+ years of experience with NumPy/Pandas.
- 2+ years of experience conducting trading research, with an understanding of sampling, validation, and statistics.
- Excellent communication and problem-solving skills.
Posted 1 month ago
3.0 - 6.0 years
4 - 9 Lacs
Bengaluru
Work from Office
We are looking for Python automation experts with foundational knowledge of Pandas and NumPy, expertise in OCR data extraction, and proficiency in exception handling. The candidate should be well-versed in basic data structures such as lists and dictionaries, as well as file handling techniques. Additionally, basic skills in Excel automation and experience using APIs or databases for data extraction via SQL or LINQ are essential. The ideal candidate should possess strong logical reasoning abilities and demonstrate keen attention to detail. Key responsibilities include understanding existing processes, identifying opportunities for optimization, and developing automation solutions using Python. Tableau as an additional skill is an added bonus. NOTE: A face-to-face interview is mandatory.
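The combination of file handling, dictionaries, and exception handling this posting asks for often shows up when cleaning noisy extracted data. A minimal sketch, with an invented CSV standing in for messy OCR output:

```python
import csv
import io

# Suppose OCR produced these rows; parse defensively instead of crashing.
raw = "invoice,total\nINV-1,100\nINV-2,not_a_number\nINV-3,250\n"

totals = {}
errors = []
for row in csv.DictReader(io.StringIO(raw)):
    try:
        totals[row["invoice"]] = int(row["total"])
    except ValueError:
        errors.append(row["invoice"])  # record the bad row, keep going

print(totals, errors)
```

Collecting failures instead of raising on the first one is the usual pattern for automation jobs: the run completes, and the error list feeds a review or retry step.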
Posted 1 month ago
4.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Date: 19 Jun 2025
Location: Bangalore, IN
Company: Alstom
At Alstom, we understand transport networks and what moves people. From high-speed trains, metros, monorails, and trams, to turnkey systems, services, infrastructure, signalling and digital mobility, we offer our diverse customers the broadest portfolio in the industry. Every day, 80,000 colleagues lead the way to greener and smarter mobility worldwide, connecting cities as we reduce carbon and replace cars.
OVERALL PURPOSE OF THE ROLE
He/she will act as an Anaplan expert and ensure the platform is suitable for user adoption per the business requirements, and compliant with Alstom security standards at all times. He/she will play a crucial role in implementing an architecture that is optimized for performance and storage, and is expected to lead and coordinate end-to-end delivery on projects and demands. In addition, he/she will be responsible for tracking users and managing licenses to ensure compliance with the contractual objectives.
STRUCTURE, REPORTING, NETWORKS & LINKS
Organization Structure:
CITO
|-- VP Data & AI Governance
    |-- Enterprise Data Domain Director
        |-- Head of Analytics Platform
            |-- Analytics Delivery Architect
            |-- Analytics Technical Analyst
Organizational Reporting: Reports to the Head of Analytics Platform.
Networks & Links: Internally: Digital Platforms Team, Innovation Team, Application Platform Owners, business process owners, Infrastructure team. Externally: third-party technology providers, strategic partners.
Location: Position will be based in Bangalore.
RESPONSIBILITIES
- Design, develop, and deploy interactive dashboards and reports using MS Fabric and Qlik Cloud (good to have), ensuring alignment with business requirements and goals.
- Implement and manage data integration workflows utilizing MS Fabric to ensure efficient data processing and accessibility.
- Use Python scripts to automate data cleaning and preprocessing tasks for data models.
- Understand and integrate Power BI reports into other applications using embedded analytics such as Power BI service (SaaS), Teams, and SharePoint, or by API automation.
- Responsible for access management of app workspaces and content.
- Integration of Power BI servers with different data sources, and timely upgrades/servicing of Power BI.
- Able to schedule and refresh jobs on the Power BI on-premises data gateway.
- Configure standard system reports, as well as customized reports as required.
- Help set up various kinds of database connections (SQL, Oracle, Excel, etc.) with Power BI services.
- Investigate and troubleshoot reporting issues and problems.
- Maintain the reporting schedule and document reporting procedures.
- Monitor and troubleshoot data flow issues, optimizing the performance of MS Fabric applications as needed.
- Collaborate with functional and technical architects as business cases are set up for each initiative; collaborate with other analytics teams to drive and operationalize analytical deployment.
- Maintain clear and coherent communication, both verbal and written, to understand data needs and report results.
- Ensure compliance with internal policies and regulations.
- Strong ability to take the lead and be autonomous.
- Proven planning, prioritization, and organizational skills; ability to drive change through innovation and process improvement.
- Able to report to management and stakeholders in a clear and concise manner.
- Good to have: contribution to the integration and utilization of Denodo for data virtualization, enhancing data access across multiple sources.
- Facilitate effective communication with stakeholders regarding project updates, risks, and resolutions to ensure transparency and alignment.
- Participate in team meetings and contribute innovative ideas to improve reporting and analytics solutions.
EDUCATION
Bachelor's/Master's degree in Computer Science, Engineering/Technology, or a related field.
Experience
Minimum 3 and maximum 5 years of total experience.
Mandatory:
- 2+ years of experience in MS Fabric.
- Power BI end-to-end development using Power BI Desktop, connecting multiple data sources (SAP, SQL, Azure, REST APIs, etc.).
- Hands-on experience in Python, R, and SQL for data manipulation, analysis, data pipelines, and database interaction.
- Experience or knowledge in using PySpark or Jupyter Notebook for data cleaning, transformation, exploration, visualization, and building data models on large datasets.
Technical competencies:
- Proficient in using MS Fabric for data integration and automation of ETL processes.
- Knowledge of PySpark modules for data modelling (NumPy, Pandas).
- Hands-on in using the Python and R programming languages for data processing.
- Understanding of data governance principles for quality and security.
- Strong expertise in creating dashboards and reports using Power BI and Qlik.
- Knowledge of data modeling concepts in Qlik and Power BI.
- Proficient in writing complex SQL queries for data extraction and analysis.
- Skilled in utilizing analytical functions in Power BI and Qlik.
- Experience developing visual reports, dashboards, and KPI scorecards using Power BI Desktop and Qlik.
- Hands-on with Power Pivot, role-based data security, Power Query, DAX queries, Excel, pivots/charts/grids, and Power View.
- Good to have: Power BI Services and administration knowledge.
Important to note: As a global business, we're an equal-opportunity employer that celebrates diversity across the 63 countries we operate in. We're committed to creating an inclusive workplace for everyone.
Posted 1 month ago
3.0 - 5.0 years
3 - 6 Lacs
Mumbai
Work from Office
Paramatrix Technologies Pvt. Ltd is looking for a Junior AI/ML Engineer to join our dynamic team and embark on a rewarding career journey. We are seeking a highly skilled and motivated Machine Learning Engineer who will be responsible for designing, developing, and deploying machine learning models to solve complex problems and enhance our products or services. The ideal candidate will have a strong background in machine learning algorithms, programming, and data analysis.
Responsibilities:
- Problem Definition: Collaborate with cross-functional teams to define and understand business problems suitable for machine learning solutions. Translate business requirements into machine learning objectives.
- Data Exploration and Preparation: Analyze and preprocess large datasets to extract relevant features for model training. Address data quality issues and ensure data readiness for machine learning tasks.
- Model Development: Develop and implement machine learning models using state-of-the-art algorithms. Experiment with different models and approaches to achieve optimal performance.
- Training and Evaluation: Train machine learning models on diverse datasets and fine-tune hyperparameters. Evaluate model performance using appropriate metrics and iterate on improvements.
- Deployment: Deploy machine learning models into production environments. Collaborate with DevOps and IT teams to ensure smooth integration.
- Monitoring and Maintenance: Implement monitoring systems to track model performance in real time. Regularly update and retrain models to adapt to evolving data patterns.
- Documentation: Document the entire machine learning development pipeline, from data preprocessing to model deployment. Create user guides and documentation for end-users and stakeholders.
- Collaboration: Collaborate with data scientists, software engineers, and domain experts to achieve project goals. Participate in cross-functional team meetings and knowledge-sharing sessions.
Posted 1 month ago
3.0 - 5.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Position Purpose Overall 3-5 years of experience as a Jr Python Developer in delivery of IT Projects and preferably in the area of Python.. The Developer should have key skills as mentioned below: 1- Strong experience to manage the end to end cycle, knowledge on Financial Market is an advantage 2 Good experience in the areas of Python, SQL server in terms of database design, performance improvement, SQL 3- Participate in Design / Architecture discussions in building new systems, Frameworks and Components 4- Sound knowledge of Agile (Scrum/Kanban) Responsibilities Direct Responsibilities Goto person to find solutions to any technical challenges in the domain. Good Hands on experience in Python. Resolve performance bottlenecks. Participate in POCs and technical feasibility studies. Keep up-to-date with latest technologies, trends and provide inputs, expertise and recommendations. Contributing Responsibilities Contribute towards innovation (e.g. AI/ML); suggest new technical practices for efficiency improvement. Contribute towards recruitment. Level-up of members in the vertical. Technical Behavioral Competencies Resourceful to quickly understand complexities involved and provide the way forward. Good experience in technical analysis of n-tier applications with multiple integrations using object oriented, APIs Microservices approaches. Strong knowledge about design patterns and development principles. Inclination and prior experience of working across SQL, Python and ETL. Strong Hands-on experience in SQL, Python (numpy, pandas, Python Frameworks, Restful APIs, MS-SQL or Oracle. Good Knowledge and experience to use Python packages such as Pandas, NumPy, etc. Cleaning up of Data, Data Wrangling, Analysis of Data, Visualization of Data, User Authorization and Authentication. Good experience in development and maintenance of code/scripts in both functional and technical specifications of all applications component, bug fixing and production support. 
Good knowledge of the Linux/Unix environment (basic commands, shell scripting, etc.), testing phases, documentation, and new frameworks. Some experience of working with build tools like Maven and DevOps tools like Bitbucket, Git, and Jenkins. Knowledge of Agile, Scrum, and DevOps. Development experience in a Data Engineering environment. Ability and willingness to learn and work on diverse technologies (languages, frameworks, and tools). Self-motivated, with good interpersonal skills and an inclination to constantly upgrade to new technologies and frameworks. Good communication and coordination skills.
Nice-to-have Skills: Good knowledge of front-end technologies, preferably Flask/Angular. Experience in cloud architectures. Knowledge/experience of Dynatrace. Knowledge/experience of NoSQL databases (MongoDB, Cassandra), Kafka, and Spark. Some exposure to caching technologies like Redis or Apache Ignite. Experience in Agile SCRUM and DevSecOps. Exposure to client management or the financial domain. Experience in security topics such as IDP, SSO, IAM, and related technologies.
Specific Qualifications (if required)
Skills Referential
Behavioural Skills (please select up to 4 skills): Ability to synthesize/simplify. Ability to collaborate/teamwork. Attention to detail/rigor. Ability to deliver/results driven.
Transversal Skills (please select up to 5 skills): Analytical ability. Ability to manage/facilitate a meeting, seminar, committee, or training. Ability to inspire others and generate people's commitment. Ability to develop and leverage networks. Ability to anticipate business/strategic evolution.
Education Level: Bachelor Degree or equivalent. Experience Level: At least 3 years.
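As a sketch of the data cleaning and wrangling duties listed above, here is a minimal Pandas example; the column names and validation rules are hypothetical:

```python
import numpy as np
import pandas as pd

# Hypothetical trade records with common data-quality problems.
raw = pd.DataFrame({
    "trade_id": [1, 2, 2, 3, 4],
    "price": [101.5, np.nan, np.nan, 99.0, -1.0],
    "currency": [" usd", "EUR", "EUR", "eur ", "USD"],
})

clean = (
    raw.drop_duplicates(subset="trade_id")        # remove duplicate trades
       .assign(currency=lambda d: d["currency"].str.strip().str.upper())
       .query("price > 0")                        # drop missing/invalid prices
)

print(clean["currency"].tolist())
```

The chained style keeps each cleaning step readable and auditable, which matters when the same script later has to be explained during production support.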
Posted 1 month ago
5.0 - 6.0 years
3 - 7 Lacs
Kolkata, Pune, Chennai
Work from Office
Role & responsibilities
Job Title: Developer
Work Location: Kolkata, WB / Chennai, TN / Pune, MH
Skill Required: Digital: Python; Digital: Amazon Web Services (AWS) Cloud Computing
Experience Range in Required Skills: 4-6 years
Job Description: Language: Python. Cloud platform: AWS (Lambda, EC2, S3). DevOps tools: GitLab. Data/ML: NumPy, Pandas. 3+ years of hands-on experience as a Developer.
Essential Skills: As per the job description above.
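A minimal sketch of the Python-on-AWS-Lambda stack named above; the event payload shape is hypothetical, and locally the handler is simply invoked as a function (on AWS, the Lambda runtime supplies the event and context):

```python
import json

def lambda_handler(event, context):
    """Minimal AWS Lambda handler sketch: sums a hypothetical
    'values' list from the event and returns a JSON response."""
    values = event.get("values", [])
    return {
        "statusCode": 200,
        "body": json.dumps({"total": sum(values)}),
    }

# Local invocation for testing; AWS would call lambda_handler directly.
result = lambda_handler({"values": [1, 2, 3]}, None)
print(result)
```

Keeping the handler a plain function like this makes it easy to unit-test in a GitLab pipeline before deployment.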
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
We're Lear for You
Lear, a global automotive technology leader in Seating and E-Systems, is Making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. Working together, we are Making every drive better. To know more about Lear, please visit our career site: www.lear.com
Job Title: Lead Data Engineer
Function: Data Engineer
Location: Bhosari, Pune
Position Focus: As a Lead Data Engineer at Lear, you will take a leadership role in designing, building, and maintaining robust data pipelines within the Foundry platform. Your expertise will drive the seamless integration of data and analytics, ensuring high-quality datasets and supporting critical decision-making processes. If you're passionate about data engineering and have a track record of excellence, this role is for you!
Job Description
Manage Execution of Data-Focused Projects: As a senior member of the Lear Foundry team, support the design, building, and maintenance of data-focused projects using Lear's data analytics and application platforms. Participate in projects from conception to root-cause analytics and solution deployment. Understand program and product delivery phases, contributing expert analysis across the lifecycle. Ensure project deliverables are met as per the agreed timeline.
Tools and Technologies: Utilize key tools within Palantir Foundry, including: Pipeline Builder: author data pipelines using a visual interface. Code Repositories: manage code for data pipeline development. Data Lineage: visualize end-to-end data flows. Leverage programmatic health checks to ensure pipeline durability. Work with both new and legacy technologies to integrate separate data feeds and transform them into new, scalable datasets.
Mentor junior data engineers on best practices.
Data Pipeline Architecture and Development: Lead the design and implementation of complex data pipelines. Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and utilize Git concepts for version control and collaborative development. Optimize data ingestion, transformation, and enrichment processes.
Big Data, Dataset Creation and Maintenance: Utilize Pipeline Builder or Code Repositories to transform big data into manageable datasets and produce high-quality datasets that meet the organization's needs. Implement optimum build times to ensure effective utilization of resources.
High-Quality Dataset Production: Produce and maintain datasets that meet organizational needs. Optimize the size and build schedule of datasets to reflect the latest information. Implement data-quality health checks and validation.
Collaboration and Leadership: Work closely with data scientists, analysts, and operational teams. Provide technical guidance and foster a collaborative environment. Champion transparency and effective decision-making.
Continuous Improvement: Stay abreast of industry trends and emerging technologies. Enhance pipeline performance, reliability, and maintainability. Contribute to the evolution of Foundry's data engineering capabilities.
Compliance and Data Security: Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, continuously improving them.
Quality Assurance & Optimization: Optimize data pipelines and their impact on the resource utilization of downstream processes. Continuously test and improve data pipeline performance and reliability. Optimize system performance for all deployed resources.
Education: Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
Experience: Minimum 5 years of experience in data engineering, ETL, and data integration. Proficiency in Python and libraries like PySpark, Pandas, and NumPy. Strong understanding of Palantir Foundry and its capabilities. Familiarity with big data technologies (e.g., Spark, Hadoop, Kafka). Excellent problem-solving skills and attention to detail. Effective communication and leadership abilities.
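The pipeline and health-check responsibilities above can be sketched in plain Pandas. This is illustrative only and does not use Foundry's actual APIs; the function and column names are assumptions:

```python
import pandas as pd

def health_check(df: pd.DataFrame) -> None:
    """Programmatic health check: fail the build on bad data, as a
    stand-in for a pipeline platform's built-in health checks."""
    assert df["revenue"].notna().all(), "null revenue values"
    assert (df["qty"] > 0).all(), "non-positive quantities"

def transform(orders: pd.DataFrame) -> pd.DataFrame:
    """Illustrative pipeline step: enrich raw orders with a revenue column,
    then validate the output before it reaches downstream datasets."""
    out = orders.assign(revenue=orders["qty"] * orders["unit_price"])
    health_check(out)
    return out

orders = pd.DataFrame({"qty": [2, 5], "unit_price": [10.0, 3.0]})
print(transform(orders)["revenue"].tolist())  # [20.0, 15.0]
```

Putting the validation inside the transform mirrors the "fail fast" pattern: a broken upstream feed stops the pipeline rather than silently corrupting downstream datasets.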
Posted 1 month ago
1.0 - 6.0 years
3 - 5 Lacs
Mumbai
Work from Office
Associate Level 1 / Senior Associate - Data Scientist (Tabular & Text) - GM Data & AI Lab
Position Purpose: Your work will span multiple areas, including predictive modelling, automation, and process optimization. We use AI to discover patterns, classify information, and predict likelihoods. Our team works on building, refining, testing, and deploying these models to support various business use cases, ultimately driving business value and innovation. As a Data Scientist on our team, you can expect to work on challenging projects, collaborate with stakeholders to identify business problems, and have the opportunity to learn and grow with our team. A typical day may involve working on model development, meeting with stakeholders to discuss project requirements and updates, and brainstorming or debugging various technical aspects with colleagues. At the Lab, we're passionate about staying at the forefront of AI research, bridging the gap between research and industry to drive innovation and to make a real impact on our businesses.
Responsibilities: 1. Develop and maintain AI models from inception to deployment, including data collection, analysis, feature engineering, model development, evaluation, and monitoring. 2. Identify areas for model improvement through independent research and analysis, and develop recommendations for updates and enhancements. 3. Work with expert colleagues and business representatives to examine the results and keep models grounded in reality. 4. Document each step of the development and inform decision makers by presenting them with options and results. 5. Ensure the integrity and security of data. 6. Provide support for production models delivered by the Mumbai team, and potentially for other models, in any of the Asian/EU/US time zones.
Technical & Behavioral Competencies: 1. Qualifications: Bachelor's / Master's / Ph.D. degree in Computer Science, Data Science, Mathematics, Statistics, or a relevant STEM field. 2.
Knowledge of key concepts in Statistics and Mathematics, such as statistical methods for machine learning, probability theory, and linear algebra. 3. Experience with Machine Learning and Deep Learning concepts, including data representations, neural network architectures, and custom loss functions. 4. Proven track record of building AI models from scratch, or fine-tuning large models, for tabular and/or textual data. 5. Programming skills in Python and knowledge of common numerical and machine-learning packages (like NumPy, scikit-learn, pandas, PyTorch, transformers, LangChain). 6. Ability to write clear and concise code in Python. 7. Intellectually curious and willing to learn challenging concepts daily. 8. Knowledge of current Machine Learning / Artificial Intelligence literature.
Skills Referential
Behavioural Skills: Ability to collaborate / teamwork. Critical thinking. Communication skills (oral and written). Attention to detail / rigor.
Transversal Skills: Analytical ability.
Education Level: Bachelor Degree or equivalent. Experience Level: At least 1 year.
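As an illustration of the tabular modelling lifecycle described above (data preparation, model development, evaluation), here is a minimal scikit-learn sketch; the dataset is synthetic and stands in for real business data:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic tabular data: 200 rows, 4 features, noise-free binary target.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # linearly separable by design

# Held-out evaluation, as in the model-development-to-monitoring cycle.
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
acc = accuracy_score(y_te, model.predict(X_te))
print(f"held-out accuracy: {acc:.2f}")
```

The same train/evaluate split generalizes to the deep-learning and fine-tuning work the role mentions; only the model class changes.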
Posted 1 month ago
2.0 - 7.0 years
8 - 18 Lacs
Pune, Sonipat
Work from Office
About the Role
Overview: Newton School of Technology is on a mission to transform technology education and bridge the employability gap. As India's first impact university, we are committed to revolutionizing learning, empowering students, and shaping the future of the tech industry. Backed by renowned professionals and industry leaders, we aim to solve the employability challenge and create a lasting impact on society. We are currently looking for a Data Engineer + Associate Instructor (Data Mining) to join our Computer Science Department. This is a full-time academic role focused on data mining, analytics, and teaching/mentoring students in core data science and engineering topics.
Key Responsibilities: Develop and deliver comprehensive and engaging lectures for the undergraduate "Data Mining", "Big Data", and "Data Analytics" courses, covering the full syllabus from foundational concepts to advanced techniques. Instruct students on the complete data lifecycle, including data preprocessing, cleaning, transformation, and feature engineering. Teach the theory, implementation, and evaluation of a wide range of algorithms for classification, association rule mining, clustering, and anomaly detection. Design and facilitate practical lab sessions and assignments that provide students with hands-on experience using modern data tools and software. Develop and grade assessments, including assignments, projects, and examinations, that effectively measure the Course Learning Objectives (CLOs). Mentor and guide students on projects, encouraging them to work with real-world or benchmark datasets (e.g., from Kaggle). Stay current with the latest advancements, research, and industry trends in data engineering and machine learning to ensure the curriculum remains relevant and cutting-edge. Contribute to the academic and research environment of the department and the university.
Required Qualifications: A Ph.D.
(or a Master's degree with significant, relevant industry experience) in Computer Science, Data Science, Artificial Intelligence, or a closely related field. Demonstrable expertise in the core concepts of data engineering and machine learning as outlined in the syllabus. Strong practical proficiency in Python and its data science ecosystem, specifically Scikit-learn, Pandas, NumPy, and visualization libraries (e.g., Matplotlib, Seaborn). Proven experience in teaching, preferably at the undergraduate level, with an ability to make complex topics accessible and engaging. Excellent communication and interpersonal skills. Preferred Qualifications: A strong record of academic publications in reputable data mining, machine learning, or AI conferences/journals. Prior industry experience as a Data Scientist, Big Data Engineer, Machine Learning Engineer, or in a similar role. Experience with big data technologies (e.g., Spark, Hadoop) and/or deep learning frameworks (e.g., TensorFlow, PyTorch). Experience in mentoring student teams for data science competitions or hackathons. Perks & Benefits: Competitive salary packages aligned with industry standards. Access to state-of-the-art labs and classroom facilities. To know more about us, feel free to explore our website: Newton School of Technology We look forward to the possibility of having you join our academic team and help shape the future of tech education!
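An introductory clustering lab of the kind this role would teach might use scikit-learn's KMeans on synthetic data; the two-blob dataset here is illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated blobs, a standard teaching dataset for clustering.
rng = np.random.default_rng(42)
blob_a = rng.normal(loc=0.0, scale=0.5, size=(50, 2))
blob_b = rng.normal(loc=5.0, scale=0.5, size=(50, 2))
X = np.vstack([blob_a, blob_b])

# Fit k-means with k=2; each blob should map to a single cluster label.
km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(sorted(set(km.labels_)))  # the two cluster ids
```

Labs like this let students verify cluster assignments visually (e.g., with Matplotlib) before moving to harder, overlapping datasets.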
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Noida, Gurugram, Bengaluru
Work from Office
Python/Quant Engineer
Key Responsibilities: Design, develop, and maintain scalable Python-based quantitative tools and libraries. Collaborate with quants and researchers to implement and optimize pricing, risk, and trading models. Process and analyze large datasets (market, fundamental, alternative data) to support research and live trading. Build and enhance backtesting frameworks and data pipelines. Integrate models with execution systems and trading platforms. Optimize code for performance and reliability in low-latency environments. Participate in code reviews, testing, and documentation efforts.
Required Qualifications: 5+ years of professional experience in quantitative development or similar roles. Proficiency in Python, including libraries like NumPy, Pandas, SciPy, and Scikit-learn, and experience in object-oriented programming. Strong understanding of data structures, algorithms, and software engineering best practices. Experience working with large datasets, data ingestion, and real-time processing. Exposure to financial instruments (equities, futures, options, FX, fixed income, etc.) and financial mathematics. Familiarity with backtesting, simulation, and strategy-evaluation tools. Experience with Git, Docker, CI/CD, and modern development workflows.
Preferred Qualifications: Experience with C++ for performance-critical modules. Knowledge of machine learning techniques and tools (e.g., TensorFlow, XGBoost). Familiarity with SQL/NoSQL databases and cloud platforms (AWS, GCP). Prior experience in hedge funds, proprietary trading firms, investment banks, or financial data providers.
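As a sketch of the backtesting-framework work described above, here is a toy moving-average crossover backtest in Pandas; the price series and strategy are purely illustrative:

```python
import pandas as pd

def backtest_ma_crossover(prices: pd.Series, fast: int = 2, slow: int = 4) -> float:
    """Toy backtest: hold the asset whenever the fast moving average is
    above the slow one; returns the total compounded strategy return."""
    fast_ma = prices.rolling(fast).mean()
    slow_ma = prices.rolling(slow).mean()
    # Shift the signal one bar so we trade on the *next* bar (no lookahead).
    position = (fast_ma > slow_ma).astype(int).shift(1).fillna(0)
    daily_ret = prices.pct_change().fillna(0)
    return float((1 + position * daily_ret).prod() - 1)

prices = pd.Series([100, 101, 103, 102, 105, 108, 107, 110], dtype=float)
print(f"strategy return: {backtest_ma_crossover(prices):.4f}")
```

The one-bar shift is the key correctness detail in vectorized backtests: without it, the strategy would trade on information it could not have had yet.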
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Role & responsibilities
We are seeking a skilled and passionate Python Developer with hands-on experience in Artificial Intelligence (AI), Machine Learning (ML), and Large Language Models (LLMs) like GPT, BERT, or LLaMA. The ideal candidate will have strong expertise in building LLM-powered applications using frameworks such as LangChain, Hugging Face Transformers, or BERTopic.
Key Responsibilities: Design and implement AI/ML models using Python. Work on NLP pipelines: tokenization, topic modeling, BERTopic. Build and deploy LLM-based apps using LangChain/LangGraph. Develop REST APIs using Flask or FastAPI. Integrate vector databases (FAISS, Pinecone, ChromaDB) for RAG-based apps. Collaborate with cross-functional teams for solution delivery.
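The vector-database integration for RAG mentioned above boils down to similarity search over embeddings. Here is a minimal NumPy stand-in for that core operation; it is illustrative only, with toy hand-written vectors, whereas a real app would use FAISS, Pinecone, or ChromaDB with embeddings from an actual model:

```python
import numpy as np

# Toy document "embeddings" standing in for vectors from an embedding model.
docs = ["reset your password", "configure AWS Lambda", "pandas dataframe basics"]
doc_vecs = np.array([[0.9, 0.1, 0.0],
                     [0.1, 0.9, 0.1],
                     [0.0, 0.2, 0.9]])

def retrieve(query_vec, k=1):
    """Cosine-similarity top-k retrieval: the operation a vector DB
    performs before the retrieved text is fed to the LLM prompt."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

hits = retrieve(np.array([0.0, 1.0, 0.2]))  # query resembling doc 2
print(hits)
```

Dedicated vector DBs replace the brute-force dot product here with approximate nearest-neighbour indexes so retrieval stays fast at millions of documents.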
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
Hyderabad
Work from Office
Remote, but the candidate has to work in the US time zone (night shift). Design, develop, test, and maintain Python applications using the Django and Flask frameworks. Note: Node and Django skills must be very strong. Email id: docs@amensys.com
Posted 1 month ago
2.0 - 7.0 years
10 - 15 Lacs
Hyderabad
Work from Office
About ValGenesis
ValGenesis is a leading digital validation platform provider for life sciences companies. ValGenesis' suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ
About the Role: We are seeking a highly skilled AI/ML Engineer to join our dynamic team to build the next-gen applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.
Responsibilities: Implement and deploy Machine Learning solutions to solve complex problems and deliver real business value, i.e. revenue, engagement, and customer satisfaction. Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency. Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis. Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation. Stay up to date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry.
Requirements: 2-4 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects. Strong programming skills in Python and Rust. Experience with Pandas, NumPy, SciPy, and OpenCV (for image processing). Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch. Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs.
Experience with one or more graph DBs: Neo4j, ArangoDB. Experience with MLOps platforms such as Kubeflow or MLflow. Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models. Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems. Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices. Excellent written and verbal communication skills and interpersonal skills. Advanced degree in Computer Science, Machine Learning, or a related field.
We're on a Mission: In 2005, we disrupted the life sciences industry by introducing the world's first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.
The Team You'll Join: Our customers' success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity's quality of life, and we honor that mission. We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done. We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward. We're in it to win it.
We're on a path to becoming the number one intelligent validation platform in the market, and we won't settle for anything less than being a market leader.
How We Work: Our Chennai, Hyderabad, and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration foster creativity and a sense of community, and are critical to our future success as a company. ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristics protected by local law.
Posted 1 month ago
3.0 - 6.0 years
7 - 16 Lacs
Gurugram, Bengaluru
Work from Office
Tasks and Responsibilities: Design and maintain high-quality technical solutions for risk and marketing data while collaborating with stakeholders to address emerging business challenges. Identify and resolve data/process issues, anticipate trends, and highlight shortcomings in the data processing steps. Develop SAS code using advanced SAS to execute marketing and risk/fraud strategies. Develop SQL scripts to be executed during production deployments on SQL Server. Develop Python notebooks for User Acceptance Testing and System Integration Testing. Day-to-day tasks include creating Python notebooks for data validation, data analysis, and data visualization; creating SAS jobs using advanced SAS concepts; and creating automated SQL scripts. Create Python scripts to automate manual data processing tasks and combine multiple code snippets into single Python notebooks.
Required Skills and Qualifications: 3-5 years of experience as a SAS, SQL, and Python developer with strong hands-on experience. Good hands-on experience in SAS programming; SAS-certified associates preferred. In-depth knowledge of Python software development, including frameworks, tools, and systems (NumPy, Pandas, SciPy, PyTorch, etc.). Hands-on experience writing SQL queries to fetch data from Microsoft SQL Server. Excellent analytical and problem-solving skills. Excellent communication and client-management skills. Good to have: experience in Base SAS / Intermediate SAS.
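The notebook-based data validation work described above might look like this minimal Pandas sketch; the column names and rules are hypothetical:

```python
import pandas as pd

def validate(df: pd.DataFrame) -> list:
    """UAT-style data validations: return a list of issues found
    (illustrative rules for a hypothetical account table)."""
    issues = []
    if df["account_id"].duplicated().any():
        issues.append("duplicate account_id")
    if (df["balance"] < 0).any():
        issues.append("negative balance")
    if df["segment"].isna().any():
        issues.append("missing segment")
    return issues

frame = pd.DataFrame({
    "account_id": [1, 2, 2],
    "balance": [100.0, -5.0, 20.0],
    "segment": ["retail", None, "corp"],
})
print(validate(frame))  # ['duplicate account_id', 'negative balance', 'missing segment']
```

Returning a list of issues, rather than failing on the first one, matches UAT reporting where every discrepancy must be logged for sign-off.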
Posted 1 month ago