0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Area(s) of responsibility

Job Description

Key Responsibilities:
- Design, develop, and maintain scalable, efficient, and reliable systems to support GenAI and machine learning-based applications and use cases
- Lead the development of data pipelines, architectures, and tools to support data-intensive projects, ensuring high performance, security, and compliance
- Collaborate with other stakeholders to integrate AI and ML models into production-ready systems
- Work closely with non-backend expert counterparts, such as data scientists and ML engineers, to ensure seamless integration of AI and ML models into backend systems
- Ensure high-quality code, following best practices and adhering to industry standards and company guidelines

Hard Requirements:
- Senior backend engineer with a proven track record of owning the backend portion of projects
- Experience collaborating with product, project, and domain team members
- Strong understanding of data pipelines, architectures, and tools
- Proficiency in Python (ability to read, write, and debug Python code with minimal guidance)

Mandatory Skills:
- Machine Learning: experience with machine learning frameworks such as scikit-learn, TensorFlow, or PyTorch
- Python: proficiency in Python programming, with experience working with libraries and frameworks such as NumPy, pandas, and Flask
- Natural Language Processing: experience with NLP techniques such as text processing, sentiment analysis, and topic modeling
- Deep Learning: experience with deep learning frameworks such as TensorFlow or PyTorch
- Data Science: experience working with data science tools
- Backend: experience with backend development, including design, development, and deployment of scalable and modular systems
- Artificial Intelligence: experience with AI concepts, including computer vision, robotics, and expert systems
- Pattern Recognition: experience with pattern recognition techniques such as clustering, classification, and regression
- Statistical Modeling: experience with statistical modeling, including hypothesis testing, confidence intervals, and regression analysis
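As a minimal sketch of the NLP skills this listing names (text processing plus sentiment analysis with scikit-learn): the toy reviews, labels, and function name below are illustrative assumptions, not part of the posting.

```python
# Tiny sentiment classifier: TF-IDF features + logistic regression.
# Toy data only; a real pipeline would train on a labeled corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["great product, works well", "terrible, broke after a day",
         "love it, highly recommend", "awful experience, do not buy"]
labels = [1, 0, 1, 0]  # 1 = positive, 0 = negative

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

def predict_sentiment(text: str) -> int:
    """Return 1 for positive, 0 for negative."""
    return int(model.predict([text])[0])
```

The same pipeline shape extends to the listing's topic-modeling item by swapping the classifier for a decomposition step such as NMF.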
Posted 15 hours ago
2.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
Role & Responsibilities

Position type: Full-time, Mon to Sat
Position: Data Science / DA Trainer
Experience: Minimum 2 years
Location: CG Road, Ahmedabad
Shift Timings: 8.5 hours

Technical Skills:
- Good understanding of Python's built-in data types, especially lists, dictionaries, tuples, and sets
- Mastery of N-dimensional NumPy arrays
- Mastery of pandas DataFrames: Python packages, pandas, data cleansing, reading data, parsing and saving data
- MS Excel & Power BI
- Tableau: Tableau interfaces, connecting to data sources, data types in Tableau
- Analytics: descriptive, diagnostic, predictive, and prescriptive analytics
- Excel: IFERROR, LOOKUP, pivot tables; COUNT, COUNTIF, COUNTIFS; SUM, SUMIF; AVERAGE, AVERAGEIF, AVERAGEIFS
- Machine Learning
- Deep Learning

Job Description:
- Conduct live sessions as per the defined curriculum.
- Assist and guide students through unique project-based learning so they complete their technology projects on time.
- Ready to upgrade his/her technology skills to teach students as per business needs.
- Able to take on other responsibilities as per business requirements.

Eligibility:
- Basic qualification should be Graduation/Masters from any technical stream.
- Good communication in any language (English, Hindi, Gujarati).
- Good presentation, explanation, and organizational skills.
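The listing pairs Excel functions (COUNTIF, SUMIF, AVERAGEIF) with pandas; a trainer might show them side by side. The DataFrame below is toy data invented for illustration.

```python
# pandas equivalents of the Excel conditional aggregations named above.
import pandas as pd

df = pd.DataFrame({
    "region": ["East", "West", "East", "North"],
    "sales":  [100, 200, 150, 50],
})

east = df["region"] == "East"                 # boolean mask = the "IF" part
countif   = int(east.sum())                   # COUNTIF(region, "East")
sumif     = int(df.loc[east, "sales"].sum())  # SUMIF(region, "East", sales)
averageif = float(df.loc[east, "sales"].mean())  # AVERAGEIF(...)
```

The mask-then-aggregate pattern generalizes to COUNTIFS/SUMIFS by combining masks with `&`.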
Posted 15 hours ago
2.0 years
0 - 0 Lacs
Kalighat, Kolkata, West Bengal
On-site
We are seeking a highly analytical and technically skilled Data Analyst with hands-on experience in Machine Learning to join our team. The ideal candidate will be responsible for analyzing large datasets, generating actionable insights, and building ML models to drive business solutions and innovation.

Key Responsibilities:
- Collect, clean, and analyze structured and unstructured data from multiple sources.
- Develop dashboards, visualizations, and reports to communicate trends and insights to stakeholders.
- Identify business challenges and apply machine learning algorithms to solve them.
- Build, evaluate, and deploy predictive and classification models using tools like Python, R, scikit-learn, TensorFlow, etc.
- Collaborate with cross-functional teams including product, marketing, and engineering to implement data-driven strategies.
- Optimize models for performance, accuracy, and scalability.
- Automate data processing and reporting workflows using scripting and cloud-based tools.
- Stay updated with the latest industry trends in data analytics and machine learning.

Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field.
- 2+ years of experience in data analytics and machine learning.
- Strong proficiency in SQL, Python (pandas, NumPy, scikit-learn), and data visualization tools like Tableau, Power BI, or Matplotlib/Seaborn.
- Experience with machine learning techniques such as regression, classification, clustering, NLP, and recommendation systems.
- Solid understanding of statistics, probability, and data mining concepts.
- Familiarity with cloud platforms like AWS, GCP, or Azure is a plus.
- Excellent problem-solving and communication skills.
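The build-and-evaluate loop this listing describes can be sketched with scikit-learn on a bundled toy dataset (no real business data; the model choice here is illustrative, not prescribed by the posting).

```python
# Train/test split, fit a classifier, report held-out accuracy.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42)

clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(X_train, y_train)
accuracy = accuracy_score(y_test, clf.predict(X_test))
```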
Job Types: Full-time, Permanent
Pay: ₹10,000.00 - ₹15,000.00 per month
Ability to commute/relocate: Kalighat, Kolkata, West Bengal: Reliably commute or planning to relocate before starting work (Preferred)
Language: English (Preferred)
Work Location: In person
Posted 15 hours ago
5.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 183 million registered learners as of June 30, 2025. Coursera partners with over 350 leading university and industry partners to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera’s platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp.

Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns.

At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and a compatible timezone overlap with their team to facilitate seamless collaboration.

Coursera has a commitment to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, you can select your main way of working, whether from home, one of our offices or hubs, or a co-working space near you.

Job Overview:
We are seeking a highly skilled and collaborative Data Scientist to join our team.
Reporting to the Director of Data Science, you will work alongside and provide technical guidance to a subset of our Analytics and Insights group, supporting several business lines, including Industry and University Partnerships and Content/Credentials, in service of Product, Marketing, Content, Finance, Services, and more. As a Data Scientist, you will influence strategies and roadmaps for business units within your purview through actionable insights. Your responsibilities will include forecasting content performance, informing content acquisition and prescribing improvements, addressing A/B testing setups and reporting, answering ad-hoc business questions, defining metrics and goals, building and managing dashboards, causal inference and ML modeling, supporting business event tracking and unification, and more. The ideal candidate will be a creative and collaborative Data Scientist who can proactively drive results in their areas of focus and provide guidance and best practices around statistical modeling and experimentation, data analysis, and data quality.

Responsibilities:
- Guide the planning, measurement, and evaluation of content, engagement, and learner and student success initiatives and experiments.
- Proactively identify gaps in content availability and recommend targeted content acquisition or improvements to existing content, creating and leveraging forecasts of content performance pre- and post-launch.
- Define and develop KPIs and create reports to measure the impact of various tests, content releases, product improvements, etc.
- Mentor Data Scientists and offer technical guidance throughout project development: dashboard creation, troubleshooting code, statistical modeling, experimentation, and general analysis optimization.
- Examine and prioritize projects and provide leadership with insightful feedback on results.
- Analyze the connection between our consumer-focused business and our learners' pathway to a degree, and optimize systems to lead to the best outcomes for our learners.
- Work closely with the data engineering and operations teams to ensure that we have the right content at the right time, that content funnel tracking supports product needs, and that self-serve reporting tools are maintained.
- Analyze prospect behavior to identify content acquisition and optimization opportunities that promote global growth in the Consumer and Degrees business.
- Run exploratory analyses, uncover new areas of opportunity, create and test hypotheses, develop dashboards, and assess the potential upside of a given opportunity.
- Advise content, marketing, and product managers on experimentation and measurement plans for key growth initiatives, resulting in more lives changed through learning.

Basic Qualifications:
- Background in applied math, computer science, statistics, or a related technical field
- 5+ years of experience using data to advise product or business teams
- 2+ years of experience applying statistical inference techniques to business questions
- Excellent business intuition, cross-functional communication, and project management
- Strong applied statistics and data visualization skills
- Proficient with at least one scripting language (e.g., Python), one statistical software package (e.g., R, NumPy/SciPy/pandas), and SQL

Preferred Qualifications:
- Experience at an EdTech or content subscription business
- Experience partnering with SaaS sales and/or marketing organizations
- Experience working with Salesforce and/or Marketo data
- Experience with Airflow, Databricks, and/or Looker
- Experience with Amplitude

If this opportunity interests you, you might like these courses on Coursera: Go Beyond the Numbers: Translate Data into Insights; Applied AI with DeepLearning; Probability & Statistics for Machine Learning & Data Science.

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California candidates, please review our CCPA Applicant Notice here. For our global candidates, please review our GDPR Recruitment Notice here.
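The A/B-testing and statistical-inference work this role describes often reduces to comparing two conversion rates. As an illustrative sketch (toy numbers, hand-rolled two-proportion z-test; the posting does not prescribe this method):

```python
# Two-sided z-test for a difference in conversion rates between variants.
import numpy as np
from scipy.stats import norm

def ab_test_pvalue(conv_a: int, n_a: int, conv_b: int, n_b: int) -> float:
    """p-value for H0: the two variants convert at the same rate."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)          # pooled rate under H0
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    return 2 * (1 - norm.cdf(abs(z)))

# 5.0% vs 6.5% conversion on 2,400 users each.
p = ab_test_pvalue(conv_a=120, n_a=2400, conv_b=156, n_b=2400)
```

With identical rates the statistic is zero and the p-value is 1; the toy comparison above lands below the conventional 0.05 threshold.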
Posted 16 hours ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are looking for a hands-on data automation engineer with strong Python or Java coding skills and solid SQL expertise, who can work with large datasets, understand stored procedures, and independently write data-driven automation logic. You will develop and execute test cases with a focus on Fixed Income trading workflows. The requirement goes beyond automation tools and aligns better with a junior developer or data automation role.

Desired Skills and Experience:
- Strong programming experience in Python (preferred) or Java.
- Strong experience working with Python and its libraries, such as pandas and NumPy.
- Hands-on experience with SQL, including writing and debugging complex queries (joins, aggregations, filtering, etc.), and understanding stored procedures and using them in automation.
- Experience working with data structures, large tables, and datasets.
- Comfort with data manipulation, validation, and building comparison scripts.

Nice to have:
- Familiarity with PyCharm, VS Code, or IntelliJ for development, and an understanding of how automation integrates into CI/CD pipelines.
- Prior exposure to financial data or post-trade systems (a bonus).
- Excellent communication skills, both written and verbal.
- Experience working with test management tools (e.g., X-Ray/JIRA).
- Extremely strong organizational and analytical skills with strong attention to detail.
- Strong track record of excellent results delivered to internal and external clients.
- Able to work independently without close supervision, and collaboratively as part of cross-team efforts.
- Experience delivering projects within an agile environment.

Key Responsibilities:
- Write custom data validation scripts based on provided regression test cases.
- Read, understand, and translate stored procedure logic into test automation.
- Compare datasets across environments and generate diffs.
- Collaborate with team members and follow structured automation practices.
- Contribute to building and maintaining a central automation script repository.
- Establish and implement comprehensive QA strategies and test plans from scratch.
- Develop and execute test cases with a focus on Fixed Income trading workflows.
- Drive the creation of regression test suites for critical back-office applications.
- Collaborate with developers, business analysts, and project managers to ensure quality throughout the SDLC.
- Provide clear and concise reporting on QA progress and metrics to management.
- Bring strong subject matter expertise in the Financial Services industry, particularly fixed income trading products and workflows.
- Ensure effective, efficient, and continuous communication (written and verbal) with global stakeholders.
- Independently troubleshoot difficult and complex issues in different environments.
- Take responsibility for end-to-end delivery of projects, coordination between the client and internal offshore teams, and managing client queries.
- Demonstrate high attention to detail, work in a dynamic environment while maintaining high quality standards, with a natural aptitude for developing good internal working relationships and a flexible work ethic.
- Take responsibility for quality checks and adhere to the agreed Service Level Agreement (SLA) / Turn Around Time (TAT).
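The "compare datasets across environments and generate diffs" responsibility above can be sketched with a pandas outer merge; the trade data and column names below are invented for illustration.

```python
# Full-row diff between two environment extracts of the "same" table.
import pandas as pd

def diff_datasets(df_a: pd.DataFrame, df_b: pd.DataFrame) -> pd.DataFrame:
    """Return rows present in only one of the two frames."""
    merged = df_a.merge(df_b, how="outer", indicator=True)
    return merged[merged["_merge"] != "both"]

qa   = pd.DataFrame({"trade_id": [1, 2, 3], "price": [99.5, 100.2, 101.0]})
prod = pd.DataFrame({"trade_id": [1, 2, 4], "price": [99.5, 100.2, 98.7]})
diffs = diff_datasets(qa, prod)  # trade 3 only in QA, trade 4 only in prod
```

The `_merge` indicator column labels each mismatch as `left_only` or `right_only`, which maps directly onto a "missing in env" report.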
Posted 16 hours ago
3.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Description

About Velsera
Medicine moves too slow. At Velsera, we are changing that. Velsera was formed in 2023 through the shared vision of Seven Bridges and Pierian, with a mission to accelerate the discovery, development, and delivery of life-changing insights. Velsera provides software and professional services for:
- AI-powered multimodal data harmonization and analytics for drug discovery and development
- IVD development, validation, and regulatory approval
- Clinical NGS interpretation, reporting, and adoption
With our headquarters in Boston, MA, we are growing and expanding our teams located in different countries!

What will you do?
- Development: Write clean, efficient, and well-documented Python code to meet project requirements.
- API Development: Develop RESTful APIs and integrate third-party APIs when necessary.
- Testing: Write unit tests and integration tests to ensure code quality and functionality.
- Collaboration: Work closely with cross-functional teams to implement new features and improve existing ones.
- Code Review: Participate in peer code reviews and provide constructive feedback to team members.
- Maintenance: Troubleshoot, debug, and maintain the existing codebase to improve performance and scalability. Work proactively to identify tech debt items and propose solutions to address them.
- Documentation: Maintain detailed and accurate documentation for code, processes, and design.
- Continuous Improvement: Stay up-to-date with the latest Python libraries, frameworks, and industry best practices.

Requirements

What do you bring to the table?
- Experience: 3+ years of experience in Python development.
- Technical Skills: Proficiency in Python 3.x and familiarity with popular Python libraries (e.g., NumPy, pandas, Flask, boto3). Experience developing Lambda functions. Strong understanding of RESTful web services and APIs. Familiarity with relational databases (e.g., PostgreSQL) and NoSQL databases (e.g., MongoDB). Knowledge of version control systems (e.g., Git). Experience with Docker and containerization. Experience with AWS services such as ECR, Batch jobs, Step Functions, CloudWatch, etc. Experience with Jenkins is a plus.
- Problem-Solving Skills: Strong analytical and debugging skills, with the ability to troubleshoot complex issues.
- Soft Skills: Strong written and verbal communication skills. Ability to work independently as well as collaboratively in a team environment. Detail-oriented with the ability to manage multiple tasks and priorities.

Preferred Skills:
- Experience working in the healthcare or life sciences domain
- Strong understanding of application security and OWASP best practices
- Hands-on experience with serverless architectures (e.g., AWS Lambda)
- Proven experience in mentoring junior developers and conducting code reviews

Benefits
- Flexible Work & Time Off - Embrace hybrid work models and enjoy the freedom of unlimited paid time off to support work-life balance.
- Health & Well-being - Access comprehensive group medical and life insurance coverage, along with a 24/7 Employee Assistance Program (EAP) for mental health and wellness support.
- Growth & Learning - Fuel your professional journey with continuous learning and development programs designed to help you upskill and grow.
- Recognition & Rewards - Get recognized for your contributions through structured reward programs and campaigns.
- Engaging & Fun Work Culture - Experience a vibrant workplace with team events, celebrations, and engaging activities that make every workday enjoyable.
- & Many More...
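The "developing Lambda functions" item above can be sketched as a minimal AWS Lambda handler. The event shape (an API-Gateway-style `body`) is an assumption; because the handler is plain Python, it can be exercised locally without AWS.

```python
# Minimal Lambda handler: validate a JSON payload and echo it back.
import json

def lambda_handler(event, context):
    """Return 200 with the parsed payload, or 400 on malformed JSON."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400,
                "body": json.dumps({"error": "invalid JSON"})}
    return {"statusCode": 200, "body": json.dumps({"received": body})}

resp = lambda_handler({"body": json.dumps({"sample_id": "S1"})}, None)
```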
Posted 16 hours ago
5.0 - 8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Python GenAI Engineer:
- Proven experience (5 to 8 years) as a Python Developer or in a similar role, with a strong portfolio of Python-based projects and applications.
- Proficiency in the Python programming language and its standard libraries, frameworks, and tools, such as NumPy, SciPy, pandas, Dask, spaCy, NLTK, scikit-learn, and PyTorch.
- Must have experience in Generative AI (GenAI).
- Experience with REST API libraries and frameworks such as Django, Flask, and SQLAlchemy.
- Solid understanding of object-oriented programming (OOP) principles, data structures, and algorithms.
- Experience with database design, SQL, and ORM frameworks (e.g., SQLAlchemy, Django ORM).
- Familiarity with front-end technologies such as HTML, CSS, and JavaScript, and client-side frameworks (e.g., React, Angular, Vue.js).
- Knowledge of version control systems (e.g., Git) and collaborative development workflows (e.g., GitHub, GitLab).
- Strong analytical and problem-solving skills, with keen attention to detail and a passion for continuous improvement.
- Excellent communication and interpersonal skills, with the ability to collaborate effectively in a team environment and communicate technical concepts to non-technical stakeholders.
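As an illustrative sketch of the Flask REST experience this listing asks for: a single JSON endpoint. The route, payload shape, and placeholder "model" are assumptions for the example, not details from the posting.

```python
# Minimal Flask JSON endpoint, testable without starting a server.
from flask import Flask, jsonify, request

app = Flask(__name__)

@app.route("/predict", methods=["POST"])
def predict():
    payload = request.get_json(force=True)
    # Placeholder "model": report the feature count instead of a prediction.
    return jsonify({"n_features": len(payload.get("features", []))})

if __name__ == "__main__":
    app.run(port=5000)
```

Flask's built-in test client lets the route be exercised in-process, which is also the usual unit-testing pattern for such services.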
Posted 17 hours ago
0 years
0 Lacs
India
Remote
Role: Support Specialist L3
Location: India

About the Operations Team:
The team covers the activities, processes, and practices involved in managing and maintaining the operational aspects of an organization’s IT infrastructure and systems. It focuses on ensuring the smooth and reliable operation of IT services, infrastructure components, and supporting systems in the Data & Analytics area.

Duties:
- Provide expert service support as the L3 specialist for the service.
- Identify, analyze, and develop solutions for complex incidents or problems raised by stakeholders and clients as needed.
- Analyze issues and develop tools and/or solutions that enable business continuity and mitigate business impact.
- Proactively and promptly update assigned tasks, and provide responses and solutions within the team's agreed timelines.
- Propose corrective action plans for problems.
- Deploy bug fixes in managed applications.
- Gather requirements, then analyze, design, and implement complex visualization solutions.
- Participate in internal knowledge sharing, collaboration activities, and service improvement initiatives.
- Tasks may include monitoring, incident/problem resolution, documentation, automation, assessment, and implementation/deployment of change requests.
- Provide technical feedback and mentoring to teammates.

Requirements:
- Willing to work either an ASIA, EMEA, or NALA shift.
- Strong problem-solving, analytical, and critical thinking skills.
- Strong communication skills: the ability to translate technical details for business/non-technical stakeholders.
- Extensive experience with SQL, T-SQL, and PL/SQL, including but not limited to ETL, merge, partition exchange, exception and error handling, and performance tuning.
- Experience with Python/PySpark, mainly pandas, NumPy, pathlib, and PySpark SQL functions.
- Experience with Azure fundamentals, particularly Azure Blob Storage (file systems and AzCopy).
- Experience with Azure Data Services: Databricks and Data Factory.
- Understands the operation of ETL processes, triggers, and schedulers; logging, dbutils, PySpark SQL functions, and handling different file formats, e.g., JSON.
- Experience with Git repository maintenance and DevOps concepts; familiarity with build, test, and deployment processes.

Nice to have:
- Experience with Control-M (if no experience, required to learn on the job)
- KNIME
- Power BI
- Willingness to be cross-trained on all of the technologies involved in the solution

We offer:
- Stable employment. On the market since 2008, with 1300+ talents currently on board in 7 global sites.
- “Office as an option” model. You can choose to work remotely or in the office.
- Flexibility regarding working hours and your preferred form of contract.
- Comprehensive online onboarding program with a “Buddy” from day 1.
- Cooperation with top-tier engineers and experts.
- Unlimited access to the Udemy learning platform from day 1.
- Certificate training programs. Lingarians earn 500+ technology certificates yearly.
- Upskilling support. Capability development programs, Competency Centers, knowledge sharing sessions, community webinars, and 110+ training opportunities yearly.
- Grow as we grow as a company. 76% of our managers are internal promotions.
- A diverse, inclusive, and values-driven community.
- Autonomy to choose the way you work. We trust your ideas.
- Create our community together. Refer your friends to receive bonuses.
- Activities to support your well-being and health.
- Plenty of opportunities to donate to charities and support the environment.

If you are interested in this position, please apply via the link below.
Application Link
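The ETL exception-and-error-handling skills in the listing above can be sketched on the pandas side (the file schema and column names are invented; a PySpark version of the same read-validate-transform steps would use `spark.read` and `DataFrame.withColumn`).

```python
# read -> validate -> transform, with explicit error handling.
import pandas as pd

REQUIRED_COLUMNS = {"id", "amount", "currency"}

def transform(df: pd.DataFrame) -> pd.DataFrame:
    """Validate the schema, drop keyless rows, coerce amounts to numeric."""
    missing = REQUIRED_COLUMNS - set(df.columns)
    if missing:
        raise ValueError(f"missing columns: {sorted(missing)}")
    out = df.dropna(subset=["id"]).copy()
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce").fillna(0.0)
    return out

clean = transform(pd.DataFrame(
    {"id": [1, 2, None], "amount": ["10.5", "x", "3"], "currency": ["USD"] * 3}))
```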
Posted 17 hours ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Client :- Our client is a French multinational information technology (IT) services and consulting company, headquartered in Paris, France. Founded in 1967, It has been a leader in business transformation for over 50 years, leveraging technology to address a wide range of business needs, from strategy and design to managing operations. The company is committed to unleashing human energy through technology for an inclusive and sustainable future, helping organizations accelerate their transition to a digital and sustainable world. They provide a variety of services, including consulting, technology, professional, and outsourcing services. Job Details :- Position: Data Analyst - AI& Bedrock Experience Required: 6-10yrs Notice: immediate Work Location: Pune Mode Of Work: Hybrid Type of Hiring: Contract to Hire Job Description:- FAS - Data Analyst - AI & Bedrock Specialization About Us: We are seeking a highly experienced and visionary Data Analyst with a deep understanding of artificial intelligence (AI) principles and hands-on expertise with cutting-edge tools like Amazon Bedrock. This role is pivotal in transforming complex datasets into actionable insights, enabling data-driven innovation across our organization. Role Summary: The Lead Data Analyst, AI & Bedrock Specialization, will be responsible for spearheading advanced data analytics initiatives, leveraging AI and generative AI capabilities, particularly with Amazon Bedrock. With 5+ years of experience, you will lead the design, development, and implementation of sophisticated analytical models, provide strategic insights to stakeholders, and mentor a team of data professionals. This role requires a blend of strong technical skills, business acumen, and a passion for pushing the boundaries of data analysis with AI. 
Key Responsibilities: • Strategic Data Analysis & Insight Generation: o End-to-end data analysis projects, from defining business problems to delivering actionable insights that influence strategic decisions. o Utilize advanced statistical methods, machine learning techniques, and AI-driven approaches to uncover complex patterns and trends in large, diverse datasets. o Develop and maintain comprehensive dashboards and reports, translating complex data into clear, compelling visualizations and narratives for executive and functional teams. • AI/ML & Generative AI Implementation (Bedrock Focus): o Implement data analytical solutions leveraging Amazon Bedrock, including selecting appropriate foundation models (e.g., Amazon Titan, Anthropic Claude) for specific use cases (text generation, summarization, complex data analysis). o Design and optimize prompts for Large Language Models (LLMs) to extract meaningful insights from unstructured and semi-structured data within Bedrock. o Explore and integrate other AI/ML services (e.g., Amazon SageMaker, Amazon Q) to enhance data processing, analysis, and automation workflows. o Contribute to the development of AI-powered agents and intelligent systems for automated data analysis and anomaly detection. • Data Governance & Quality Assurance: o Ensure the accuracy, integrity, and reliability of data used for analysis. o Develop and implement robust data cleaning, validation, and transformation processes. o Establish best practices for data management, security, and governance in collaboration with data engineering teams. • Technical Leadership & Mentorship: o Evaluate and recommend new data tools, technologies, and methodologies to enhance analytical capabilities. o Collaborate with cross-functional teams, including product, engineering, and business units, to understand requirements and deliver data-driven solutions. 
• Research & Innovation: o Stay abreast of the latest advancements in AI, machine learning, and data analytics trends, particularly concerning generative AI and cloud-based AI services. o Proactively identify opportunities to apply emerging technologies to solve complex business challenges. Required Skills & Qualifications: • Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Mathematics, Engineering, or a related quantitative field. • 5+ years of progressive experience as a Data Analyst, Business Intelligence Analyst, or similar role, with a strong portfolio of successful data-driven projects. • Proven hands-on experience with AI/ML concepts and tools, with a specific focus on Generative AI and Large Language Models (LLMs). • Demonstrable experience with Amazon Bedrock is essential, including knowledge of its foundation models, prompt engineering, and ability to build AI-powered applications. • Expert-level proficiency in SQL for data extraction and manipulation from various databases (relational, NoSQL). • Advanced proficiency in Python (Pandas, NumPy, Scikit-learn, etc.) or R for data analysis, statistical modeling, and scripting. • Strong experience with data visualization tools such as Tableau, Power BI, Qlik Sense, or similar, with a focus on creating insightful and interactive dashboards. • Experience with cloud platforms (AWS preferred) and related data services (e.g., S3, Redshift, Glue, Athena). • Excellent analytical, problem-solving, and critical thinking skills. • Strong communication and presentation skills, with the ability to convey complex technical findings to non-technical stakeholders. • Ability to work independently and collaboratively in a fast-paced, evolving environment. Preferred Qualifications: • Experience with other generative AI frameworks or platforms (e.g., OpenAI, Google Cloud AI). • Familiarity with data warehousing concepts and ETL/ELT processes. • Knowledge of big data technologies (e.g., Spark, Hadoop). 
• Experience with MLOps practices for deploying and managing AI/ML models.
Posted 19 hours ago
1.0 - 7.0 years
0 Lacs
Maharashtra
On-site
We are seeking an experienced AI Data Analyst with over 7 years of professional experience, showcasing leadership in tech projects. The ideal candidate will possess strong proficiency in Python, Machine Learning, AI APIs, and Large Language Models (LLMs). You will have the opportunity to work on cutting-edge AI solutions, including vector-based search and data-driven business insights.

Your experience should include:
- At least 2 years of hands-on experience as a Data Analyst.
- Practical experience of at least 1 year with AI systems such as LLMs, AI APIs, or vector-based search.
- 2+ years of experience working with Machine Learning models and solutions.
- A strong background of 5+ years in Python programming.
- Exposure to vector databases like pgvector and ChromaDB is considered a plus.

Key Responsibilities:
- Conduct data exploration, profiling, and cleaning on large datasets.
- Design, implement, and evaluate machine learning and AI models to address business problems.
- Utilize LLM APIs, foundation models, and vector databases to support AI-driven analysis.
- Construct end-to-end ML workflows, from data preprocessing to deployment.
- Develop visualizations and dashboards for internal reports and presentations.
- Analyze and interpret model outputs, providing actionable insights to stakeholders.
- Collaborate with engineering and product teams to implement AI solutions across business processes.

Required Skills:
Data Analysis:
- At least 1 year of hands-on work with real-world datasets.
- Proficiency in Exploratory Data Analysis (EDA), data wrangling, and visualization using tools like Pandas, Seaborn, or Plotly.

Machine Learning & AI:
- At least 2 years applying machine learning techniques (classification, regression, clustering, etc.).
- Hands-on experience with AI technologies such as Generative AI, LLMs, AI APIs (e.g., OpenAI, Hugging Face), and vector-based search systems.
- Knowledge of model evaluation, hyperparameter tuning, and model selection.
- Exposure to AI-driven analysis, including RAG (Retrieval-Augmented Generation) and other AI solution architectures.

Programming:
- Proficiency in Python programming for at least 3 years, with expertise in libraries like scikit-learn, NumPy, Pandas, etc.
- Strong understanding of data structures and algorithms relevant to AI and ML.

Tools & Technologies:
- Proficiency in SQL/PostgreSQL.
- Familiarity with vector databases like pgvector and ChromaDB.
- Exposure to LLMs, foundation models, RAG systems, and embedding techniques.
- Familiarity with cloud platforms such as AWS, SageMaker, or similar.
- Knowledge of version control systems (e.g., Git), REST APIs, and Linux.

Good to Have:
- Experience with tools like Scrapy, spaCy, or OpenCV.
- Knowledge of MLOps, model deployment, and CI/CD pipelines.
- Familiarity with deep learning frameworks like PyTorch or TensorFlow.

Soft Skills:
- A strong problem-solving mindset and analytical thinking.
- Excellent communication skills, with the ability to convey technical information clearly to non-technical stakeholders.
- Collaborative, proactive, and self-driven in a fast-paced, dynamic environment.

If you meet the above requirements and are eager to contribute to a dynamic team, share your resume with kajal.uklekar@arrkgroup.com. We look forward to welcoming you to our team in Mahape, Navi Mumbai for a hybrid work arrangement. Immediate joiners are preferred.
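The vector-based search responsibilities above reduce to embedding documents and ranking them by similarity to a query embedding. Below is a minimal, library-agnostic sketch of that retrieval step; the four-dimensional "embeddings" are invented toy vectors, not the output of a real model:

```python
import numpy as np

def top_k_similar(query_vec, doc_vecs, k=2):
    """Rank document vectors by cosine similarity to a query vector.

    This mirrors what a vector database such as pgvector or ChromaDB
    does internally for nearest-neighbour search.
    """
    q = query_vec / np.linalg.norm(query_vec)
    d = doc_vecs / np.linalg.norm(doc_vecs, axis=1, keepdims=True)
    scores = d @ q                      # cosine similarity per document
    order = np.argsort(scores)[::-1]    # best match first
    return order[:k], scores[order[:k]]

# Toy 4-dimensional "embeddings" for three documents and one query.
docs = np.array([
    [0.9, 0.1, 0.0, 0.0],   # doc 0: close to the query
    [0.0, 0.0, 1.0, 0.0],   # doc 1: orthogonal to it
    [0.7, 0.3, 0.0, 0.1],   # doc 2: fairly close
])
query = np.array([1.0, 0.0, 0.0, 0.0])

idx, scores = top_k_similar(query, docs, k=2)
print(idx)  # doc 0 ranks first, doc 2 second
```

In a RAG pipeline the retrieved documents would then be stuffed into an LLM prompt; here only the ranking step is shown.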
Posted 20 hours ago
0.0 - 4.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Scientist Intern at Evoastra Ventures Pvt. Ltd., you will have the opportunity to work with real-world datasets and gain valuable industry exposure to accelerate your entry into the data science domain. Evoastra Ventures is a research-first data and AI solutions company that focuses on delivering value through predictive analytics, market intelligence, and technology consulting. Our goal is to empower businesses by transforming raw data into strategic decisions. In this role, you will be responsible for performing data cleaning, preprocessing, and transformation, as well as conducting exploratory data analysis (EDA) to identify trends. You will also assist in the development and evaluation of machine learning models and contribute to reports and visual dashboards summarizing key insights. Additionally, you will document workflows, collaborate with team members on project deliverables, and participate in regular project check-ins and mentorship discussions. To excel in this role, you should have a basic knowledge of Python, statistics, and machine learning concepts, along with good analytical and problem-solving skills. You should also be willing to learn and adapt in a remote, team-based environment, possess strong communication and time-management skills, and have access to a laptop with a stable internet connection. Throughout the internship, you will gain a Verified Internship Certificate, a Letter of Recommendation based on your performance, real-time mentorship from professionals in data science and analytics, project-based learning opportunities with portfolio-ready outputs, and priority consideration for future paid internships or full-time roles at Evoastra. You will also be recognized in our internship alumni community. 
If you meet the eligibility criteria and are eager to build your career foundation with hands-on data science projects that make an impact, we encourage you to submit your resume via our internship application form at www.evoastra.in. Selected candidates will receive an onboarding email with further steps. Please note that this internship is fully remote and unpaid.
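The intern workflow described above (cleaning, preprocessing, then exploratory analysis) can be sketched in a few lines of pandas; the tiny inline dataset is invented purely for illustration:

```python
import pandas as pd

# Hypothetical raw data with the usual problems: a missing value and a duplicate row.
raw = pd.DataFrame({
    "region": ["north", "south", "north", "south", "north", "north"],
    "sales":  [100.0, None, 150.0, 90.0, 150.0, 120.0],
})

# Cleaning: drop exact duplicates, fill missing sales with the median.
clean = raw.drop_duplicates()
clean = clean.assign(sales=clean["sales"].fillna(clean["sales"].median()))

# EDA: summary statistics and a simple group-wise comparison.
summary = clean["sales"].describe()
by_region = clean.groupby("region")["sales"].mean()
print(by_region)
```

The same pattern (deduplicate, impute, summarize, group) scales from this toy frame to the large datasets the internship mentions.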
Posted 20 hours ago
2.0 - 8.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Senior Data Scientist with our fast-growing team, you should possess a total of 7-8 years of experience, with a specific focus of 3-5 years on Machine Learning and Deep Learning. Your expertise should include working with Convolutional Neural Networks (CNNs), image analytics, TensorFlow, and OpenCV, among others. Your primary responsibilities will revolve around designing and developing highly scalable machine learning solutions that have a significant impact on various aspects of our business. You will play a crucial role in creating neural network solutions, particularly convolutional neural networks, and ML solutions based on our architecture supported by big data, cloud technology, micro-service architecture, and high-performing compute infrastructure. Your daily tasks will involve contributing to all stages of algorithm development, from ideation to design, prototyping, and production implementation. To excel in this role, you should have a solid foundation in software engineering and data science, along with a deep understanding of machine learning algorithms, statistical analysis tools, and distributed systems. Experience in developing machine learning applications, familiarity with various machine learning APIs, tools, and open-source libraries, as well as proficiency in coding, data structures, predictive modeling, and big data concepts are essential. Additionally, expertise in designing full-stack ML solutions in a distributed compute environment is crucial. Proficiency in Python, TensorFlow, Keras, scikit-learn, pandas, NumPy, Azure, and AWS GPU instances is required. Strong communication skills to effectively collaborate with various levels of the organization are also necessary. If you are a Junior Data Scientist looking to join our team, you should have 2-4 years of experience and hands-on experience in Deep Learning, Computer Vision, Image Processing, and related skills.
We are seeking self-motivated individuals who are eager to tackle challenges in the realm of AI predictive image analytics and machine learning.
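The CNN and image-analytics work described above rests on one core operation: sliding a small kernel over an image. Here is a minimal NumPy sketch of a "valid" 2D convolution; frameworks such as TensorFlow and OpenCV provide heavily optimized versions of the same idea:

```python
import numpy as np

def conv2d_valid(image, kernel):
    """Naive 'valid' 2D cross-correlation, the building block of a CNN layer."""
    ih, iw = image.shape
    kh, kw = kernel.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            # Element-wise multiply the window by the kernel and sum.
            out[y, x] = np.sum(image[y:y + kh, x:x + kw] * kernel)
    return out

# A toy 4x4 "image" with a constant horizontal gradient, and a 2x2 edge-style kernel.
img = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.array([[1.0, -1.0],
                   [1.0, -1.0]])
print(conv2d_valid(img, kernel))  # every entry is -2.0: the gradient is constant
```

A real CNN stacks many such kernels, learns their weights by backpropagation, and interleaves them with nonlinearities and pooling; the sliding-window arithmetic is exactly this.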
Posted 20 hours ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As a Data Visualization Engineer at Zoetis, Inc., you will be an integral part of the pharmaceutical R&D team, contributing to the development and implementation of cutting-edge visualizations that drive decision-making in drug discovery, development, and clinical research. You will collaborate closely with scientists, analysts, and other stakeholders to transform complex datasets into impactful visual narratives that provide key insights and support strategic initiatives. Your responsibilities will include designing and developing a variety of visualizations, ranging from interactive dashboards to static reports, to summarize key insights from high-throughput screening, clinical trial data, and other R&D datasets. You will work on implementing visual representations for pathway analysis, pharmacokinetics, omics data, and time-series trends, utilizing advanced visualization techniques and tools to create compelling visuals tailored to technical and non-technical audiences. Collaboration with cross-functional teams will be a key aspect of your role, as you partner with data scientists, bioinformaticians, pharmacologists, and clinical researchers to identify visualization needs and translate scientific data into actionable insights. Additionally, you will be responsible for maintaining and optimizing visualization tools, building reusable components, and evaluating emerging technologies to support large-scale data analysis. Staying updated on the latest trends in visualization technology and methods relevant to pharmaceutical research will be essential, as you apply advanced techniques such as 3D molecular visualization, network graphs, and predictive modeling visuals. You will also collaborate across the full spectrum of R&D functions, aligning technology solutions with the diverse needs of scientific disciplines and development pipelines. 
In terms of qualifications, you should possess a Bachelor's or Master's degree in Computer Science, Data Science, Bioinformatics, or a related field. Experience in the pharmaceutical or biotech sectors is considered a strong advantage. Proficiency in visualization tools such as Tableau, Power BI, and programming languages like Python, R, or JavaScript is required. Familiarity with data handling tools, omics and network tools, as well as dashboarding and 3D visualization tools, will also be beneficial. Soft skills such as strong storytelling ability, effective communication, collaboration with interdisciplinary teams, and analytical thinking are crucial for success in this role. Travel requirements for this position are minimal, ranging from 0-10%. Join us at Zoetis India Capability Center (ZICC) in Hyderabad, and be part of our journey to pioneer innovation and drive the future of animal healthcare.
Posted 20 hours ago
3.0 - 10.0 years
0 Lacs
Telangana
On-site
The U.S. Pharmacopeial Convention (USP) is an independent scientific organization collaborating with top health and science authorities to develop quality standards for medicines, dietary supplements, and food ingredients. With over 1,300 professionals across twenty global locations, USP's core value of Passion for Quality drives its mission to strengthen the supply of safe, quality medicines and supplements worldwide. USP values inclusivity and fosters opportunities for mentorship and professional growth. Emphasizing Diversity, Equity, Inclusion, and Belonging, USP aims to build a world where quality in health and healthcare is assured. The Digital & Innovation group at USP is seeking a Data Scientist proficient in advanced analytics, data visualization, and machine learning to work on innovative projects and deliver digital solutions. The ideal candidate will leverage data insights to create a unified experience across USP's ecosystem. In this role, you will contribute to USP's public health mission by increasing access to high-quality, safe medicine and improving global health through public standards and programs. Collaborate with data scientists, engineers, and IT teams to ensure project success, apply ML techniques for business impact, and communicate results effectively to diverse audiences. **Requirements:** **Education:** Bachelor's degree in relevant field (e.g., Engineering, Analytics, Data Science, Computer Science, Statistics) or equivalent experience. **Experience:** - Data Scientist: 3-6 years hands-on experience in data science, machine learning, statistics, and natural language processing. - Senior Data Scientist: 6-10 years hands-on experience in data science and advanced analytics. - Proficiency in Python packages and visualization tools, SQL, and CNN/RNN models. - Experience with data extraction, XML documents, and DOM model. **Additional Preferences:** - Master's degree in relevant fields. - Experience in scientific chemistry or life sciences. 
- Familiarity with pharmaceutical datasets and nomenclature.
- Ability to translate stakeholder needs into technical outputs.
- Strong communication skills and ability to explain technical issues to non-technical audiences.

**Supervisory Responsibilities:** Non-supervisory position

**Benefits:** USP offers comprehensive benefits for personal and financial well-being, including healthcare options and retirement savings. USP does not accept unsolicited resumes from third-party recruitment agencies. Job Type: Full-Time.
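Since the role above calls for data extraction from XML documents and the DOM model, here is a minimal sketch using Python's standard library; the monograph-style structure and element names are invented for illustration, loosely inspired by the kind of structured documents a standards organization handles:

```python
import xml.etree.ElementTree as ET

# Hypothetical monograph-style XML document.
doc = """
<monograph name="Acetaminophen">
  <test id="assay"><result unit="%">99.2</result></test>
  <test id="impurity"><result unit="%">0.1</result></test>
</monograph>
"""

root = ET.fromstring(doc)

# Walk the element tree and extract each test's numeric result.
results = {
    t.get("id"): float(t.find("result").text)
    for t in root.iter("test")
}
print(results)  # {'assay': 99.2, 'impurity': 0.1}
```

`ElementTree` gives a tree-walking view of the document; for strict W3C DOM semantics the stdlib also ships `xml.dom.minidom`, but the extraction pattern is the same.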
Posted 20 hours ago
3.0 - 7.0 years
0 Lacs
Karnataka
On-site
As a Senior Data Scientist I at Dotdash Meredith, you will collaborate with the business team to understand problems, objectives, and desired outcomes. Your primary responsibility will be to work with cross-functional teams to assess data science use cases & solutions, lead and execute end-to-end data science projects, and collaborate with stakeholders to ensure alignment of data solutions with business goals. You will be expected to build custom data models with an initial focus on content classification, utilize advanced machine learning techniques to improve model accuracy and performance, and build the visualizations business teams need to interpret data models. Additionally, you will work closely with the engineering team to integrate models into production systems, monitor model performance in production, and make improvements as necessary. To excel in this role, you must possess a Master's degree (or equivalent experience) in Data Science, Mathematics, Statistics, or a related field with 3+ years of experience in ML/Data Science/Predictive Analytics. Strong programming skills in Python and experience with standard data science tools and libraries are essential. Experience or understanding of deploying machine learning models in production on at least one cloud platform is required, and hands-on experience with LLM APIs and the ability to craft effective prompts are preferred. It would be beneficial to have experience in the Media domain, familiarity with vector databases like Milvus, and e-commerce or taxonomy classification experience. In this role, you will have the opportunity to learn about building ML models using industry-standard frameworks, solving data science problems for the media industry, and the use of Gen AI in Media. This position is based in Eco World, Bengaluru, with shift timings from 1 p.m. to 10 p.m. IST.
If you are a bright, engaged, creative, and fun individual with a passion for data science, we invite you to join our inspiring team at Dotdash Meredith India Services Pvt. Ltd.
Posted 20 hours ago
4.0 - 8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
We are seeking a highly analytical Quantitative Research Analyst to join our AIF Quant Fund team. The ideal candidate brings strong buy-side experience in systematic investment strategies and will play a key role in developing, implementing, and maintaining sophisticated quantitative models that drive our alternative investment approach.

About The Role
The Quantitative Research Analyst will be responsible for key duties spanning model development, data infrastructure management, strategy testing, risk management, and technology:

Model Development & Research
- Design and build multi-factor models for equity, fixed income, and alternative asset classes.
- Develop alpha generation signals and systematic trading strategies across multiple time horizons.
- Research and implement new quantitative factors using academic literature and market insights.
- Enhance existing models through continuous performance monitoring and iterative improvements.

Data Infrastructure & Analytics
- Manage large-scale financial datasets using Snowflake, SQL, and cloud-based platforms.
- Build automated data pipelines for real-time and historical market data processing.
- Ensure data quality and integrity, and optimize query performance for research workflows.
- Develop efficient storage solutions for multi-asset research environments.

Strategy Testing & Validation
- Conduct comprehensive backtesting across multiple market cycles using robust statistical methods.
- Perform out-of-sample testing, walk-forward analysis, and Monte Carlo simulations.
- Generate detailed performance attribution and risk decomposition analysis.
- Document model assumptions, limitations, and validation results.

Risk Management & Monitoring
- Build risk management frameworks including VaR, stress testing, and scenario analysis.
- Monitor portfolio exposures, concentration risks, and factor loadings in real-time.
- Develop automated alerting systems for model degradation and performance anomalies.
- Support portfolio optimization and construction processes.

Technology & Automation
- Develop Python-based research and production systems with a focus on scalability.
- Create automated model monitoring, reporting, and alert generation frameworks.
- Collaborate on technology infrastructure decisions and platform evaluations.
- Maintain code quality and documentation standards.

Qualifications
Professional Experience:
- 4-8 years of buy-side quantitative research in asset management, hedge funds, or proprietary trading.
- Proven track record in systematic investment strategy development and implementation.
- Experience with institutional-grade quantitative research and portfolio management.

Technical Proficiency
- Programming: Advanced Python (pandas, NumPy, SciPy, scikit-learn, quantitative libraries).
- Database: Hands-on Snowflake and SQL experience with large-scale data environments.
- Analytics: Statistical modeling, econometrics, and machine learning techniques.
- Platforms: Bloomberg Terminal, Refinitiv, or equivalent financial data systems.

Quantitative Expertise
- Deep understanding of factor models, portfolio optimization, and systematic risk management.
- Knowledge of derivatives pricing, fixed income analytics, and alternative investment structures.
- Experience with market microstructure analysis and high-frequency data processing.
- Familiarity with performance attribution methodologies and benchmarking.

Communication & Analysis:
- Strong problem-solving abilities with exceptional attention to detail.
- Ability to translate quantitative insights into actionable investment recommendations.
- Excellent presentation skills for communicating complex research to stakeholders.
- Collaborative approach to working in cross-functional investment teams.

Educational Background
- Master's degree in Finance, Economics, Mathematics, Statistics, Physics, or Engineering.
- CQF, CFA, FRM or equivalent professional certification preferred.
- Strong academic foundation with demonstrated quantitative aptitude.
Regulatory Awareness
- Understanding of SEBI AIF regulations and compliance frameworks.
- Knowledge of investment management risk controls and regulatory reporting requirements.

Preferred Skills
- Industry Recognition: Published quantitative research or contributions to investment thought leadership.
- Multi-Asset Expertise: Experience across equity, fixed income, commodities, and alternative investments.
- Innovation Mindset: Interest in machine learning, alternative data, and emerging quantitative techniques.
- Advanced Programming: Proficiency in additional languages such as R, C++, or Julia; experience with version control (Git) and code optimization techniques.
- Domain Specialization: Strong background in specific asset classes such as Indian equities and emerging markets.
- Entrepreneurial Drive: Self-motivated individual comfortable building scalable systems from the ground up in a growing AIF technology environment.
- Industry Certifications: Additional qualifications or specialized quantitative finance credentials will be a plus.
- Alternative Data & AI: Experience with NLP and AI techniques for extracting investment signals from alternative text data sources (such as filings, analyst reports, and transcripts) and developing reasoning-based AI models for systematic decision-making will be a plus.

Pay range and compensation package: Competitive with industry standards, including performance-based incentives.
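As an illustration of the backtesting and walk-forward responsibilities above, here is a deliberately tiny sketch: a momentum signal evaluated without look-ahead bias on synthetic returns. The data, the 20-day rule, and the thresholds are all invented for illustration, not an actual strategy:

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic daily returns: a small positive drift plus noise.
returns = rng.normal(loc=0.0005, scale=0.01, size=500)

def momentum_signal(past_returns, lookback=20):
    """Go long (1) if the trailing mean return is positive, else stay flat (0)."""
    if len(past_returns) < lookback:
        return 0
    return 1 if np.mean(past_returns[-lookback:]) > 0 else 0

# Walk-forward: the position for day t uses only data up to day t-1,
# which is what prevents look-ahead bias in a backtest.
positions = np.array([momentum_signal(returns[:t]) for t in range(len(returns))])
strategy_returns = positions * returns

ann_factor = 252  # trading days per year
sharpe = (np.mean(strategy_returns) / np.std(strategy_returns)) * np.sqrt(ann_factor)
print(f"Annualized Sharpe on toy data: {sharpe:.2f}")
```

A production framework would add transaction costs, out-of-sample splits across market regimes, and Monte Carlo resampling; the in-sample/out-of-sample boundary shown here is the essential idea.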
Posted 21 hours ago
2.0 - 6.0 years
0 Lacs
Pune, Maharashtra
On-site
You have 2-4 years of experience and can join immediately or within 30 days. As a Python Developer, you will play a key role in developing and maintaining efficient server-side applications. You will need to optimize code for performance and scalability, collaborate with front-end developers for seamless integration, and work with Pandas and NumPy for data processing. Additionally, deploying applications and ensuring performance monitoring will be part of your responsibilities. Your proficiency in Python and related frameworks like Django, Flask, or FastAPI is crucial for this role. You should also have knowledge of ORM libraries and database systems, along with familiarity with JavaScript, HTML, and CSS. Strong debugging and problem-solving skills are essential for success in this position. Experience in automation and CI/CD pipelines would be a plus. If you are passionate about Python development and enjoy working in a dynamic team, we would like to hear from you.
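A staple of the Pandas/NumPy data-processing skills this role asks for is replacing Python-level loops with vectorized operations; the prices below are arbitrary sample data:

```python
import numpy as np

prices = np.array([100.0, 102.0, 101.0, 105.0, 107.0])

# Loop version: percentage change between consecutive prices.
pct_loop = []
for i in range(1, len(prices)):
    pct_loop.append((prices[i] - prices[i - 1]) / prices[i - 1])

# Vectorized version: the same computation with no Python-level loop,
# which is typically orders of magnitude faster on large arrays.
pct_vec = np.diff(prices) / prices[:-1]

print(np.allclose(pct_loop, pct_vec))  # True: identical results
```

The same vectorization habit carries over directly to pandas, where `Series.pct_change()` expresses this in one call.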
Posted 21 hours ago
4.0 - 8.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Machine Learning Engineer, you will play a key role in developing and enhancing a Telecom Artificial Intelligence Product. This role requires a strong background in machine learning and deep learning, along with extensive experience in implementing advanced algorithms and models to solve complex problems. You will be working on cutting-edge technologies to develop solutions for anomaly detection, forecasting, event correlation, and fraud detection. Your responsibilities will include developing production-ready implementations of proposed solutions using various machine learning and deep learning algorithms. You will test these solutions on live customer data to ensure efficacy and robustness. Additionally, you will research and test novel machine learning approaches for large-scale distributed computing applications. In this role, you will be responsible for implementing and managing the full machine learning operations lifecycle using tools such as Kubeflow, MLflow, AutoML, and Kserve for model deployment. You will develop and deploy machine learning models using PyTorch and TensorFlow to ensure high performance and scalability. Furthermore, you will run and manage PySpark and Kafka on distributed systems with large-scale, non-linear network elements. To excel in this position, you should be proficient in Python programming and experienced with machine learning libraries such as Scikit-Learn and NumPy. Experience in time series analysis, data mining, text mining, and creating data architectures will be beneficial. You should also be able to utilize batch processing and incremental approaches to manage and analyze large datasets. As a Machine Learning Engineer, you will experiment with multiple algorithms, optimizing hyperparameters to identify the best-performing models. You will execute machine learning algorithms in cloud environments, leveraging cloud resources effectively. 
Continuous feedback gathering, model retraining, and updating will be essential to maintain and improve model performance. Moreover, you should have expertise in network characteristics, transformer architectures, GAN and generative AI techniques, and end-to-end machine learning projects. Experience with leading supervised and unsupervised machine learning methods, and familiarity with Python packages like Pandas and NumPy and DL frameworks like Keras, TensorFlow, and PyTorch, are required. Knowledge of Big Data tools and environments, as well as MySQL/NoSQL databases, will be advantageous. You will collaborate with cross-functional teams of data scientists, software engineers, and stakeholders to integrate implemented systems into the SaaS platform. Your innovative thinking and creative ideas will contribute to improving the overall platform. Additionally, you will create use cases specific to the domain to solve business problems effectively. Ideally, you should have a Bachelor's degree in Science/IT/Computing or equivalent with at least 4 years of experience in a QA Engineering role. Strong quantitative and applied mathematical skills are essential, along with certification courses in Data Science/ML. In-depth knowledge of statistical and machine learning techniques, and experience with Telecom product development, are preferred. Experience in MLOps is a plus for deploying developed models, and familiarity with scalable SaaS platforms is advantageous for this role.
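The anomaly-detection use case this product centers on can be illustrated with the simplest possible detector: a z-score over a KPI time series. The data below is synthetic and the 3-sigma threshold is a common but arbitrary choice; production detectors for telecom networks would use rolling or seasonal baselines:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic KPI (e.g., traffic volume): stable noise with one injected spike.
kpi = rng.normal(loc=100.0, scale=2.0, size=200)
kpi[150] = 130.0  # the anomaly we want to catch

# Flag points more than 3 standard deviations from the mean.
mean, std = kpi.mean(), kpi.std()
z_scores = np.abs(kpi - mean) / std
anomalies = np.where(z_scores > 3.0)[0]
print(anomalies)  # index 150 is flagged
```

Forecasting-based detection, event correlation, and fraud scoring all generalize this pattern: model the expected behavior, then alert on large deviations.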
Posted 21 hours ago
5.0 - 9.0 years
0 Lacs
Salem, Tamil Nadu
On-site
This is a key position that will play a pivotal role in creating data-driven technology solutions to establish our client as a leader in healthcare, financial, and clinical administration. As the Lead Data Scientist, you will be instrumental in building and implementing machine learning models and predictive analytics solutions that will spearhead the new era of AI-driven innovation in the healthcare industry. Your responsibilities will involve developing and implementing a variety of ML/AI products, from conceptualization to production, to help the organization gain a competitive edge in the market. Working closely with the Director of Data Science, you will operate at the crossroads of healthcare, finance, and cutting-edge data science to tackle some of the most intricate challenges faced by the industry. This role presents a unique opportunity within VHT's Product Transformation division to create pioneering machine learning capabilities from scratch. You will have the chance to shape the future of VHT's data science & analytics foundation, utilizing state-of-the-art tools and methodologies within a collaborative and innovation-focused environment. Key Responsibilities: - Lead the development of predictive machine learning models for Revenue Cycle Management analytics, focusing on areas such as: - Claim Denials Prediction: identifying high-risk claims before submission - Cash Flow Forecasting: predicting revenue timing and patterns - Patient-Related Models: enhancing patient financial experience and outcomes - Claim Processing Time Prediction: optimizing workflow and resource allocation - Explore emerging areas and integration opportunities, e.g., denial prediction + appeal success probability or prior authorization prediction + approval likelihood models. 
VHT Technical Environment:
- Cloud Platform: AWS (SageMaker, S3, Redshift, EC2)
- Development Tools: Jupyter Notebooks, Git, Docker
- Programming: Python, SQL, R (optional)
- ML/AI Stack: Scikit-learn, TensorFlow/PyTorch, MLflow, Airflow
- Data Processing: Spark, Pandas, NumPy
- Visualization: Matplotlib, Seaborn, Plotly, Tableau

Required Qualifications:
- Advanced degree in Data Science, Statistics, Computer Science, Mathematics, or a related quantitative field
- 5+ years of hands-on data science experience with a proven track record of deploying ML models to production
- Expert-level proficiency in SQL and Python, with extensive experience using standard Python machine learning libraries (scikit-learn, pandas, numpy, matplotlib, seaborn, etc.)
- Cloud platform experience, preferably AWS, with hands-on knowledge of SageMaker, S3, Redshift, and Jupyter Notebook workbenches (other cloud environments acceptable)
- Strong statistical modeling and machine learning expertise across supervised and unsupervised learning techniques
- Experience with model deployment, monitoring, and MLOps practices
- Excellent communication skills with the ability to translate complex technical concepts to non-technical stakeholders

Preferred Qualifications:
- US Healthcare industry experience, particularly in Health Insurance and/or Medical Revenue Cycle Management
- Experience with healthcare data standards (HL7, FHIR, X12 EDI)
- Knowledge of healthcare regulations (HIPAA, compliance requirements)
- Experience with deep learning frameworks (TensorFlow, PyTorch)
- Familiarity with real-time streaming data processing
- Previous leadership or mentoring experience
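The flagship use case above, predicting claim denials before submission, is at heart a binary classification problem. Here is a minimal scikit-learn sketch on synthetic data; the two features, the denial-probability formula, and all numbers are invented for illustration, and a real model would use actual claim attributes (payer, CPT codes, documentation flags, and so on):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000

# Hypothetical claim features: billed amount and days from service to submission.
amount = rng.uniform(100, 5000, n)
delay_days = rng.integers(0, 60, n)
X = np.column_stack([amount, delay_days])

# Synthetic ground truth: late, high-value claims are denied more often.
denial_prob = 1 / (1 + np.exp(-(0.001 * amount + 0.05 * delay_days - 4)))
y = (rng.uniform(size=n) < denial_prob).astype(int)

# Train/test split and a baseline classifier.
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(f"Held-out accuracy on toy data: {acc:.2f}")
```

In practice the predicted denial probability, not the hard label, is what drives workflow: high-risk claims get routed for review before submission.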
Posted 22 hours ago
5.0 - 9.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
We are an innovation technology company focused on helping companies accelerate their digital initiatives from strategy and planning through execution. Leveraging deep technical expertise, Agile methodologies, and data-driven intelligence, we modernize systems of engagement and simplify human/tech interaction. Amazing things happen in environments where everyone feels a true sense of belonging and has the skills and opportunities to succeed. Investing in talent and supporting career growth is a priority, and we are always looking for amazing talent to contribute to growth by delivering top results for clients. Join the team to challenge yourself and accomplish meaningful work. As a highly experienced Computer Vision Architect with deep expertise in Python, you will design and lead the development of cutting-edge vision-based systems. You will architect scalable solutions leveraging advanced image and video processing, deep learning, and real-time inference, and collaborate with cross-functional teams to deliver high-performance, production-grade computer vision platforms.

Key Responsibilities:
- Architect and design end-to-end computer vision solutions for real-world applications like object detection, tracking, OCR, facial recognition, and scene understanding.
- Lead R&D initiatives and prototype development using modern CV frameworks such as OpenCV, PyTorch, and TensorFlow.
- Optimize computer vision models for performance, scalability, and deployment on cloud, edge, or embedded systems.
- Define architecture standards and best practices for Python-based CV pipelines.
- Collaborate with product teams, data scientists, and ML engineers to translate business requirements into technical solutions.
- Stay updated with the latest advancements in computer vision, deep learning, and AI.
- Mentor junior developers and contribute to code reviews, design discussions, and technical documentation.
Required Skills & Qualifications:
- Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field (PhD is a plus).
- 8+ years of software development experience, with 5+ years in computer vision and deep learning.
- Proficiency in Python and libraries such as OpenCV, NumPy, scikit-image, and Pillow.
- Experience with deep learning frameworks like PyTorch, TensorFlow, or Keras.
- Strong understanding of CNNs, object detection (YOLO, SSD, Faster R-CNN), semantic segmentation, and image classification.
- Knowledge of MLOps, model deployment strategies (e.g., ONNX, TensorRT), and containerization (Docker/Kubernetes).
- Experience working with video analytics, image annotation tools, and large-scale dataset pipelines.
- Familiarity with edge deployment (Jetson, Raspberry Pi, etc.) or cloud AI services (AWS SageMaker, Azure ML, GCP AI).
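Object detection, called out repeatedly above, is evaluated with intersection-over-union (IoU) between predicted and ground-truth boxes. A small self-contained sketch follows; the `(x1, y1, x2, y2)` corner format is one common convention, and the boxes are toy examples:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    # Corners of the intersection rectangle (empty if boxes don't overlap).
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A prediction overlapping half of a 10x10 ground-truth box.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)))  # 50/150 ≈ 0.333
```

Detectors like YOLO or Faster R-CNN use exactly this quantity both for matching predictions to ground truth during evaluation (e.g., the common IoU ≥ 0.5 criterion) and inside non-maximum suppression.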
Posted 22 hours ago
2.0 - 6.0 years
0 Lacs
Surat, Gujarat
On-site
As an experienced professional with over 2 years of experience, you will be responsible for designing, developing, and deploying Machine Learning / Artificial Intelligence (ML/AI) models to address real-world challenges within our software products. Your role will involve collaborating with product managers, developers, and data engineers to establish AI project objectives and specifications. Additionally, you will be tasked with cleaning, processing, and analyzing extensive datasets to uncover valuable patterns and insights. Your expertise will be utilized in implementing and refining models using frameworks like TensorFlow, PyTorch, or Scikit-learn. You will also play a crucial role in creating APIs and services to seamlessly integrate AI models into production environments. Monitoring model performance and conducting retraining as necessary to uphold accuracy and efficiency will be part of your regular duties. It is essential to keep abreast of the latest developments in AI/ML and assess their relevance to our projects. To excel in this role, you should hold a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related discipline. A strong grasp of machine learning algorithms, including supervised, unsupervised, and reinforcement learning, is imperative. Proficiency in Python and ML libraries such as NumPy, pandas, TensorFlow, Keras, and PyTorch is required. Familiarity with NLP, computer vision, or time-series analysis will be advantageous. Experience with model deployment tools and cloud platforms like AWS, GCP, or Azure is preferred. Knowledge of software engineering practices, encompassing version control (Git), testing, and CI/CD, is also essential. Candidates with prior experience in a product-based or tech-driven startup environment, exposure to deep learning, recommendation systems, or predictive analytics, and an understanding of ethical AI practices and model interpretability will be highly regarded.
This is a full-time position requiring you to work during day shifts at our in-person work location.,
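As a rough illustration of the workflow this posting describes (train and evaluate a model, then expose prediction behind a callable that an API could wrap), here is a minimal sketch using scikit-learn. The dataset, model choice, and `predict` helper are all hypothetical, not part of the posting:

```python
# Illustrative sketch only: train, evaluate, and expose a scikit-learn model
# the way a small prediction service might. All names here are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Toy dataset standing in for the "extensive datasets" mentioned in the role.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# Train and evaluate; monitoring this metric over time drives retraining.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
accuracy = accuracy_score(y_test, model.predict(X_test))

def predict(features):
    """The kind of function an API endpoint (Flask/FastAPI) would wrap."""
    return int(model.predict(np.asarray(features).reshape(1, -1))[0])
```

In production, `predict` would sit behind an HTTP route and the accuracy check would run on fresh data to decide when retraining is needed.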
Posted 23 hours ago
9.0 - 13.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
We are seeking a Lead - Python Developer / Tech Lead to take charge of backend development and oversee a team handling enterprise-grade, data-driven applications. In this role, you will work with technologies such as FastAPI, Apache Spark, and Lakehouse architectures, leading the team, making technical decisions, and ensuring timely project delivery in a dynamic work environment.

Your primary duties will include:
- Mentoring and guiding a group of Python developers; managing task assignments, code quality, and technical delivery.
- Designing and implementing scalable RESTful APIs using Python and FastAPI.
- Managing extensive data processing tasks using Pandas, NumPy, and Apache Spark.
- Driving the implementation of Lakehouse architectures and data pipelines.
- Conducting code reviews, enforcing coding best practices, and promoting clean, testable code.
- Collaborating with cross-functional teams, including DevOps and Data Engineering.
- Contributing to CI/CD processes, operating in Linux-based environments, and potentially working with Kubernetes or MLOps tools.

To excel in this role, you should possess:
- 9-12 years of total experience in software development, with a strong command of Python, FastAPI, and contemporary backend frameworks.
- A deep understanding of data engineering workflows, Spark, and distributed systems.
- Experience leading agile teams or serving as a tech lead (beneficial).
- Proficiency in unit testing, Linux, and cloud/data environments; exposure to Kubernetes, ML pipelines, or MLOps is advantageous.
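The Pandas/NumPy data-processing work this role describes can be sketched as a small aggregation pipeline; the data and column names below are invented for illustration, and Spark DataFrames follow a very similar chained shape:

```python
# Illustrative sketch only: the pandas-style aggregation work the role
# describes. The dataset and column names are hypothetical.
import pandas as pd

# Hypothetical event data standing in for an enterprise dataset.
df = pd.DataFrame({
    "region": ["north", "south", "north", "south"],
    "amount": [120.0, 80.0, 200.0, 50.0],
})

# Aggregate per region, then derive each region's share of the total.
summary = (
    df.groupby("region", as_index=False)["amount"].sum()
      .assign(share=lambda d: d["amount"] / d["amount"].sum())
)
```

Chaining `groupby`/`assign` like this keeps each transformation step testable, which is the "clean, testable code" practice the posting asks a lead to enforce.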
Posted 23 hours ago
0.0 - 3.0 years
0 Lacs
Guwahati, Assam
On-site
You will be a Machine Learning Engineer responsible for assisting in the development and deployment of machine learning models and data systems. This entry-level position offers an opportunity to apply your technical skills to real-world challenges and collaborate within a team environment.

Your responsibilities will include:
- Assisting in the design, training, and optimization of machine learning models.
- Supporting the development of scalable data pipelines for machine learning workflows.
- Conducting exploratory data analysis and data preprocessing tasks.
- Collaborating with senior engineers and data scientists to implement solutions.
- Testing and validating machine learning models for accuracy and efficiency.
- Documenting workflows, processes, and key learnings.

You should possess:
- A Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Basic proficiency in Python and familiarity with libraries such as NumPy, Pandas, Scikit-learn, and TensorFlow.
- Knowledge of SQL and fundamental machine learning concepts and algorithms.
- Exposure to cloud platforms like AWS, GCP, or Azure (advantageous).

Additionally, you should have:
- 0-1 years of experience in data engineering or related fields.
- Strong analytical skills and the ability to troubleshoot complex issues.
- Leadership skills to guide junior team members and contribute to team success.

Preferred qualifications include:
- A Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proficiency in Scikit-learn, PyTorch, and TensorFlow.
- Basic understanding of containerization tools like Docker.
- Exposure to data visualization tools or frameworks.

Your key performance indicators will involve demonstrating progress in applying machine learning concepts and successfully completing tasks within specified timelines.
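The "preprocessing plus model validation" loop an entry-level ML engineer would assist with can be sketched in a few lines of scikit-learn; the dataset and model choice here are illustrative assumptions, not taken from the posting:

```python
# Illustrative sketch only: preprocessing and cross-validated model testing
# with scikit-learn. Dataset and estimator are hypothetical choices.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Putting the scaler inside the pipeline keeps preprocessing statistics
# from leaking out of each training fold into its validation fold.
pipe = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(pipe, X, y, cv=5)
```

Reporting the fold-by-fold `scores` rather than a single number is the kind of model-validation practice the role's "testing and validating" duty refers to.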
Posted 1 day ago
2.0 - 6.0 years
0 Lacs
Haryana
On-site
As a Data Engineer at our company, you will build and maintain scalable data pipelines and ETL processes using Python and related technologies. Your primary focus will be developing efficient pipelines that handle large volumes of data and optimize processing times, collaborating closely with our team of data scientists and engineers at Matrix Space.

To qualify for this role, you should have:
- 2-5 years of experience in data engineering or a related field, with strong proficiency in Python programming.
- Command of libraries such as Pandas, NumPy, and SQLAlchemy.
- Hands-on experience with data engineering tools like Apache Airflow, Luigi, or similar frameworks.
- Working knowledge of SQL and experience with relational databases such as PostgreSQL or MySQL.

Beyond technical skills, we are looking for strong problem-solvers who can work both independently and as part of a team. Effective communication skills are essential, as you will need to explain technical concepts to non-technical stakeholders, and the ability to complete tasks efficiently is a key trait we value.

If you are an immediate joiner and can start within a week, we encourage you to apply. Join our team and be a part of our exciting projects in data engineering.
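An ETL pipeline of the kind this posting describes (extract from a relational source, transform in Python, load the result back) can be sketched with pandas and an in-memory SQLite database; the table names and schema below are invented for illustration:

```python
# Illustrative ETL sketch only: extract rows from a relational source,
# transform with pandas, and load the result back. Schema is hypothetical.
import sqlite3

import pandas as pd

# In-memory database standing in for PostgreSQL/MySQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE raw_orders (id INTEGER, qty INTEGER, price REAL)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, 2, 9.5), (2, 1, 20.0), (3, 4, 2.5)],
)

# Extract, then transform: derive a revenue column.
orders = pd.read_sql_query("SELECT * FROM raw_orders", conn)
orders["revenue"] = orders["qty"] * orders["price"]

# Load the cleaned table back for downstream consumers.
orders.to_sql("orders_clean", conn, index=False)
total = conn.execute("SELECT SUM(revenue) FROM orders_clean").fetchone()[0]
```

In an Airflow or Luigi deployment, each of these three steps would typically become its own task so failures can be retried independently.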
Posted 1 day ago