5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Lead - AI Engineer at Xerago, you will play a crucial role in driving our mission for customer immortality. Your responsibilities will include managing a team of AI engineers, developing high-quality AI models with strong predictive accuracy, and staying updated on the latest technologies in the field. By collaborating closely with project stakeholders, you will shape AI strategies, oversee timelines, and contribute significantly to the success and growth of our organization. Your influence will extend to our digital impact initiatives, shaping the future of technology at Xerago. In terms of technical responsibilities, you will be responsible for designing, developing, and implementing AI models using Python, TensorFlow, or PyTorch to ensure robustness and accuracy. Collaboration with the data team to clean, manipulate, and analyze large datasets using NumPy, Pandas, or similar libraries will be a key aspect of your role. Additionally, you will oversee the deployment of AI solutions on cloud platforms such as AWS, Google Cloud, or Microsoft Azure to optimize performance and efficiency. On the managerial front, you will lead and mentor a team of AI engineers, providing technical guidance and fostering a culture of continuous learning. Coordination with project stakeholders to define AI strategies, manage timelines, and ensure the successful delivery of AI-driven solutions will also fall within your purview. To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Technology, Machine Learning, or a related field with a strong focus on AI and machine learning. Expertise in the Python programming language, deep learning frameworks like TensorFlow or PyTorch, and data manipulation libraries such as NumPy and Pandas is essential. Proficiency in cloud platforms such as AWS, Google Cloud, or Microsoft Azure is a prerequisite, while familiarity with data visualization tools (e.g., Matplotlib, Seaborn) and version control systems (e.g., Git) is desired. Although not mandatory, certifications such as Google Professional Machine Learning Engineer, AWS Certified Machine Learning Specialty, Microsoft Certified: Azure AI Engineer Associate, and the TensorFlow Developer Certificate by DeepLearning.AI are considered valuable. Strong communication, leadership, team management, adaptability, problem-solving, and strategic thinking abilities are crucial for success in this role. This is a full-time position with a work schedule of Monday to Friday, requiring in-person presence at the designated work location.
Posted 1 week ago
0 years
0 Lacs
India
On-site
Job Summary: We are seeking a talented and driven Machine Learning Engineer to design, build, and deploy ML models that solve complex business problems and enhance decision-making capabilities. You will work closely with data scientists, engineers, and product teams to develop scalable machine learning pipelines, deploy models into production, and continuously improve their performance. Key Responsibilities: Design, develop, and deploy machine learning models for classification, regression, clustering, recommendation, NLP, or computer vision tasks. Collaborate with data scientists to prepare and preprocess large-scale datasets for training and evaluation. Implement and optimize machine learning pipelines and workflows using tools like MLflow, Airflow, or Kubeflow. Integrate models into production environments and ensure model performance, monitoring, and retraining. Conduct A/B testing and performance evaluations to validate model accuracy and business impact. Stay up-to-date with the latest advancements in ML/AI research and tools. Write clean, efficient, and well-documented code for reproducibility and scalability. Requirements: Bachelor’s or Master’s degree in Computer Science, Engineering, Mathematics, or a related field. Strong knowledge of machine learning algorithms, data structures, and statistical methods. Proficient in Python and ML libraries/frameworks (e.g., scikit-learn, TensorFlow, PyTorch, XGBoost). Experience with data manipulation libraries (e.g., pandas, NumPy) and visualization tools (e.g., Matplotlib, Seaborn). Familiarity with cloud platforms (AWS, GCP, or Azure) and model deployment tools. Experience with version control systems (Git) and software engineering best practices. Preferred Qualifications: Experience in deep learning, natural language processing (NLP), or computer vision. Knowledge of big data technologies like Spark, Hadoop, or Hive. Exposure to containerization (Docker), orchestration (Kubernetes), and CI/CD pipelines. Familiarity with MLOps practices and tools.
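For orientation only, a minimal sketch of the kind of scikit-learn pipeline such a role involves might look like the following; the dataset, model choice, and hyperparameters are illustrative assumptions, not part of the posting.

    # Minimal classification pipeline sketch: preprocessing + model + evaluation.
    # Dataset and parameters are placeholders for illustration.
    from sklearn.datasets import load_breast_cancer
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import Pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import classification_report

    X, y = load_breast_cancer(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

    pipeline = Pipeline([
        ("scale", StandardScaler()),                  # preprocessing step
        ("clf", LogisticRegression(max_iter=1000)),   # classification model
    ])
    pipeline.fit(X_train, y_train)
    print(classification_report(y_test, pipeline.predict(X_test)))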
Posted 1 week ago
6.0 - 8.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary We are seeking a Senior Data Engineer to join our growing data team, where you will help build and scale the data infrastructure powering analytics, machine learning, and product innovation. As a Senior Data Engineer, you will be responsible for designing, building, and optimizing robust, scalable, and secure data pipelines and platforms. You will work closely with data scientists, software engineers, and product teams to deliver clean, reliable data for critical business and clinical applications. Key Responsibilities: Design, implement, and optimize complex data pipelines using advanced SQL, ETL tools, and integration technologies. Collaborate with cross-functional teams to implement optimal data solutions for advanced analytics and data science initiatives. Spearhead process improvements, including automation, data delivery optimization, and infrastructure redesign for scalability. Evaluate and recommend emerging data technologies to build comprehensive data integration strategies. Lead technical discovery processes, defining complex requirements and mapping out detailed scenarios. Develop and maintain data governance policies and procedures. What You'll Need to Be Successful (Required Skills): 5-7 years of experience in data engineering or related roles. Advanced proficiency in multiple programming languages (e.g., Python, Java, Scala) and expert-level SQL knowledge. Extensive experience with big data technologies (Hadoop ecosystem, Spark, Kafka) and cloud-based environments (Azure, AWS, or GCP). Proven experience in designing and implementing large-scale data warehousing solutions. Deep understanding of data modeling techniques and enterprise-grade ETL tools. Demonstrated ability to solve complex analytical problems. Education/Certifications: Bachelor's degree in Computer Science, Information Management, or a related field. Preferred Skills: Experience in the healthcare industry, including clinical, financial, and operational data. Knowledge of machine learning and AI technologies and their data requirements. Familiarity with data visualization tools and real-time data processing. Understanding of data privacy regulations and experience implementing compliant solutions. Note: We work 5 days from office - India regular shift. Netsmart, India has set up our new Global Capability Centre (GCC) at Godrej Centre, Byatarayanapura (Hebbal area) - (https://maps.app.goo.gl/RviymAeGSvKZESSo6).
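As a rough, hedged illustration of the Spark-based ETL work described above, a small PySpark job might look like this; the S3 paths, column names, and aggregation are invented for the sketch.

    # Sketch of a simple PySpark ETL step: read raw files, clean, aggregate, write curated output.
    # Paths and column names are assumptions, not anything from the posting.
    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("claims_etl").getOrCreate()

    raw = spark.read.option("header", True).csv("s3://example-bucket/claims/*.csv")  # hypothetical source
    cleaned = (
        raw.dropDuplicates(["claim_id"])
           .withColumn("amount", F.col("amount").cast("double"))
           .filter(F.col("amount").isNotNull())
    )
    daily_totals = cleaned.groupBy("service_date").agg(F.sum("amount").alias("total_amount"))
    daily_totals.write.mode("overwrite").parquet("s3://example-bucket/curated/daily_totals/")  # hypothetical sink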
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Comcast brings together the best in media and technology. We drive innovation to create the world's best entertainment and online experiences. As a Fortune 50 leader, we set the pace in a variety of innovative and fascinating businesses and create career opportunities across a wide range of locations and disciplines. We are at the forefront of change and move at an amazing pace, thanks to our remarkable people, who bring cutting-edge products and services to life for millions of customers every day. If you share in our passion for teamwork, our vision to revolutionize industries and our goal to lead the future in media and technology, we want you to fast-forward your career at Comcast. Job Summary Responsible for planning and designing new software and web applications. Edits new and existing applications. Implements, tests, and debugs defined software components. Documents all development activity. Works with moderate guidance in own area of knowledge. Job Description Core Responsibilities Job Title Python Engineer 2 Job Summary: As part of the SPIDER team, the Python Engineer will be responsible for building multi-tier client-server applications, data processing, deployment on cloud-based technologies, and continuous improvement of existing solutions. Responsibilities: Develop and deploy applications on cloud-based environments following the full lifecycle of software development. Maintain and continuously improve existing applications and solutions. Write quality and efficient code following coding and security principles. Implement solutions based on project requirements and technical specifications. Identify technology and design issues and provide proactive communication. Minimum Requirements: Degree in Computer Science, or equivalent professional experience. 3 years of experience in Python programming. 3+ years of relevant experience with application development and deployment in the cloud. Solid understanding of REST API development and integration. Work experience with Python libraries such as Pandas, SciPy, NumPy, etc. Working experience in any one of the SQL or NoSQL databases. Experience with application deployment on cloud-based environments such as AWS, etc. Experience with version control, Git preferred. Knowledge of OOP and modular application development and documentation. Good problem-solving and debugging skills. Possess the ability to learn and work independently, along with strong communication and a strong work ethic. Good to have: Experience with different distributions of Linux. Experience in Spark. Experience with container technologies such as Docker. Experience with CI/CD tools such as Concourse, Terraform. Disclaimer: This information has been designed to indicate the general nature and level of work performed by employees in this role. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities and qualifications. Comcast is proud to be an equal opportunity workplace. We will consider all qualified applicants for employment without regard to race, color, religion, age, sex, sexual orientation, gender identity, national origin, disability, veteran status, genetic information, or any other basis protected by applicable law. Base pay is one part of the Total Rewards that Comcast provides to compensate and recognize employees for their work. Most sales positions are eligible for a Commission under the terms of an applicable plan, while most non-sales positions are eligible for a Bonus.
Additionally, Comcast provides best-in-class Benefits to eligible employees. We believe that benefits should connect you to the support you need when it matters most, and should help you care for those who matter most. That’s why we provide an array of options, expert guidance and always-on tools, that are personalized to meet the needs of your reality – to help support you physically, financially and emotionally through the big milestones and in your everyday life. Please visit the compensation and benefits summary on our careers site for more details. Education Bachelor's Degree While possessing the stated degree is preferred, Comcast also may consider applicants who hold some combination of coursework and experience, or who have extensive related professional experience. Relevant Work Experience 2-5 Years
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Mandatory Skills 4-6 years of experience with basic proficiency in Python and SQL, and familiarity with libraries like NumPy or Pandas. Understanding of fundamental programming concepts (data structures, algorithms, etc.). Eagerness to learn new tools and frameworks, including Generative AI technologies. Familiarity with version control systems (e.g., Git). Strong problem-solving skills and attention to detail. Exposure to data processing tools like Apache Spark or PySpark, SQL. Basic understanding of APIs and how to integrate them. Interest in AI/ML and willingness to explore frameworks like LangChain. Familiarity with cloud platforms (AWS, Azure, or GCP) is a plus. Job Description We are seeking a motivated Python Developer to join our team. The ideal candidate will have a foundational understanding of Python programming and SQL, and a passion for learning and growing in the field of software development. You will work closely with senior developers and contribute to building and maintaining applications, with opportunities to explore Generative AI frameworks and data processing tools. Key Responsibilities Assist in developing and maintaining Python-based applications. Write clean, efficient, and well-documented code. Collaborate with senior developers to integrate APIs and frameworks. Support data processing tasks using libraries like Pandas or PySpark. Learn and work with Generative AI frameworks (e.g., LangChain, LangGraph) under guidance. Debug and troubleshoot issues in existing applications.
Posted 1 week ago
3.0 - 5.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Job Summary Synechron is seeking a proactive and skilled Generative AI Testing & Prompt Engineer to join our innovative technology team. In this role, you will design and execute prompt strategies, develop automation workflows, and evaluate AI outputs to ensure high-quality, unbiased, and reliable generative models. Your expertise will help improve AI performance, safety, and ethical standards, enabling the organization to leverage cutting-edge AI capabilities effectively and responsibly. Software Requirements Required: Python (including libraries such as NumPy, Pandas, and testing frameworks) Bash or shell scripting for automation workflows Version control tools (Git) AI evaluation metrics tools and frameworks for measuring quality, bias, and safety Preferred: Experience with AI/ML frameworks such as TensorFlow or PyTorch CI/CD automation tools for seamless testing and deployment workflows Data analysis tools and platforms for metrics development Overall Responsibilities Develop and refine prompts to evaluate the capabilities, accuracy, creativity, and safety of Generative AI models, including NLP and multimodal models. Design comprehensive testing strategies across diverse scenarios using prompt variations to assess performance, bias, and ethical considerations. Automate testing workflows to streamline data collection, prompt execution, output analysis, and reporting. Analyze and interpret AI outputs, providing insights for improvements and bias mitigation. Collaborate closely with AI data scientists and developers to tune prompt techniques and enhance model behavior through automation and feedback. Document prompt scripts, testing procedures, automation workflows, and test results for transparency and reproducibility. Continuously refine prompt engineering and testing methodologies based on latest AI advancements and industry best practices. Stay informed about new trends, tools, and ethical considerations related to generative AI, prompt engineering, and automation. Technical Skills (By Category) Programming Languages: Required: Python (advanced scripting and automation capabilities) Preferred: Bash, shell scripting, or other automation languages Databases/Data Management: Basic understanding of data storage and querying for managing testing datasets Experience with data labeling and quality control processes Cloud Technologies: Preferred: Basic familiarity with cloud platforms (AWS, Azure, GCP) for AI deployment and testing Frameworks and Libraries: Required: Generative AI evaluation tools, scripting frameworks for automation, prompt management tools Preferred: TensorFlow, PyTorch, or similar AI/ML libraries Development Tools and Methodologies: Required: Version control (Git), scripting for automation, reporting tools Preferred: CI/CD pipelines for automated testing and deployment in cloud or on-prem environments Security Protocols: Awareness of data privacy, security standards, and bias mitigation techniques Experience Requirements 3-5 years of professional experience in testing generative AI models, including prompt writing and optimization. Proven experience designing prompt strategies for NLP and multimodal models. Hands-on experience automating evaluation workflows for AI outputs, bias detection, and safety assessment. Familiarity with model evaluation metrics, including relevance, coherence, bias, and safety. Industry experience in AI development, testing, or related research preferred; alternative experience with related automation and data analysis roles may be considered. 
Day-to-Day Activities Develop, test, and optimize user prompts for AI model evaluations across multiple scenarios. Automate processes for prompt execution, data collection, and output assessment. Monitor and analyze AI model responses for quality, relevance, bias, and ethical compliance. Collaborate with data scientists, AI developers, and product teams to implement prompt tuning and automation improvements. Document testing strategies, automation scripts, and test outcomes thoroughly. Provide actionable feedback to improve model behavior and address bias or safety issues. Review industry trends and incorporate new testing tools or methodologies to advance testing capabilities. Qualifications Bachelors degree in Computer Science, Data Science, AI, or related field. Relevant certifications in AI, data analysis, or automation practices are a plus. Demonstrated commitment to continuous learning of AI ethics, bias mitigation, and prompt engineering. Professional Competencies Critical thinking and analytical skills to interpret complex AI outputs and identify issues. Problem-solving abilities to develop effective prompts, troubleshoot automation workflows, and improve testing strategies. Strong communication skills for documenting processes and collaborating with multidisciplinary teams. Ability to adapt rapidly to evolving AI technologies and industry standards. Interpersonal skills for stakeholder engagement and feedback provision. Effective time management to handle multiple test scenarios and documentation tasks efficiently.
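A hedged, minimal sketch of a prompt-variation test harness of the sort this role describes is shown below; call_model() is a stand-in for whatever LLM client is actually used, and the prompts, banned-term list, and CSV output are illustrative assumptions.

    # Illustrative harness: run prompt variations, record simple quality/safety proxies to CSV.
    import csv

    def call_model(prompt: str) -> str:
        """Placeholder; replace with a real LLM API call."""
        return "stub response to: " + prompt

    BASE_QUESTION = "Summarise the refund policy in two sentences."
    VARIATIONS = {
        "plain": BASE_QUESTION,
        "role_prompt": "You are a careful support agent. " + BASE_QUESTION,
        "constrained": BASE_QUESTION + " Do not invent details that are not in the policy.",
    }
    BANNED_TERMS = ["guarantee", "always"]  # crude safety proxy for the sketch only

    with open("prompt_eval.csv", "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["variant", "prompt", "response_length", "contains_banned_term"])
        for name, prompt in VARIATIONS.items():
            response = call_model(prompt)
            flagged = any(term in response.lower() for term in BANNED_TERMS)
            writer.writerow([name, prompt, len(response.split()), flagged])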
Posted 1 week ago
2.0 - 5.0 years
4 - 9 Lacs
Chennai
Work from Office
We are looking for a skilled Python Developer with expertise in Pandas, NumPy, and FastAPI to join our dynamic team. The ideal candidate should have hands-on experience in Python development.
Posted 1 week ago
2.0 - 5.0 years
4 - 8 Lacs
Hyderabad, Banjara Hills
Work from Office
Python Developer responsibilities include writing and testing code, debugging, and integrating third-party web services. The role involves developing web applications with Django and building RESTful APIs using Django REST Framework. Experience with server-side logic and teamwork is essential. Analyzing and validating a variety of input data sources; identifying the relationship among source data objects. Performing data transformation and manipulation using pandas and NumPy. Experience/knowledge of API calling. Knowledge of different databases. Developing common definitions of sourced data; defining a reusable approach to data integration. Designing an efficient approach to processing third-party data using Python. Designing an efficient approach to analyzing data at scale using Python. Basic knowledge of SQL and PL/SQL. Should have knowledge of a database technology like Oracle. Some knowledge of a BI tool would be great. Understanding of AWS would be a plus. Advanced SQL/Oracle or another database is an added advantage. Java Spring Boot as a secondary skill.
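As an illustration of the pandas/NumPy transformation work mentioned above, a small sketch might look like the following; the column names and sample records are made up.

    # Small pandas/NumPy sketch: validate, impute, derive a feature, and aggregate source data.
    import pandas as pd
    import numpy as np

    orders = pd.DataFrame({
        "order_id": [101, 102, 103, 104],
        "customer": ["A", "B", "A", "C"],
        "amount": ["250", "125.5", None, "90"],
        "order_date": ["2024-01-05", "2024-01-06", "2024-01-06", "2024-01-07"],
    })

    orders["amount"] = pd.to_numeric(orders["amount"], errors="coerce")    # validate numeric input
    orders["order_date"] = pd.to_datetime(orders["order_date"])
    orders["amount"] = orders["amount"].fillna(orders["amount"].median())  # simple imputation
    orders["log_amount"] = np.log1p(orders["amount"])                      # NumPy-based derived feature

    per_customer = orders.groupby("customer", as_index=False)["amount"].sum()
    print(per_customer)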
Posted 1 week ago
0 years
1 Lacs
Ahmedabad, Gujarat, India
On-site
About I Vision Infotech I Vision Infotech is a leading IT company in India offering innovative and cost-effective web and mobile solutions. Since 2011, we've served global clients across the USA, UK, Australia, Malaysia, and Canada. Our core services include Web Development, E-commerce, and Mobile App Development for Android, iOS, and cross-platform technologies. About the Training Program: Python Job-Oriented Training This Paid Python Job-Oriented Training Program is designed for students, freshers, and career switchers who want to build a strong career in Python Development, Data Analysis, or Backend Web Development. You’ll gain hands-on experience by working on real-time industry projects and learning from experienced professionals. What You Will Learn Core Python Programming Python Basics (Data Types, Loops, Conditions, Functions) Object-Oriented Programming (OOP) File Handling, Error Handling, Modules Web Development (Python + Django / Flask) Web App Development with Django Template Integration (HTML/CSS) REST API Development CRUD Operations & Authentication Deployment on Hosting Platforms Database Integration MySQL / SQLite / PostgreSQL ORM (Object Relational Mapping) in Django Data Analysis (Optional Track) Pandas, NumPy, Matplotlib Basic Data Cleaning & Visualization Tools & Technologies Python 3.x, Django / Flask Git & GitHub Postman (for API Testing) VS Code / PyCharm Cloud Deployment (optional) Training Highlights 100% Practical & Hands-on Learning Real-Time Projects & Assignments Git & Version Control Exposure Internship Completion Certificate (3 Months) Resume + GitHub Portfolio Support Interview Preparation & Placement Assistance Eligibility Criteria BCA / MCA / B.Sc IT / M.Sc IT / Diploma / B.E / B.Tech No prior experience required – just basic computer knowledge Strong interest in Python programming / backend / data Important Notes This is a Paid Training Program with a job-oriented structure. Training Fees to be paid before the batch starts. Limited seats strictly First-Come, First-Served. Only for serious learners who want to build a tech career. No time-pass inquiries, please. Skills:- Python
Posted 1 week ago
8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: - Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports, i.e., Financial Crime, Financial Risk and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask Apply concurrent processing strategies and performance optimization for complex architectures Write clean, maintainable and well-documented code Develop comprehensive test suites to ensure code quality and reliability Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration Integrate backend services with databases and APIs Collaborate asynchronously with cross-functional team members Participate in occasional team meetings, code reviews and planning sessions. Core/Must Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience in implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development Understanding of authentication and authorization (e.g., JWT, OAuth) Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
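The generator requirement in this posting refers to lazy, chunked processing so large files never need to sit fully in memory; a minimal sketch (file name and record format are assumptions) might look like this.

    # Generator-based chunked processing: yield batches of rows lazily instead of loading everything.
    import csv

    def read_records(path, chunk_size=1000):
        """Yield lists of rows so the whole file never sits in memory at once."""
        with open(path, newline="") as fh:
            reader = csv.DictReader(fh)
            chunk = []
            for row in reader:
                chunk.append(row)
                if len(chunk) == chunk_size:
                    yield chunk
                    chunk = []
            if chunk:
                yield chunk

    def total_amount(path):
        # Consume the generator chunk by chunk; memory use stays bounded by chunk_size.
        return sum(
            float(row["amount"])
            for chunk in read_records(path)
            for row in chunk
        )

Note that for CPU-bound stages the GIL still serializes Python threads, so multiprocessing or native-extension libraries (such as NumPy) are the usual route to real parallelism.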
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Company Description WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees. Job Description Job Overview: Responsible for leading data science initiatives, developing advanced analytics models and ensuring successful execution of data-driven projects for clients in the retail sector. Will work closely with key client stakeholders to understand their business challenges and leverage data science to deliver actionable insights that drive business growth and efficiency. Lead the design, development and implementation of advanced analytics models, including predictive and prescriptive models for retail clients. Should be able to convert mathematical/statistics-based research into sustainable data science solutions. Candidate should be able to think from first principles to define & evangelize solutions for any client business problem. Leverage deep knowledge of the retail industry to develop data-driven solutions that address industry-specific challenges. Apply AI/ML statistical methods to solve complex business problems and determine new opportunities for clients. Ensure project delivery of high-quality, actionable insights that drive business decisions and outcomes. Ensure end-to-end lifecycle (scoping to delivery) of data science projects. Collaborate with cross-functional teams to ensure seamless project execution. Manage timelines, resources, and deliverables to meet client expectations and project goals. Drive innovation by exploring new data science techniques, tools, and technologies that can enhance the value delivered to clients. Strong hands-on experience with data science tools and technologies (e.g., Python, R, SQL, machine learning frameworks). Hands-on experience with a range of data science models including regression, classification, clustering, decision tree, random forest, support vector machine, naïve Bayes, GBM, XGBoost, multiple linear regression, logistic regression, and ARIMA/ARIMAX. Should be competent in Python (Pandas, NumPy, scikit-learn etc.), possess high levels of analytical skills and have experience in the creation and/or evaluation of predictive models. Preferred experience in areas such as time series analysis, market mix modelling, attribution modelling, churn modelling, market basket analysis, etc. Good communication and project management skills. Should be able to communicate effectively to a wide range of audiences, both technical and business. Adept in creating presentations, reports, etc. to present analysis findings to key client stakeholders. Strong team management skills with a passion for mentoring and developing talent.
Qualifications Educational Qualification: B.Tech/Master's in Statistics/Mathematics/Economics/Econometrics from Tier 1-2 institutions, or BE/B.Tech, MCA or MBA
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: - Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior technical, hands-on delivery role, requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars that our team supports, i.e., Financial Crime, Financial Risk and Compliance technology transformation, the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask Apply concurrent processing strategies and performance optimization for complex architectures Write clean, maintainable and well-documented code Develop comprehensive test suites to ensure code quality and reliability Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration Integrate backend services with databases and APIs Collaborate asynchronously with cross-functional team members Participate in occasional team meetings, code reviews and planning sessions. Core/Must Have Skills: Should have a minimum of 6+ years of professional Python development experience. Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience in implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy to interact with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to Data Science libraries or data-centric development Understanding of authentication and authorization (e.g., JWT, OAuth) Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets.
Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
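For the Flask/RESTful API requirement in this posting, a minimal hedged sketch might look like the following; the resource name and in-memory store are placeholders rather than anything from the posting.

    # Minimal Flask REST endpoints: create and fetch a resource backed by an in-memory store.
    from flask import Flask, jsonify, request

    app = Flask(__name__)
    ALERTS = {}  # stand-in for a real database / SQLAlchemy model

    @app.route("/alerts", methods=["POST"])
    def create_alert():
        payload = request.get_json(force=True)
        alert_id = len(ALERTS) + 1
        ALERTS[alert_id] = {"id": alert_id, "rule": payload.get("rule")}
        return jsonify(ALERTS[alert_id]), 201

    @app.route("/alerts/<int:alert_id>", methods=["GET"])
    def get_alert(alert_id):
        alert = ALERTS.get(alert_id)
        if alert is None:
            return jsonify({"error": "not found"}), 404
        return jsonify(alert)

    if __name__ == "__main__":
        app.run(debug=True)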
Posted 1 week ago
3.0 - 6.0 years
8 - 12 Lacs
Surat
Remote
The JD will be emailed to the interested candidates.
Posted 1 week ago
3.0 years
0 Lacs
Jaipur, Rajasthan, India
On-site
Role Summary 1. Demonstrate solid proficiency in Python development, writing clean, maintainable code. 2. Collaborator in the design and implementation of AI-driven applications leveraging large language models (LLMs). 3. Develop and maintain Django-based RESTful APIs to support backend services. 4. Integrate with LLM provider APIs (e.g., GPT, Claude, Cohere) and agent frameworks (LangChain, AgentStudio). 5. Build and optimize data pipelines for model training and inference using Pandas, NumPy, and Scikit-learn. 6. Ensure robust unit and integration testing via pytest to maintain high code quality. 7. Participate in agile ceremonies, contributing estimations, design discussions, and retrospectives. 8. Troubleshoot, debug, and optimize performance in multi-threaded and distributed environments. 9. Document code, APIs, and data workflows in accordance with software development best practices. 10. Continuously learn and apply new AI/ML tools, frameworks, and cloud services. Key Responsibilities 1. Write, review, and optimize Python code for backend services and data science workflows. 2. Design and implement Django REST APIs, ensuring scalability and security. 3. Integrate LLMs into applications: handle prompt construction, API calls, and result processing. 4. Leverage agent frameworks (LangChain, AgentStudio) to orchestrate complex LLM workflows. 5. Develop and maintain pytest suites covering unit, integration, and end-to-end tests. 6. Build ETL pipelines to preprocess data for model training and feature engineering. 7. Work with relational databases (PostgreSQL) and vector stores (FAISS, Weaviate, Milvus). 8. Containerize applications using Docker and deploy on Kubernetes or serverless platforms (AWS, GCP, Azure). 9. Monitor and troubleshoot application performance, logging, and error handling. 10. Collaborate with data scientists to deploy and serve ML models via FastAPI or vLLM. 11. Maintain CI/CD pipelines for automated testing and deployment. 12. Engage in technical learning sessions and share best practices across the team. Desired Skills & Qualifications - 1–3 years of hands-on experience in Python application development. - Proven pytest expertise, with a focus on test-driven development. - Practical knowledge of Django (or FastAPI) for building RESTful services. - Experience with LLM APIs (OpenAI, Anthropic, Cohere) and prompt engineering. - Familiarity with at least one agent framework (LangChain, AgentStudio). - Working experience in data science libraries: NumPy, Pandas, Scikit-learn. - Exposure to ML model serving tools (MLflow, FastAPI, vLLM). - Experience with container orchestration (Docker, Kubernetes, Docker Swarm). - Basic understanding of cloud platforms (AWS, Azure, or GCP). - Knowledge of SQL and database design; familiarity with vector databases. - Eagerness to learn emerging AI/ML technologies and frameworks. - Excellent problem-solving, debugging, and communication skills. Education & Attitude - Bachelor’s or Master’s in Computer Science, Data Science, Statistics, Mathematics, or related field. - Growth-mindset learner: proactive in upskilling and sharing knowledge. - Strong collaboration ethos and adaptability in a fast-paced AI environment.
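As a toy illustration of the retrieval step behind the RAG work mentioned above, the sketch below ranks stored chunks by cosine similarity to a query vector; real systems would use provider embeddings and a vector store such as FAISS or Weaviate, and the vectors here are made up.

    # Toy RAG-style retrieval: pick the stored chunk most similar to the query vector.
    import numpy as np

    doc_vectors = np.array([
        [0.9, 0.1, 0.0],   # chunk 0
        [0.2, 0.8, 0.1],   # chunk 1
        [0.4, 0.4, 0.7],   # chunk 2
    ])
    chunks = ["refund policy text", "shipping policy text", "warranty policy text"]
    query = np.array([0.85, 0.15, 0.05])  # would come from an embedding model in practice

    def cosine_sim(matrix, vec):
        return matrix @ vec / (np.linalg.norm(matrix, axis=1) * np.linalg.norm(vec))

    scores = cosine_sim(doc_vectors, query)
    best = int(np.argmax(scores))
    print(f"Most relevant chunk: {chunks[best]} (score={scores[best]:.3f})")
    # The selected chunk would then be inserted into the LLM prompt as context.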
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Join us at Seismic, a cutting-edge technology company leading the way in the SaaS industry. We specialize in delivering modern, scalable, and multi-cloud solutions that empower businesses to succeed in today's digital era. Leveraging the latest advancements in technology, including Generative AI, we are committed to driving innovation and transforming the way businesses operate. As we embark on an exciting journey of growth and expansion, we are seeking a talented Data Scientist to join our AI team in Hyderabad, India. As a Senior Data Scientist II , you will architect and develop complex AI applications, lead strategic technical initiatives, and mentor other data scientists and engineers to build next-gen AI capabilities. You will own high-impact projects and work with cross-functional teams to design, build, and maintain scalable, high-performance models and AI applications that deliver exceptional value to our customers. This position offers a unique opportunity to make an impact on our company's growth and success by bringing AI-powered features to life across our platform, including content discovery, learning and coaching, meeting intelligence and various AI capabilities. This is a hands-on, high-ownership role ideal for a senior individual contributor ready to scale AI in production. Seismic AI AI is one of the fastest-growing product areas in Seismic. We believe that AI, particularly Generative AI, will empower and transform how Enterprise sales and marketing organizations operate and interact with customers. Seismic Aura, our leading AI engine, is powering this change in the sales enablement space and is being infused across the Seismic enablement cloud. Our focus is to leverage AI across the Seismic platform to make our customers more productive and efficient in their day-to-day tasks, and to drive more successful sales outcomes. Why Join Us Opportunity to be a key contributor in a rapidly growing company and drive innovation in the SaaS industry. Work with cutting-edge technologies and be at the forefront of AI advancements. Competitive compensation package, including salary, bonus, and equity options. A supportive, inclusive work culture. Professional development opportunities and career growth potential in a dynamic and collaborative environment. At Seismic, we’re committed to providing benefits and perks for the whole self. To explore our benefits available in each country, please visit the Global Benefits page. Please be aware we have noticed an increase in hiring scams potentially targeting Seismic candidates. Read our full statement on our Careers page. Seismic is the global leader in AI-powered enablement, empowering go-to-market leaders to drive strategic growth and deliver exceptional customer experiences at scale. The Seismic Enablement Cloud™ is the only unified AI-powered platform that prepares customer-facing teams with the skills, content, tools, and insights needed to maximize every buyer interaction and strengthen client relationships. Trusted by more than 2,000 organizations worldwide, Seismic helps businesses achieve measurable outcomes and accelerate revenue growth. Seismic is headquartered in San Diego with offices across North America, Europe, Asia and Australia. Learn more at seismic.com. Seismic is committed to building an inclusive workplace that ignites growth for our employees and creates a culture of belonging that allows all employees to be seen and valued for who they are. Learn more about DEI at Seismic here. 
Key Responsibilities AI Application Development: Architect and develop robust and scalable data science and AI applications, including NLP, classification, retrieval-augmented generation (RAG), and conversational/agentic, and multi-agent systems. Mentorship and Leadership: Mentor and coach junior and mid-level data scientists and engineers, promoting technical excellence and knowledge sharing across the team. Model Optimization: Design and implement high-performance machine learning models and pipelines. Continuously improve latency, robustness, accuracy, and scalability. Innovation and Technology Adoption: Evaluate and implement new tools, frameworks, and methodologies to improve the efficiency and quality of AI systems. Product Integration: Collaborate with AI engineers, software engineers, and product managers to seamlessly integrate AI features across Seismic’s products. Cross-functional Collaboration: Partner with UX, engineering, and product teams to design end-to-end intelligent experiences for our users. Decision-making and Ownership: Take ownership of complex decisions within your domain and contributing to goal-setting and strategy. Experience Bachelor’s degree and 8+ years of experience, or an advanced degree (Master’s or PhD) with 6+ years of industry experience in data science or AI. Technical Expertise Deep knowledge of AI and data science, including Generative AI, LLMs (OpenAI, Azure, Google, open-source), RAG pipelines, prompt engineering, NLP, and image models. Strong proficiency in Python, along with libraries such as Pandas, NumPy, Scikit-learn, PyTorch, or TensorFlow. Hands-on experience with HuggingFace, LangChain, and cloud-native AI services. Cloud Expertise Experience with AWS, Azure, or GCP for model training, deployment, and data workflows. Familiarity with MLOps and model lifecycle management in production environments. Product Thinking Ability to translate complex business challenges into technical solutions that scale. Experience in collaborating with product management and design in a product triad model. Proven track record of delivering AI-powered features from concept to production. Education Bachelor’s or Master’s degree in Computer Science, Data Science, Engineering, or a related field. Other Skills Excellent communication and collaboration skills. Strong decision-making ability and a thoughtful, data-driven approach to solving problems. Experience working in a fast-paced SaaS or tech-driven environment. If you are an individual with a disability and would like to request a reasonable accommodation as part of the application or recruiting process, please click here. Headquartered in San Diego and with employees across the globe, Seismic is the global leader in sales enablement , backed by firms such as Permira, Ameriprise Financial, EDBI, Lightspeed Venture Partners, and T. Rowe Price. Seismic also expanded its team and product portfolio with the strategic acquisitions of SAVO, Percolate, Grapevine6, and Lessonly. Our board of directors is composed of several industry luminaries including John Thompson, former Chairman of the Board for Microsoft. Seismic is an equal opportunity employer and all qualified applicants will receive consideration for employment without regard to gender, age, race, religion, or any other classification which is protected by applicable law. Please note this job description is not designed to cover or contain a comprehensive listing of activities, duties or responsibilities that are required of the employee for this job. 
Duties, responsibilities and activities may change at any time with or without notice.
Posted 1 week ago
5.0 - 10.0 years
6 - 10 Lacs
Vadodara
Work from Office
We are seeking a highly skilled and experienced Senior Deep Learning Engineer to join our team. This individual will lead the design, development, and deployment of cutting-edge deep learning models and systems. The ideal candidate is passionate about leveraging state-of-the-art machine learning techniques to solve complex real-world problems, thrives in a collaborative environment, and has a proven track record of delivering impactful AI solutions. Key Responsibilities: Model Development and Optimization: Design, train, and deploy advanced deep learning models for various applications such as computer vision, natural language processing, speech recognition, and recommendation systems. Optimize models for performance, scalability, and efficiency on various hardware platforms (e.g., GPUs, TPUs). Research and Innovation: Stay updated with the latest advancements in deep learning, AI, and related technologies. Develop novel architectures and techniques to push the boundaries of what's possible in AI applications. System Design and Deployment: Architect and implement scalable and reliable machine learning pipelines for training and inference. Collaborate with software and DevOps engineers to deploy models into production environments. Collaboration and Leadership: Work closely with cross-functional teams, including data scientists, product managers, and software engineers, to define project goals and deliverables. Provide mentorship and technical guidance to junior team members and peers. Data Management: Collaborate with data engineering teams to preprocess, clean, and augment large datasets. Develop tools and processes for efficient data handling and annotation. Performance Evaluation: Define and monitor key performance metrics (KPIs) to evaluate model performance and impact. Conduct rigorous A/B testing and error analysis to continuously improve model outputs. Qualifications and Skills: Education: Bachelor's or Master's degree in Computer Science, Electrical Engineering, or a related field. PhD preferred. Experience: 5+ years of experience in developing and deploying deep learning models. Proven track record of delivering AI-driven products or research with measurable impact. Technical Skills: Proficiency in deep learning frameworks such as TensorFlow, PyTorch, or JAX. Strong programming skills in Python, with experience in libraries like NumPy, Pandas, and Scikit-learn. Familiarity with distributed computing frameworks such as Spark or Dask. Hands-on experience with cloud platforms (AWS or GCP) and containerization tools (Docker, Kubernetes). Domain Expertise: Experience with at least one specialized domain, such as computer vision, NLP, or time-series analysis. Familiarity with reinforcement learning, generative models, or other advanced AI techniques is a plus. Soft Skills: Strong problem-solving skills and the ability to work independently. Excellent communication and collaboration abilities. Commitment to fostering a culture of innovation and excellence.
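For context, a bare-bones PyTorch training loop of the kind this role involves might look like the following; the architecture, synthetic data, and hyperparameters are placeholders only.

    # Minimal PyTorch training loop: synthetic data, small MLP, cross-entropy loss.
    import torch
    from torch import nn

    model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    X = torch.randn(256, 20)             # synthetic features
    y = torch.randint(0, 2, (256,))      # synthetic labels

    for epoch in range(5):
        optimizer.zero_grad()
        logits = model(X)
        loss = loss_fn(logits, y)
        loss.backward()
        optimizer.step()
        print(f"epoch {epoch}: loss={loss.item():.4f}")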
Posted 1 week ago
0.0 - 2.0 years
8 - 12 Lacs
Bengaluru
Work from Office
We are looking for a passionate Python developer to join our team at Billions United. You will be responsible for developing and implementing high-quality software solutions, creating complex applications using cutting-edge programming features and frameworks, and collaborating with other teams to define, design, and ship new features. You will also be working on data engineering problems and building data pipelines. Objectives of this Role Develop, test, and maintain high-quality software using the Python programming language. Participate in the entire software development lifecycle: building, testing, and delivering high-quality solutions. Collaborate with cross-functional teams to identify and solve complex problems. Write clean and reusable code that can be easily maintained and scaled. Your Tasks Create large-scale data processing pipelines to support machine learning development. Participate in code reviews to ensure code quality and identify areas for improvement. Debug code and troubleshoot Python-related issues. Stay updated with emerging trends and technologies in Python development. Required Skills and Qualifications Freshers or 1+ years of experience as a Python Developer with a strong portfolio of projects. Bachelor's degree in Computer Science, Software Engineering, or a related field. In-depth knowledge of Python stacks, frameworks, and tools such as NumPy, SciPy, Pandas, Dask, spaCy, NLTK, Scikit-learn, and PyTorch. Experience with front-end technologies like HTML, CSS, and JavaScript. Familiarity with SQL and NoSQL databases. Excellent problem-solving, communication, and collaboration skills. Preferred Skills and Qualifications Experience with frameworks such as Django, Flask, or Pyramid. Knowledge of data science and machine learning concepts and tools. Experience with cloud platforms such as AWS, Google Cloud, or Azure. Contributions to open-source Python projects or active involvement in the Python community.
Posted 1 week ago
6.0 years
0 Lacs
Bangalore Urban, Karnataka, India
On-site
Job Role: Data Scientist Location: Bangalore, Chennai, Hyderabad, Pune, Kochi, Coimbatore Experience : 6 - 10 Years Job Description Mandatory Skills: 5+ years of experience in AI/ML, data engineering, or software development Strong Programming Skills In Python Familiarity with common data science libraries (e.g., NumPy, pandas, scikit-learn) Understanding of machine learning, deep learning, and NLP concepts Basic knowledge of API integrations Passion for Generative AI with excellent problem-solving and team collaboration skills Nice To Have Skills Experience with Amazon Bedrock Experience With Claude API (Anthropic) Knowledge of Retrieval-Augmented Generation (RAG) or multi-agent systems (e.g., CrewAI)
Posted 1 week ago
3.0 - 6.0 years
0 - 12 Lacs
Surat
Work from Office
The JD will be emailed to the interested candidates. Work from home
Posted 1 week ago
11.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description: - You'll be Responsible for: Design, develop and implement cutting-edge AI/ML solutions, including Large Language Models (LLMs) and Generative AI applications Lead projects end-to-end while mentoring team members in AI-ML, including traditional ML and emerging AI technologies Drive innovation in AI agent development and orchestration for automated decision-making systems Establish best practices for responsible AI development and deployment What you'd have: AI-ML Engineer or Data Scientist with 3 – 11 years of relevant experience Strong expertise in modern AI frameworks and tools: ML/DL frameworks (TensorFlow, PyTorch, Keras, Sklearn) LLM frameworks (LangChain, LlamaIndex, CrewAI, AutoGen) Vector databases (Pinecone, Weaviate, ChromaDB) – Good To Have Hands-on experience with: Generative AI and foundation models Prompt engineering and LLM fine-tuning AI agent development and orchestration RAG (Retrieval-Augmented Generation) systems Proven experience in data preparation, including: Advanced data preprocessing techniques Feature engineering Data quality assessment and improvement Proficiency in data exploration tools (Pandas, NumPy, Polars) Strong programming skills in Python; R is a plus Deep understanding of: Statistics and probability ML/DL algorithms AI governance and responsible AI practices Experience with: Large-scale data processing Model deployment and MLOps Cost optimization for AI systems Track record of 3-5 successful AI/ML projects in production Excellent communication skills and team leadership ability Continuous learning mindset for emerging AI technologies Skills: Statistics and Mathematics Classical Machine Learning Deep Learning and Neural Networks Generative AI and LLMs Agentic AI Development MLOps and Production Engineering Data Engineering Model Optimization and Tuning AI Governance and Ethics Team Leadership Why join us? Impactful Work: Play a pivotal role in safeguarding Tanla's assets, data, and reputation in the industry. Tremendous Growth Opportunities: Be part of a rapidly growing company in the telecom and CPaaS space, with opportunities for professional development. Innovative Environment: Work alongside a world-class team in a challenging and fun environment, where innovation is celebrated. Tanla is an equal opportunity employer. We champion diversity and are committed to creating an inclusive environment for all employees. www.tanla.com
Posted 1 week ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description Job Title: Data Science Candidate Specification: 6+ years, Notice - Immediate to 15 days, Hybrid model. Job Description 5+ years of hands-on experience as an AI Engineer, Machine Learning Engineer, or a similar role focused on building and deploying AI/ML solutions. Strong proficiency in Python and its relevant ML/data science libraries (e.g., NumPy, Pandas, Scikit-learn, TensorFlow, PyTorch). Extensive experience with at least one major deep learning framework such as TensorFlow, PyTorch, or Keras. Solid understanding of machine learning principles, algorithms (e.g., regression, classification, clustering, ensemble methods), and statistical modeling. Experience with cloud platforms (e.g., AWS, Azure, GCP) and their AI/ML services (e.g., SageMaker, Azure ML, Vertex AI). Skills Required Role: Data Science (AI/ML) Industry Type: IT Services & Consulting Functional Area: IT-Software Required Education: Bachelor Degree Employment Type: Full Time, Permanent Key Skills: DATA SCIENCE, AI ENGINEER, MACHINE LEARNING, AI ML, PYTHON, AWS Other Information Job Code: GO/JC/686/2025 Recruiter Name: Sheena Rakesh
Posted 1 week ago
11.0 - 20.0 years
90 - 95 Lacs
Bengaluru
Work from Office
Lead data science team to deliver ML/NLP solutions. Build scalable models using Python, ensure data governance, collaborate across teams, and present insights. Requires expertise in ML, NLP, Python, data visualization, and strong leadership.
Posted 1 week ago
0 years
0 Lacs
India
On-site
Sanctity AI is a Netherlands-based startup founded by an IIT alum, specializing in ethical, safe, and impactful artificial intelligence. Our agile team is deeply focused on critical areas like AI alignment, responsible LLM training, prompt orchestration, and advanced agent infrastructure. In a landscape where many talk ethics, we build and deploy solutions that genuinely embody ethical AI principles. Sanctity AI is positioned at the forefront of solving real-world alignment challenges, shaping the future of trustworthy artificial intelligence. We leverage proprietary algorithms, rigorous ethical frameworks, and cutting-edge research to deliver AI solutions with unparalleled transparency, robustness, and societal impact. Sanctity AI represents a rare opportunity in the rapidly evolving AI ecosystem, committed to sustainable innovation and genuine human-AI harmony.

The Role
As an AI ML Intern reporting directly to the founder, you'll go beyond just coding. You'll own whole pipelines, from data wrangling to deploying cutting-edge ML models in production. You'll also get hands-on experience with large language models (LLMs), prompt engineering, semantic search, and retrieval-augmented generation. Whether it's spinning up APIs in FastAPI, containerizing solutions with Docker, or exploring vector and graph databases like Pinecone and Neo4j, you'll be right at the heart of our AI innovation.

What You'll Tackle
Data to Insights: Dive into heaps of raw data and turn it into actionable insights that shape real decisions.
Model Building & Deployment: Use Scikit-learn, XGBoost, LightGBM, and advanced deep learning frameworks (TensorFlow, PyTorch, Keras) to develop state-of-the-art models, then push them to production, scaling on AWS, GCP, or other cloud platforms.
LLM & Prompt Engineering: Fine-tune and optimize large language models. Experiment with prompt strategies and incorporate Retrieval-Augmented Generation (RAG) for more insightful outputs (see the illustrative sketch after this posting).
Vector & Graph Databases: Implement solutions using Pinecone, Neo4j, or similar technologies for advanced search and data relationships.
Microservices & Big Data: Leverage FastAPI (or similar frameworks) to build robust APIs. If you love large-scale data processing, dabble in Apache Spark, Hadoop, or Kafka to handle the heavy lifting.
Iterative Improvement: Observe model performance, gather metrics, and keep refining until the results shine.

Who You Are
Python Pro: You write clean, efficient Python code using libraries like Pandas, NumPy, and Scikit-learn.
Passionate About AI/ML: You have a solid grasp of algorithms and can't wait to explore deep learning or advanced NLP.
LLM Enthusiast: You are familiar with training or fine-tuning large language models and love the challenge of prompt engineering.
Cloud & Containers Savvy: You have at least toyed with AWS, GCP, or similar, and have some experience with Docker or other containerization tools.
Data-Driven & Detail-Oriented: You enjoy unearthing insights in noisy datasets and take pride in well-documented, maintainable code.
Curious & Ethical: You believe AI should be built responsibly and love learning about new ways to do it better.
Languages: You can fluently communicate complex technical ideas in English. Fluency in Dutch, Spanish, or French is a plus.
Math Wizard: You have a strong grip on advanced mathematics and statistical modeling. This is a core requirement.

Why Join Us?
Real-World Impact: Your work will address real-world and industry challenges, problems that genuinely need AI solutions.
Mentorship & Growth: Team up daily with founders and seasoned AI pros, accelerating your learning and skill-building.
Experimentation Culture: We encourage big ideas and bold experimentation. Want to try a new approach? Do it.
Leadership Path: Show us your passion and skills, and you could move into a core founding team member role, shaping our future trajectory.

Interested?
Send over your résumé, GitHub repos, or any project links that showcase your passion and talent. We can't wait to see how you think, build, and innovate. Let's team up to create AI that isn't just powerful, but also responsibly built for everyone.
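For a flavor of the retrieval-augmented generation and FastAPI work this role describes, here is a minimal, hedged sketch of the retrieval step in a RAG-style service. It is not Sanctity AI's actual code or stack: the endpoint, the embedding model, and the documents are placeholder assumptions, and a small in-memory index stands in for a vector database such as Pinecone.

# Illustrative sketch only: in-memory index instead of Pinecone/Neo4j;
# embedding model and documents are placeholder assumptions.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sentence_transformers import SentenceTransformer

app = FastAPI()
embedder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed embedding model

DOCS = [
    "Refunds are accepted within 30 days of purchase.",
    "Standard shipping takes 3 to 5 business days.",
    "All devices carry a one-year limited warranty.",
]
DOC_VECS = embedder.encode(DOCS, normalize_embeddings=True)  # unit-length vectors

class Query(BaseModel):
    question: str
    top_k: int = 2

@app.post("/retrieve")
def retrieve(q: Query):
    # Embed the question and rank documents by cosine similarity (dot product
    # of normalized vectors); the top passages would then go into an LLM prompt.
    q_vec = embedder.encode([q.question], normalize_embeddings=True)[0]
    scores = DOC_VECS @ q_vec
    best = np.argsort(scores)[::-1][: q.top_k]
    return {"passages": [DOCS[i] for i in best], "scores": [float(scores[i]) for i in best]}

Assuming the file is saved as main.py, it could be run locally with: uvicorn main:app --reload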
Posted 1 week ago
7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Title: Manager – Senior ML Engineer (Full Stack)

About Firstsource
Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Job Summary
The Manager – Senior ML Engineer (Full Stack) will be responsible for leading the development and integration of Generative AI (GenAI) technologies, writing code modules, and managing full-stack development projects. The ideal candidate will have a strong background in Python and a proven track record in machine learning and full-stack development.

Required Skills
Strong proficiency in Python programming.
Experience with data analysis and visualization libraries like Pandas, NumPy, Matplotlib, and Seaborn.
Proven experience in machine learning and AI development.
Experience with Generative AI (GenAI) development and integration.
Full-stack development experience, including front-end and back-end technologies.
Proficiency in web development frameworks such as Django or Flask.
Knowledge of machine learning frameworks such as TensorFlow, Keras, PyTorch, or Scikit-learn.
Experience with RESTful APIs and web services integration (a minimal serving sketch follows this posting).
Familiarity with SQL and NoSQL databases, such as PostgreSQL, MySQL, MongoDB, or Redis.
Experience with cloud platforms like AWS, Azure, or Google Cloud.
Knowledge of DevOps practices and tools like Docker, Kubernetes, Jenkins, and Git.
Proficiency in writing unit tests and using debugging tools.
Effective communication and interpersonal skills.
Ability to work in a fast-paced, dynamic environment.
Knowledge of software development best practices and methodologies.

Key Responsibilities
Lead the development and integration of Generative AI (GenAI) technologies to enhance our product offerings.
Write, review, and maintain code modules, ensuring high-quality and efficient code.
Oversee full-stack development projects, ensuring seamless integration and optimal performance.
Collaborate with cross-functional teams to define project requirements, scope, and deliverables.
Manage and mentor a team of developers and engineers, providing guidance and support to achieve project goals.
Stay updated with the latest industry trends and technologies to drive innovation within the team.
Ensure compliance with best practices in software development, security, and data privacy.
Troubleshoot and resolve technical issues in a timely manner.

Qualifications
Bachelor's degree in Computer Science or an engineering degree.
Minimum of 7 years of experience in machine learning engineering or a similar role.
Demonstrated experience in managing technology projects from inception to completion.
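As a rough illustration of the Flask and RESTful API skills listed above (not Firstsource's actual codebase), the sketch below exposes a previously trained scikit-learn model behind a single prediction endpoint. The model file name and the expected feature layout are assumptions.

# Illustrative sketch only: "model.joblib" and the feature layout are assumptions.
import joblib
import numpy as np
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # a fitted scikit-learn estimator saved earlier

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[1.2, 3.4, ...], ...]} with the same
    # feature order used at training time.
    payload = request.get_json(force=True)
    features = np.asarray(payload["features"], dtype=float)
    predictions = model.predict(features).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)

A quick local check could look like: curl -X POST http://localhost:8000/predict -H "Content-Type: application/json" -d '{"features": [[1.2, 3.4]]}'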
Posted 1 week ago
0.0 - 1.0 years
1 - 3 Lacs
Chennai
Work from Office
Job Title: Freshers - Data Science
Company: Digient Technologies Pvt. Ltd.
Location: Chennai (Onsite)
Experience: 0 - 1 Year
Functional Area: Data Science & Analytics

About Us
Digient Technologies is a dynamic and innovative B2B online iGaming company, dedicated to providing cutting-edge solutions in the rapidly evolving iGaming industry. We pride ourselves on delivering high-quality products and services that enable our clients to stay ahead in the competitive landscape of online iGaming.

Overview
Digient Technologies is looking for fresh graduates who are passionate about data to join our team as Data Science Freshers. Selected candidates will undergo a structured 3-month comprehensive program covering Python, SQL, Machine Learning, and Data Analytics. Upon successful completion, they will be absorbed as full-time employees at Digient Technologies.

Key Responsibilities
Participate in our structured comprehensive program covering Python, Machine Learning, Data Analytics, and related tools (a short illustrative workflow sketch follows this posting).
Assist in building and maintaining data models and analytical solutions.
Collect, clean, and analyze data to derive meaningful insights.
Support in building dashboards, reports, and predictive models.
Collaborate with cross-functional teams to solve business problems using data.
Document workflows, findings, and recommendations.

Eligibility Criteria
Bachelor's degree in Computer Science, Data Science, Mathematics, Statistics, or a related field.
Basic knowledge of Python, SQL, and statistics.
Strong analytical thinking and problem-solving skills.
Passion for working with data and numbers.
Good communication and interpersonal skills.
Willingness to learn and take ownership of tasks.

Good to Have
Familiarity with libraries like Pandas, NumPy, Scikit-learn, or similar.
Exposure to data visualization tools like Power BI, Tableau, or Matplotlib.
Basic understanding of Machine Learning concepts.
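To give a flavor of the Python and machine learning content covered in the program, here is a minimal, hypothetical workflow of the kind a fresher would practice. The CSV file, column names, and target are invented for illustration and are not Digient data.

# Illustrative sketch only: dataset, columns, and target are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load and clean a small tabular dataset.
df = pd.read_csv("players.csv")
df = df.dropna(subset=["deposits", "sessions", "churned"])

X = df[["deposits", "sessions"]]  # assumed numeric feature columns
y = df["churned"]                 # assumed binary target (0/1)

# Hold out 20% of rows for evaluation, then fit a simple baseline model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)

print("Test accuracy:", accuracy_score(y_test, clf.predict(X_test)))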
Posted 1 week ago