7.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Job Description

In This Role, Your Responsibilities Will Be:
- Analyze large, complex data sets using statistical methods and machine learning techniques to extract meaningful insights.
- Develop and implement predictive models and algorithms to solve business problems and improve processes.
- Create visualizations and dashboards to effectively communicate findings and insights to stakeholders.
- Work with data engineers, product managers, and other team members to understand business requirements and deliver solutions.
- Clean and preprocess data to ensure accuracy and completeness for analysis.
- Prepare and present reports on data analysis, model performance, and key metrics to stakeholders and management.
- Participate in regular Scrum events such as Sprint Planning, Sprint Review, and Sprint Retrospective.
- Stay updated with the latest industry trends and advancements in data science and machine learning techniques.

Who You Are:
You are committed to self-development: you look for ways to build the skills you will need in the future, and you learn and grow from experience. Opportunities will be available, and you must be able to stretch yourself to execute better and be flexible enough to take up new activities.

For This Role, You Will Need:
- Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a master's degree or higher is preferred.
- 7-10 years of total industry experience.
- More than 5 years of experience in a data science or analytics role, with a strong track record of building and deploying models.
- Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNNs, RNNs, k-NN, Naive Bayes, SVMs, and decision forests.
- Experience with NLP, NLG, and large language models such as GPT, BERT, LLaMA, LaMDA, BLOOM, PaLM, DALL-E, etc.
- Proficiency in programming languages such as Python or R, and experience with data manipulation libraries (e.g., pandas, NumPy).
- Experience with machine learning frameworks and libraries such as TensorFlow and PyTorch.
- Familiarity with data visualization tools (e.g., Tableau, Power BI, Matplotlib, Seaborn).
- Experience with SQL and NoSQL databases such as MongoDB and Cassandra, as well as vector databases.
- Strong analytical and problem-solving skills, with the ability to work with complex data sets and extract actionable insights.
- Excellent verbal and written communication skills, with the ability to present complex technical information to non-technical stakeholders.

Preferred Qualifications that Set You Apart:
- Prior experience in the engineering domain is nice to have.
- Prior experience working with teams in the Scaled Agile Framework (SAFe) is nice to have.
- Relevant certifications in data science from reputed universities specializing in AI.
- Familiarity with cloud platforms; Microsoft Azure is preferred.
- Ability to work in a fast-paced environment and manage multiple projects simultaneously.
- Strong analytical and troubleshooting skills, with the ability to resolve issues related to model performance and infrastructure.

Our Commitment to Diversity, Equity & Inclusion
At Emerson, we are committed to fostering a culture where every employee is valued and respected for their unique experiences and perspectives. We believe a diverse and inclusive work environment contributes to the rich exchange of ideas and diversity of thought that inspires innovation and brings the best solutions to our customers. This philosophy is fundamental to living our company's values and our responsibility to leave the world in a better place. Learn more about our Culture & Values and about Diversity, Equity & Inclusion at Emerson.

If you have a disability and are having difficulty accessing or using this website to apply for a position, please contact: idisability.administrator@emerson.com.
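Classical algorithms like the k-NN mentioned in this posting can be illustrated with a minimal example; the following is a hypothetical pure-Python k-nearest-neighbours sketch (toy data, not tied to any Emerson codebase):

```python
from collections import Counter
import math

def knn_predict(train, query, k=3):
    """Classify `query` by majority vote among the k nearest training points.
    `train` is a list of (features, label) pairs; distance is Euclidean."""
    dists = sorted(
        (math.dist(features, query), label) for features, label in train
    )
    top_k = [label for _, label in dists[:k]]
    return Counter(top_k).most_common(1)[0][0]

# Toy data: two well-separated clusters labelled "a" and "b"
train = [((0, 0), "a"), ((0, 1), "a"), ((1, 0), "a"),
         ((5, 5), "b"), ((5, 6), "b"), ((6, 5), "b")]
print(knn_predict(train, (0.5, 0.5)))  # → a
print(knn_predict(train, (5.5, 5.5)))  # → b
```

Production work would of course reach for scikit-learn's `KNeighborsClassifier`, but the voting logic is exactly this.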
Posted 1 month ago
3.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Position: Data Scientist - II
Experience Level: 3-6 Years
Industry: E-commerce
Location: Vikhroli, Mumbai
Employment Type: Full-time

Responsibilities:
- Advanced Data Analysis: Conduct in-depth analyses of large datasets from multiple sources, such as clickstream data, sales transactions, and user behavior, to uncover actionable insights.
- Machine Learning: Develop, implement, and maintain sophisticated machine learning models for use cases including recommendations, personalization, customer segmentation, demand forecasting, and price optimization.
- A/B Testing: Design and analyze experiments to evaluate the impact of new product features, marketing campaigns, and user experiences on business metrics.
- Data Engineering Collaboration: Work closely with data engineers to ensure robust, accurate, and scalable data pipelines for analysis and model deployment.
- Cross-functional Collaboration: Partner with product, marketing, and engineering teams to identify data needs, define analytical approaches, and deliver impactful insights.
- Dashboard Development: Create and maintain dashboards using modern visualization tools to present findings and track key performance metrics.
- Exploratory Data Analysis: Investigate trends, anomalies, and patterns in data to guide strategy and optimize performance across various business units.
- Optimization Strategies: Apply statistical and machine learning methods to optimize critical areas such as supply chain operations, customer acquisition, retention strategies, and pricing models.

Required Skills:
- Programming: Proficiency in Python (preferred) or R for data analysis and machine learning.
- SQL Expertise: Advanced skills in querying and managing large datasets.
- Machine Learning Frameworks: Hands-on experience with tools like scikit-learn, TensorFlow, or PyTorch.
- Data Processing: Strong expertise in data wrangling and transformation for model readiness.
- A/B Testing: Deep understanding of experimental design and statistical inference.
- Visualization: Experience with tools such as Tableau, Power BI, Matplotlib, or Seaborn to create insightful visualizations.
- Statistics: Strong foundation in probability, hypothesis testing, and predictive modeling techniques.
- Communication: Exceptional ability to translate technical findings into actionable business insights.

Preferred Qualifications:
- Domain Knowledge: Prior experience with e-commerce datasets, including user behavior, transaction data, and inventory management.
- Big Data: Familiarity with Hadoop, Spark, or BigQuery for managing and analyzing massive datasets.
- Cloud Platforms: Proficiency with cloud platforms like AWS, Google Cloud Platform (GCP), or Azure for data storage, computation, and model deployment.
- Business Acumen: Understanding of critical e-commerce metrics such as conversion rates, customer lifetime value (LTV), and customer acquisition cost (CAC).

Educational Qualifications:
- Bachelor's or Master's degree in Data Science, Computer Science, Mathematics, Statistics, or a related quantitative field.
- Advanced certifications in machine learning or data science are a plus.

About Company:
Founded in 2012, Purplle has emerged as one of India's premier omnichannel beauty destinations, redefining the way millions shop for beauty. With 1,000+ brands, 60,000+ products, and over 7 million monthly active users, Purplle has built a powerhouse platform that seamlessly blends online and offline experiences. Expanding its footprint in 2022, Purplle introduced 6,000+ offline touchpoints and launched 8 exclusive stores, strengthening its presence beyond digital. Beyond hosting third-party brands, Purplle has successfully scaled its own D2C powerhouses (FACES CANADA, Good Vibes, Carmesi, Purplle, and NY Bae), offering trend-driven, high-quality beauty essentials. What sets Purplle apart is its technology-driven, hyper-personalized shopping experience.
By curating detailed user personas, enabling virtual makeup trials, and delivering tailored product recommendations based on personality, search intent, and purchase behavior, Purplle ensures a unique, customer-first approach. In 2022, Purplle achieved unicorn status, becoming India’s 102nd unicorn, backed by an esteemed group of investors including ADIA, Kedaara, Premji Invest, Sequoia Capital India, JSW Ventures, Goldman Sachs, Verlinvest, Blume Ventures, and Paramark Ventures. With a 3,000+ strong team and an unstoppable vision, Purplle is set to lead the charge in India’s booming beauty landscape, revolutionizing the way the nation experiences beauty.
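The A/B testing requirement in this posting largely amounts to comparing conversion rates between two variants. A minimal two-sided, two-proportion z-test can be sketched in pure Python using a pooled normal approximation (the numbers below are hypothetical, not Purplle data; in practice you would use `statsmodels.stats.proportion.proportions_ztest`):

```python
import math

def two_proportion_ztest(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for the difference of two conversion rates.
    Returns (z statistic, p-value) under a pooled normal approximation."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Two-sided p-value from the standard normal CDF via erf
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Hypothetical experiment: control converts 200/5000, variant 260/5000
z, p = two_proportion_ztest(200, 5000, 260, 5000)
print(round(z, 2), round(p, 4))  # z ≈ 2.86, p ≈ 0.004
```

With p well below 0.05, the (made-up) variant's lift from 4.0% to 5.2% would be judged statistically significant.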
Posted 1 month ago
2.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Organization and Background
Established in 1996, Esri India Technologies Pvt. Ltd. (Esri India), the market leader in geographic information system (GIS) software, location intelligence, and mapping solutions in India, helps customers unlock the maximum potential of their data to improve operational and business decisions. It has delivered pioneering enterprise GIS technology, powered by ArcGIS, to more than 6,500 organizations across government, the private sector, academia, and the non-profit sector. The company has also introduced 'Indo ArcGIS', a unique GIS solution and data offering suited to government organizations. Esri India collaborates with a rich ecosystem of partner organizations to deliver GIS and location intelligence-based solutions. Headquartered in Noida (Delhi NCR), the company has 1 million users in the country and was Great Place to Work-Certified® in 2021, 2022, and 2023. Website: www.esri.in

Role Overview
This position works closely with customers to understand their needs and to develop and deliver models for India-specific GeoAI use cases. You will be responsible for conceptualizing and developing solutions using Esri products. Additionally, the role involves representing the organization at conferences and forums, showcasing expertise and promoting Esri solutions. You should be capable of working independently, exhibit strong problem-solving skills, and effectively communicate complex geospatial concepts to diverse audiences.

Roles & Responsibilities
- Consult closely with customers to understand their needs.
- Develop and pitch data science solutions by mapping business problems to machine learning or advanced analytics approaches.
- Build high-quality analytics systems that solve our customers' business problems using techniques from data mining, statistics, and machine learning.
- Write clean, collaborative, and version-controlled code to process big data and streaming data from a variety of sources and types.
- Perform feature engineering, model selection, and hyperparameter optimization to yield high predictive accuracy, and deploy the model to production in a cloud, on-premises, or hybrid environment.
- Implement best practices and patterns for geospatial machine learning and develop reusable technical components for demonstrations and rapid prototyping.
- Integrate ArcGIS with popular deep learning libraries such as PyTorch.
- Keep up to date with the latest technology trends in machine and deep learning and incorporate them into project delivery.
- Support estimation and feasibility analysis for various RFPs.

Desired Skillset
- 2+ years of practical machine learning experience or applicable academic/lab work.
- Proven data science and AI skills with Python, PyTorch, and Jupyter Notebooks.
- Experience in building and optimizing supervised and unsupervised machine learning models, including deep learning and various other modern data science techniques.
- Expertise in one or more of the following areas:
  - Traditional and deep learning-based computer vision techniques, with the ability to develop deep learning models for computer vision tasks (image classification, object detection, semantic and instance segmentation, GANs, super-resolution, image inpainting, and more).
  - Convolutional neural networks such as VGG, ResNet, Faster R-CNN, Mask R-CNN, and others.
  - Transformer models applied to computer vision.
  - 3D deep learning with point clouds, meshes, or voxels, with the ability to develop 3D geospatial deep learning models such as PointCNN, MeshCNN, and more.
- A fundamental understanding of mathematical and machine learning concepts such as calculus, backpropagation, ReLU, Bayes' theorem, Random Forests, time series analysis, etc.
- Experience with applied statistics concepts.
- Ability to perform data extraction, transformation, and loading from multiple sources and sinks.
- Experience in data visualization in Jupyter Notebooks using Matplotlib and other libraries.
- Experience with hyperparameter tuning and training models to a high level of accuracy.
- Experience with LLMs is preferred.
- Self-motivated, lifelong learner.

Non-negotiable Skills
- Master's degree in RS, GIS, Geoinformatics, or a related field, with knowledge of RS & GIS.
- Knowledge and experience of Esri products such as ArcGIS Pro, ArcGIS Online, ArcGIS Enterprise, etc.
- Experience with image analysis and image processing techniques in SAR, multispectral, and hyperspectral imagery.
- Strong communication skills, including to non-technical audiences.
- Strong Python coding skills.
- Openness to travel and readiness to work at client sites (India).
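Hyperparameter optimization, listed above, is at its simplest an exhaustive search over candidate values scored by validation error. A toy dependency-free sketch with a hypothetical 1-D ridge fit (a stand-in for the PyTorch tooling this role actually uses):

```python
def fit_w(xs, ys, alpha):
    """Closed-form 1-D ridge fit for y ≈ w*x with regularization strength alpha."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + alpha)

def val_error(w, xs, ys):
    """Mean squared error of the fit on a validation set."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grid_search(train, val, alphas):
    """Return the alpha (and its error) with the lowest validation error."""
    best_alpha, best_err = None, float("inf")
    for alpha in alphas:
        w = fit_w(*train, alpha)
        err = val_error(w, *val)
        if err < best_err:
            best_alpha, best_err = alpha, err
    return best_alpha, best_err

train = ([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])   # roughly y = 2x
val = ([5, 6], [10.1, 11.9])
alpha, err = grid_search(train, val, [0.0, 0.1, 1.0, 10.0])
print(alpha)
```

Real workflows swap the toy model for a training run and the list of alphas for a grid or random/Bayesian sampler, but the select-by-validation-error loop is the same.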
Posted 1 month ago
0 years
0 Lacs
Nilambūr
On-site
Position: AI/ML Trainer in Python
Type: Part-Time
Location: Nilambur, Malappuram

---

About the Role:
We are seeking an experienced AI/ML Trainer proficient in Python to join our team. This role is ideal for a candidate passionate about teaching, guiding learners, and helping them gain practical, hands-on skills in AI and machine learning. As a trainer, you will deliver engaging sessions, create learning materials, and provide mentorship to learners.

---

**Key Responsibilities:**

- **Deliver Training Sessions:** Conduct engaging and effective training sessions covering AI/ML fundamentals, Python programming, and advanced AI/ML concepts.
- **Curriculum Development:** Collaborate on designing course content, tutorials, and exercises for a comprehensive curriculum, including machine learning, deep learning, and AI applications.
- **Hands-On Guidance:** Guide learners through hands-on projects and coding exercises, providing support and insights to help them achieve practical skills.
- **Mentorship:** Provide personalized feedback, conduct one-on-one sessions as needed, and support learners throughout their educational journey.
- **Assessment & Evaluation:** Prepare assessments, quizzes, and other evaluation materials to track and assess student progress.
- **Stay Updated:** Keep up-to-date with advancements in AI/ML and Python, incorporating new trends and tools into the curriculum.

---

**Required Skills and Qualifications:**

- **Technical Expertise:** Strong proficiency in Python, with experience in machine learning libraries such as TensorFlow, PyTorch, scikit-learn, and Keras.
- **Experience in AI/ML:** Solid knowledge of AI and machine learning concepts, including supervised/unsupervised learning, neural networks, deep learning, natural language processing, and data pre-processing.
- **Teaching Experience:** Prior experience in teaching, tutoring, or training in a technical field is highly desirable.
  Ability to explain complex concepts in an easy-to-understand manner.
- **Project-Based Learning Approach:** Familiarity with project-based learning and ability to guide students through real-world projects.
- **Strong Communication Skills:** Excellent verbal and written communication skills, with a focus on clarity and student engagement.
- **Adaptability:** Ability to adapt teaching methods to suit diverse learning styles and skill levels.

---

**Preferred Qualifications:**

- Bachelor's degree in Computer Science, Data Science, Artificial Intelligence, or a related field.
- Industry experience in AI/ML projects or applications.
- Experience with data visualization tools like Matplotlib or Seaborn.
- Knowledge of deployment and production workflows for machine learning models (e.g., Flask, Docker, etc.).

---

Job Types: Part-time, Contractual / Temporary, Freelance
Contract length: 3 months
Pay: ₹500.00 per day
Expected hours: No more than 2 per week
Schedule: Day shift, Night shift
Work Location: In person
Application Deadline: 06/07/2025
Expected Start Date: 07/07/2025
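A classic hands-on exercise for training sessions like those described above is fitting a line from scratch by gradient descent. A minimal illustrative sketch (toy data invented for the exercise, not part of any official curriculum):

```python
def fit_line(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by batch gradient descent on mean squared error."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        # Gradients of MSE = (1/n) * sum((w*x + b - y)^2)
        grad_w = (2 / n) * sum((w * x + b - y) * x for x, y in zip(xs, ys))
        grad_b = (2 / n) * sum((w * x + b - y) for x, y in zip(xs, ys))
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# Toy data generated from y = 3x + 1 (no noise)
xs = [0, 1, 2, 3, 4]
ys = [1, 4, 7, 10, 13]
w, b = fit_line(xs, ys)
print(round(w, 2), round(b, 2))  # ≈ 3.0 and 1.0
```

Learners can then vary the learning rate and epoch count to see divergence and under-training first-hand before moving to scikit-learn or PyTorch equivalents.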
Posted 1 month ago
35.0 years
0 Lacs
Chennai
On-site
About us
One team. Global challenges. Infinite opportunities. At Viasat, we're on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We're looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.

What you'll do
- Parse and manipulate raw data leveraging tools including R, Python, and Tableau, with a strong preference for Python.
- Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story.
- Analyze large data sets while finding the truth in the data, and develop efficient processes for data analysis and simple, elegant visualization.
- Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat.
- Identify data gaps, research methods to fill these gaps, and provide recommendations.
- Gather and analyze facts and devise solutions to administrative problems.
- Monitor big data with Business Intelligence tools, simulation, modeling, and statistics.
- Build intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana).

The day-to-day
- Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat.
- Identify data gaps, research methods to fill these gaps, and provide recommendations.
- Gather and analyze facts and devise solutions to administrative problems.
- Monitor big data with Business Intelligence tools, simulation, modeling, and statistics.

What you'll need
- 3-4 years of SQL experience.
- 3-4 years of data analysis experience with an emphasis on reporting.
- 3-4 years of Python experience in data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, Matplotlib, Seaborn, Plotly, etc.).
- 6-8 years of dashboarding experience.
- Tableau/Power BI/Grafana experience, or equivalent experience with data visualization tools.
- Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution.
- Able to identify stakeholders, build relationships, and influence others to drive progress.
- Excellent analytical and problem-solving skills.
- Strong oral and written communication skills.
- Strong statistical background.

What will help you on the job
- Strong preference for personal projects and work in Python.
- Data visualization experience.
- Data science experience.

EEO Statement
Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this on-line application, please click here.
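Automated periodic reporting of the kind this role describes often reduces to grouping records by period and aggregating. A dependency-free sketch with hypothetical sales rows (a stand-in for the pandas/SQL pipelines the role actually uses):

```python
from collections import defaultdict

def monthly_report(rows):
    """Aggregate (date, region, revenue) rows into revenue per (month, region)."""
    totals = defaultdict(float)
    for date, region, revenue in rows:
        month = date[:7]              # "YYYY-MM-DD" -> "YYYY-MM"
        totals[(month, region)] += revenue
    return dict(totals)

# Hypothetical transaction rows
rows = [
    ("2025-01-03", "APAC", 120.0),
    ("2025-01-17", "APAC", 80.0),
    ("2025-01-20", "EMEA", 200.0),
    ("2025-02-02", "APAC", 50.0),
]
report = monthly_report(rows)
print(report[("2025-01", "APAC")])  # → 200.0
```

The same shape maps directly onto a pandas `groupby` or a SQL `GROUP BY`, which is what a scheduled reporting job would normally use.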
Posted 1 month ago
0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Experience: 15 to 20 years
Primary skills: Gen AI architecture, building Gen AI solutions, coding, AI/ML background, data engineering, Azure or AWS cloud.

Job Description:
The Generative Solutions Architect will be responsible for designing and implementing cutting-edge generative AI models and systems. You will collaborate with data scientists, engineers, product managers, and other stakeholders to develop innovative AI solutions for various applications, including natural language processing (NLP), computer vision, and multimodal learning. This role requires a deep understanding of AI/ML theory, architecture design, and hands-on expertise with the latest generative models.

Key Responsibilities:
- GenAI application conceptualization and design: Understand the use cases under consideration, conceptualize the application flow, understand the constraints, and design accordingly to get the most optimized results.
- Retrieval-Augmented Generation: Deep knowledge of developing and implementing applications using Retrieval-Augmented Generation (RAG)-based models, which combine the power of large language models (LLMs) with information retrieval techniques.
- Prompt Engineering: Be adept at prompt engineering and its various nuances, such as one-shot, few-shot, and chain-of-thought prompting; have hands-on knowledge of implementing agentic workflows and be aware of agentic AI concepts.
- NLP and Language Model Integration: Apply advanced NLP techniques to preprocess, analyze, and extract meaningful information from large textual datasets. Integrate and leverage large language models such as LLaMA 2/3, Mistral, or similar offline LLMs to address project-specific goals.
- Small LLMs / Tiny LLMs: Familiarity with and understanding of the usage of SLMs / tiny LLMs such as Phi-3, OpenELM, etc., their performance characteristics and usage requirements, and the nuances of how they can be consumed by use-case applications.
- Collaboration with Interdisciplinary Teams: Collaborate with cross-functional teams, including linguists, developers, and subject matter experts, to ensure seamless integration of language models into the project workflow.
- Text/Code Generation and Creative Applications: Explore creative applications of large language models, including text/code generation, summarization, and context-aware responses.

Skills & Tools
- Programming Languages: Proficiency in Python for data analysis, statistical modeling, and machine learning.
- Machine Learning Libraries: Hands-on experience with machine learning libraries such as scikit-learn, Hugging Face, TensorFlow, and PyTorch.
- Statistical Analysis: Strong understanding of statistical techniques and their application in data analysis.
- Data Manipulation and Analysis: Expertise in data manipulation and analysis using pandas and NumPy.
- Database Technologies: Familiarity with vector databases like ChromaDB, Pinecone, etc., plus SQL and NoSQL databases, and experience working with relational and non-relational databases.
- Data Visualization Tools: Proficiency in data visualization tools such as Tableau, Matplotlib, or Seaborn.
- Cloud: Familiarity with cloud platforms (AWS, Google Cloud, Azure) for model deployment and scaling.
- Communication Skills: Excellent communication skills, with the ability to convey technical concepts to non-technical audiences.
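The RAG pattern called out in this posting hinges on retrieving the most relevant documents before prompting the model. A toy retrieval step using bag-of-words cosine similarity in pure Python (a deliberately simplified stand-in for embeddings plus a vector database such as ChromaDB or Pinecone; documents are invented):

```python
import math
from collections import Counter

def cosine(a, b):
    """Cosine similarity of two bag-of-words Counters."""
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, docs, k=1):
    """Return the k documents most similar to the query."""
    q = Counter(query.lower().split())
    scored = sorted(docs,
                    key=lambda d: cosine(q, Counter(d.lower().split())),
                    reverse=True)
    return scored[:k]

docs = [
    "reset your password from the account settings page",
    "shipping takes three to five business days",
    "invoices are emailed at the end of each month",
]
context = retrieve("how do I reset my password", docs)[0]
# The retrieved context is then injected into the LLM prompt:
prompt = f"Answer using only this context:\n{context}\n\nQuestion: how do I reset my password"
print(context)
```

A production RAG stack replaces the word-count vectors with learned embeddings and the sort with an approximate nearest-neighbour index, but the retrieve-then-prompt flow is identical.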
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are currently seeking a Senior Python Developer to join our team for an exciting project that involves designing and building RESTful APIs for seamless communication between different components. In this role, you will be responsible for developing and maintaining a microservices architecture using containerization tools such as Docker, AWS ECS, and ECR. Additionally, you will be required to demonstrate solutions to cross-functional teams and take ownership of the scope of work for successful project delivery.

Responsibilities
- Develop and maintain a microservices architecture using containerization tools such as Docker, AWS ECS, and ECR
- Design and build RESTful APIs for seamless communication between different components
- Present and organize demo sessions to demonstrate solutions to cross-functional teams
- Collaborate with cross-functional teams for successful project delivery
- Take ownership of the scope of work for successful project delivery
- Ensure consistency and scalability by packaging applications and their dependencies into containers

Requirements
- 5-8 years of experience in software development using Python
- Proficiency in AWS services such as Lambda, DynamoDB, CloudFormation, and IAM
- Strong experience in designing and building RESTful APIs
- Expertise in microservices architecture and containerization using Docker, AWS ECS, and ECR
- Ability to present and organize demo sessions to demonstrate solutions
- Excellent communication skills and ability to collaborate with cross-functional teams
- Strong sense of responsibility and ownership over the scope of work

Nice to have
- Experience with DevOps tools such as Jenkins and GitLab for continuous integration and deployment
- Familiarity with NoSQL databases such as MongoDB and Cassandra
- Experience in data analysis and visualization using Python libraries such as pandas and Matplotlib
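RESTful routing as described above can be reduced to a dispatch table mapping (method, path) pairs to handlers. A framework-free pure-Python sketch with a hypothetical `items` resource (a conceptual stand-in for a real Flask/FastAPI service, not this project's actual API):

```python
import json

ITEMS = {}          # in-memory store: id -> item
ROUTES = {}         # (method, path) -> handler

def route(method, path):
    """Register a handler for an HTTP method and path."""
    def register(fn):
        ROUTES[(method, path)] = fn
        return fn
    return register

@route("POST", "/items")
def create_item(body):
    item_id = len(ITEMS) + 1
    ITEMS[item_id] = body
    return 201, {"id": item_id, **body}

@route("GET", "/items")
def list_items(body=None):
    return 200, [{"id": i, **item} for i, item in ITEMS.items()]

def dispatch(method, path, payload=None):
    """Look up the handler; 404 if no route matches."""
    handler = ROUTES.get((method, path))
    if handler is None:
        return 404, {"error": "not found"}
    return handler(json.loads(payload) if payload else None)

status, created = dispatch("POST", "/items", '{"name": "widget"}')
print(status, created)  # 201 {'id': 1, 'name': 'widget'}
```

Frameworks add parameterized paths, validation, and real HTTP handling on top, but the route-table idea is the core of every REST service.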
Posted 1 month ago
5.0 - 8.0 years
0 Lacs
Pune, Maharashtra, India
On-site
We are currently seeking a Senior Python Developer to join our team for an exciting project that involves designing and building RESTful APIs for seamless communication between different components. In this role, you will be responsible for developing and maintaining a microservices architecture using containerization tools such as Docker, AWS ECS, and ECR. Additionally, you will be required to demonstrate solutions to cross-functional teams and take ownership of the scope of work for successful project delivery.

Responsibilities
- Develop and maintain a microservices architecture using containerization tools such as Docker, AWS ECS, and ECR
- Design and build RESTful APIs for seamless communication between different components
- Present and organize demo sessions to demonstrate solutions to cross-functional teams
- Collaborate with cross-functional teams for successful project delivery
- Take ownership of the scope of work for successful project delivery
- Ensure consistency and scalability by packaging applications and their dependencies into containers

Requirements
- 5-8 years of experience in software development using Python
- Proficiency in AWS services such as Lambda, DynamoDB, CloudFormation, and IAM
- Strong experience in designing and building RESTful APIs
- Expertise in microservices architecture and containerization using Docker, AWS ECS, and ECR
- Ability to present and organize demo sessions to demonstrate solutions
- Excellent communication skills and ability to collaborate with cross-functional teams
- Strong sense of responsibility and ownership over the scope of work

Nice to have
- Experience with DevOps tools such as Jenkins and GitLab for continuous integration and deployment
- Familiarity with NoSQL databases such as MongoDB and Cassandra
- Experience in data analysis and visualization using Python libraries such as pandas and Matplotlib
Posted 1 month ago
4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Senior ML Engineer
Experience: Minimum 4 to 8+ years of ML development experience in a product-based company
Location: Bangalore (Onsite)

Why should you choose us?
Rakuten Symphony is a Rakuten Group company that provides global B2B services for the mobile telco industry and enables next-generation, cloud-based, international mobile services. Building on the technology Rakuten used to launch Japan's newest mobile network, we are taking our mobile offering global. To support our ambitions to provide an innovative cloud-native telco platform for our customers, Rakuten Symphony is looking to recruit and develop top talent from around the globe. We are looking for individuals to join our team across all functional areas of our business – from sales to engineering, support functions to product development. Let's build the future of mobile telecommunications together!

Required Skills and Expertise:
- Must have experience working in a product-based company.
- Build, train, and optimize deep learning models with TensorFlow, Keras, PyTorch, and Transformers.
- Manipulate and analyse large-scale datasets using Python, pandas, NumPy, and Dask.
- Apply advanced fine-tuning techniques (full fine-tuning, PEFT) and strategies to large language and vision models.
- Implement and evaluate classical machine learning algorithms using scikit-learn, statsmodels, XGBoost, etc.
- Develop and deploy scalable APIs for ML models using FastAPI.
- Perform data visualization and exploratory data analysis with Matplotlib, Seaborn, Plotly, and Bokeh.
- Collaborate with cross-functional teams to deliver end-to-end ML solutions.
- Deploy machine learning models for diverse business applications, both cloud-native and on-premises.
- Hands-on experience with Docker for containerization and Kubernetes for orchestration and scalable deployment of ML models.
- Familiarity with CI/CD pipelines and best practices for deploying and monitoring ML models in production.
- Stay current with the latest advancements in machine learning, deep learning, and AI.

Our commitment to you:
Rakuten Group's mission is to contribute to society by creating value through innovation and entrepreneurship. By providing high-quality services that help our users and partners grow, we aim to advance and enrich society. To fulfill our role as a Global Innovation Company, we are committed to maximizing both corporate and shareholder value.

RAKUTEN SHUGI PRINCIPLES:
Our worldwide practices describe specific behaviours that make Rakuten unique and united across the world. We expect Rakuten employees to model these 5 Shugi Principles of Success.
- Always improve, always advance. Only be satisfied with complete success – Kaizen.
- Be passionately professional. Take an uncompromising approach to your work and be determined to be the best.
- Hypothesize – Practice – Validate – Shikumika. Use the Rakuten Cycle to succeed in unknown territory.
- Maximize Customer Satisfaction. The greatest satisfaction for workers in a service industry is to see their customers smile.
- Speed!! Speed!! Speed!! Always be conscious of time. Take charge, set clear goals, and engage your team.
Posted 1 month ago
0.0 - 3.0 years
0 Lacs
BTM Layout, Bengaluru, Karnataka
On-site
Job Title: Python Developer – Machine Learning & AI (2–3 Years Experience)

Job Summary:
We are seeking a skilled and motivated Python Developer with 2 to 3 years of experience in Machine Learning and Artificial Intelligence. The ideal candidate will have hands-on experience in developing, training, and deploying machine learning models, and should be proficient in Python and associated data science libraries. You will work with our data science and engineering teams to build intelligent solutions that solve real-world problems.

Key Responsibilities:
- Develop and maintain machine learning models using Python.
- Work on AI-driven applications, including predictive modeling, natural language processing, and computer vision (based on project requirements).
- Collaborate with cross-functional teams to understand business requirements and translate them into ML solutions.
- Preprocess, clean, and transform data for training and evaluation.
- Perform model training, tuning, evaluation, and deployment using tools like scikit-learn, TensorFlow, or PyTorch.
- Write modular, efficient, and testable code.
- Document processes, models, and experiments clearly for team use and future reference.
- Stay updated with the latest trends and advancements in AI and machine learning.

Required Skills:
- 2–3 years of hands-on experience with Python programming.
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning).
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn.
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Good understanding of data structures and algorithms.
- Experience with model evaluation techniques and performance metrics.
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus.
- Strong analytical and problem-solving skills.
Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Mathematics, or a related field.
- Experience with deploying ML models using Flask, FastAPI, or Docker.
- Knowledge of MLOps and model lifecycle management is an advantage.
- Understanding of NLP or computer vision is a plus.

Job Type: Full-time
Pay: Up to ₹700,000.00 per year
Benefits: Health insurance
Schedule: Day shift, Monday to Friday

Ability to commute/relocate:
- BTM Layout, Bengaluru, Karnataka: Reliably commute or planning to relocate before starting work (Required)

Application Question(s):
- Solid understanding of machine learning algorithms (supervised, unsupervised, and reinforcement learning).
- Experience with libraries such as scikit-learn, pandas, NumPy, Matplotlib, and Seaborn.
- Exposure to deep learning frameworks like TensorFlow, Keras, or PyTorch.
- Familiarity with Jupyter Notebooks, version control (Git), and cloud platforms (AWS, GCP, or Azure) is a plus.
- Experience with deploying ML models using Flask, FastAPI, or Docker.
- What is your CTC (in LPA)?
- What is your expected CTC (in LPA)?
- What is your notice period?

Location: BTM Layout, Bengaluru, Karnataka (Required)
Work Location: In person
Application Deadline: 06/07/2025
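The model evaluation metrics this posting asks about can be computed directly from confusion-matrix counts. A small pure-Python sketch with made-up predictions (in practice scikit-learn's `classification_report` does this for you):

```python
def precision_recall_f1(y_true, y_pred, positive=1):
    """Compute precision, recall, and F1 for a binary classifier."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return precision, recall, f1

# Hypothetical labels: 3 true positives, 1 false positive, 1 false negative
y_true = [1, 1, 1, 0, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]
p, r, f = precision_recall_f1(y_true, y_pred)
print(p, r, f)  # 0.75 0.75 0.75
```

Precision answers "of the items I flagged, how many were right?", recall answers "of the items I should have flagged, how many did I catch?", and F1 is their harmonic mean.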
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
About Company Papigen is a fast-growing global technology services company, delivering innovative digital solutions through deep industry experience and cutting-edge expertise. We specialize in technology transformation, enterprise modernization, and dynamic areas like Cloud, Big Data, Java, React, DevOps, and more. Our client-centric approach combines consulting, engineering, and data science to help businesses evolve and scale efficiently. About The Role We are looking for a Python Full Stack Developer with strong Azure DevOps and AI integration expertise to support the automation of Kanban workflows and real-time analytics in a scaled agile environment. You will design end-to-end automation for case management, build performance dashboards, and integrate AI-powered solutions using Azure OpenAI, Dataverse, and Power BI. The role requires a deep understanding of Python development, experience with Azure services, and the ability to collaborate with cross-functional teams to deliver high-quality solutions. 
Key Responsibilities
- Develop Python applications to automate Kanban case management integrated with Azure DevOps (ADO)
- Build and maintain REST APIs with access control for project and workload metrics
- Integrate Azure OpenAI services to automate delay analysis and generate custom summaries
- Design interactive dashboards using Python libraries (Pandas, Plotly, Dash) and Power BI
- Store, manage, and query data using Dataverse for workflow reporting and updates
- Leverage Microsoft Graph API and Azure SDKs for system integration and access control
- Collaborate with IT security, PMs, and engineering teams to gather requirements and deliver automation solutions
- Continuously improve security workflows, report generation, and system insights using AI and data modeling

Required Skills & Experience
- 5+ years of Python development experience with FastAPI or Flask
- Hands-on experience with Azure DevOps, including its REST APIs
- Proficiency in Azure OpenAI, Azure SDKs, and Microsoft Graph API
- Strong understanding of RBAC (Role-Based Access Control) and permissions management
- Experience with Power BI, Dataverse, and Python data visualization libraries (Matplotlib, Plotly, Dash)
- Prior experience in Agile teams and familiarity with Scrum/Kanban workflows
- Excellent communication and documentation skills; able to explain technical concepts to stakeholders

Benefits And Perks
- Opportunity to work with leading global clients
- Flexible work arrangements with remote options
- Exposure to modern technology stacks and tools
- Supportive and collaborative team environment
- Continuous learning and career development opportunities

Skills: pandas, azure sdks, microsoft graph api, rest apis, ai integration, microsoft power bi, power bi, dash, python, azure devops, fastapi, dataverse, rbac, azure openai, plotly, flask, matplotlib
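To illustrate the kind of Kanban-metrics aggregation that would feed the dashboards this role describes, here is a hedged pandas sketch. The record fields (`project`, `activated`, `closed`) are invented for illustration and are not the actual Azure DevOps schema.

```python
# Hypothetical example: per-project cycle-time summary from work-item records,
# the sort of table a Dash or Power BI dashboard would visualize.
import pandas as pd


def cycle_time_summary(work_items):
    """work_items: list of dicts with 'project', 'activated', 'closed' date strings."""
    df = pd.DataFrame(work_items)
    df["activated"] = pd.to_datetime(df["activated"])
    df["closed"] = pd.to_datetime(df["closed"])
    df["cycle_days"] = (df["closed"] - df["activated"]).dt.days
    return df.groupby("project")["cycle_days"].agg(["mean", "max", "count"])


summary = cycle_time_summary([
    {"project": "LOB-A", "activated": "2025-01-01", "closed": "2025-01-05"},
    {"project": "LOB-A", "activated": "2025-01-02", "closed": "2025-01-10"},
    {"project": "LOB-B", "activated": "2025-01-03", "closed": "2025-01-04"},
])
print(summary)
```

In a real pipeline the input list would come from the Azure DevOps REST API rather than hard-coded dicts, and the summary frame would be written to Dataverse or rendered in Plotly/Dash.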
Posted 1 month ago
5.0 years
0 Lacs
India
Remote
For an international project in Chennai, we are urgently looking for a Full Remote Python Full Stack Developer. We are looking for a motivated contractor. Candidates need to be fluent in English.

Tasks and responsibilities:
- Write Python programs to deliver an automated ADO Kanban case management solution, dashboards, and reports;
- Develop and integrate REST APIs with access control to provide case status and reports specific to LOBs, managers, etc.;
- Utilize Azure OpenAI frameworks to enhance delay analysis, the vulnerability dashboard, and reporting;
- Build dashboards using Python libraries (e.g., Pandas, Matplotlib, Plotly) to track case status from Kanban boards, delay per project/LOB, etc.; use Dataverse and Power BI for data modelling and reporting as well;
- Collaboration and Support: work closely with project managers, IT security staff, and system administrators to gather requirements, understand business needs, and develop solutions that improve security processes;
- Continuously evaluate and improve the Kanban case management solution, leveraging new technologies and techniques, particularly AI and automation, to improve efficiency and effectiveness;

Profile:
- Bachelor's or Master's degree;
- 5+ years of hands-on experience with Python, particularly in frameworks like FastAPI and Flask, and experience using Azure OpenAI frameworks;
- Strong understanding of access control models such as Role-Based Access Control (RBAC);
- Expertise working in Azure DevOps and its REST APIs for customizing it;
- Proficiency with Azure cloud services and Microsoft Graph API, and experience integrating Python applications;
- Experience in Dataverse, Power BI, and reporting libraries in Python (Pandas, Matplotlib, Plotly, Dash) to build dashboards and reports;
- Ability to collaborate with various stakeholders, explain complex technical solutions, and deliver high-quality solutions on time;
- Experience working in Agile environments and familiarity with Scrum and Kanban methodologies for delivering the solutions;
- Fluent in English;
Posted 1 month ago
6.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Security represents the most critical priorities for our customers in a world awash in digital threats, regulatory scrutiny, and estate complexity. Microsoft Security aspires to make the world a safer place for all. We want to reshape security and empower every user, customer, and developer with a security cloud that protects them with end-to-end, simplified solutions. The Microsoft Security organization accelerates Microsoft’s mission and bold ambitions to ensure that our company and industry is securing digital technology platforms, devices, and clouds in our customers’ heterogeneous environments, as well as ensuring the security of our own internal estate. Our culture is centered on embracing a growth mindset, a theme of inspiring excellence, and encouraging teams and leaders to bring their best each day. In doing so, we create life-changing innovations that impact billions of lives around the world.

The Defender Experts (DEX) Research team is at the forefront of Microsoft’s threat protection strategy, combining world-class hunting expertise with AI-driven analytics to protect customers from advanced cyberattacks. Our mission is to move protection left, disrupting threats early, before damage occurs, by transforming raw signals into intelligence that powers detection, disruption, and customer trust.

We’re looking for a passionate and curious Data Scientist to join this high-impact team. In this role, you'll partner with researchers, hunters, and detection engineers to explore attacker behavior, operationalize entity graphs, and develop statistical and ML-driven models that enhance DEX’s detection efficacy. Your work will directly feed into real-time protections used by thousands of enterprises and shape the future of Microsoft Security. This is an opportunity to work on problems that matter, with cutting-edge data, a highly collaborative team, and the scale of Microsoft behind you.
Microsoft’s mission is to empower every person and every organization on the planet to achieve more. As employees we come together with a growth mindset, innovate to empower others, and collaborate to realize our shared goals. Each day we build on our values of respect, integrity, and accountability to create a culture of inclusion where everyone can thrive at work and beyond.

Responsibilities
- Understand complex cybersecurity and business problems, translate them into well-defined data science problems, and build scalable solutions.
- Design and build robust, large-scale graph structures to model security entities, behaviors, and relationships.
- Develop and deploy scalable, production-grade AI/ML systems and intelligent agents for real-time threat detection, classification, and response.
- Collaborate closely with Security Research teams to integrate domain knowledge into data science workflows and enrich model development.
- Drive the end-to-end ML lifecycle: from data ingestion and feature engineering to model development, evaluation, and deployment.
- Work with large-scale graph data: create, query, and process it efficiently to extract insights and power models.
- Lead initiatives involving Graph ML, Generative AI, and agent-based systems, driving innovation across threat detection, risk propagation, and incident response.
- Collaborate closely with engineering and product teams to integrate solutions into production platforms.
- Mentor junior team members and contribute to strategic decisions around model architecture, evaluation, and deployment.
Qualifications Bachelor’s or Master’s degree in Computer Science, Statistics, Applied Mathematics, Data Science, or a related quantitative field 6+ years of experience applying data science or machine learning in a real-world setting, preferably in security, fraud, risk, or anomaly detection Proficiency in Python and/or R, with hands-on experience in data manipulation (e.g., Pandas, NumPy), modeling (e.g., scikit-learn, XGBoost), and visualization (e.g., matplotlib, seaborn) Strong foundation in statistics, probability, and applied machine learning techniques Experience working with large-scale datasets, telemetry, or graph-structured data Ability to clearly communicate technical insights and influence cross-disciplinary teams Demonstrated ability to work independently, take ownership of problems, and drive solutions end-to-end Microsoft is an equal opportunity employer. Consistent with applicable law, all qualified applicants will receive consideration for employment without regard to age, ancestry, citizenship, color, family or medical care leave, gender identity or expression, genetic information, immigration status, marital status, medical condition, national origin, physical or mental disability, political affiliation, protected veteran or military status, race, ethnicity, religion, sex (including pregnancy), sexual orientation, or any other characteristic protected by applicable local laws, regulations and ordinances. If you need assistance and/or a reasonable accommodation due to a disability during the application process, read more about requesting accommodations.
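For readers unfamiliar with the anomaly-detection side of this work, here is a generic unsupervised sketch using scikit-learn's IsolationForest, a common baseline for scoring unusual telemetry. This is an illustration of the technique class, not Microsoft's actual detection method; the data is synthetic.

```python
# Unsupervised anomaly scoring: fit on mixed data, flag the rare extreme points.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
normal = rng.normal(0, 1, size=(500, 4))    # baseline behavior
outliers = rng.normal(8, 1, size=(5, 4))    # injected anomalies, far from baseline
X = np.vstack([normal, outliers])

model = IsolationForest(contamination=0.01, random_state=42).fit(X)
labels = model.predict(X)                   # -1 = anomaly, 1 = normal

# The injected points sit last in X, so check how many were flagged.
flagged = int((labels[-5:] == -1).sum())
print("injected points flagged:", flagged)
```

In a production setting the feature matrix would come from entity or behavior aggregates rather than Gaussian draws, and anomaly scores would feed downstream triage rather than a binary label.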
Posted 1 month ago
1.0 - 3.0 years
10 - 20 Lacs
Chandigarh
Work from Office
About the Role:
We are seeking a highly motivated AI/ML Engineer with 1-3 years of experience to join a fast-growing product-based team in Chandigarh. You will be part of a talented group working on building AI-powered solutions, predictive models, and intelligent automation tools for real-world business applications.

Key Responsibilities:
- Design and implement machine learning models and algorithms
- Work with data teams to preprocess and clean datasets
- Train, evaluate, and fine-tune models for classification, regression, and clustering problems
- Collaborate with product and engineering teams to integrate ML models into production
- Conduct research on state-of-the-art AI/ML trends and apply best practices
- Document model performance, experiments, and key metrics

Required Skills:
- 1-3 years of hands-on experience in AI/ML model development
- Proficient in Python and ML libraries (e.g., scikit-learn, TensorFlow, Keras, PyTorch)
- Solid understanding of data structures, algorithms, and ML concepts
- Experience with Pandas, NumPy, Matplotlib, etc.
- Good understanding of model evaluation, overfitting/underfitting, and cross-validation
- Strong problem-solving and communication skills

Good to Have:
- Experience with NLP, Computer Vision, or Deep Learning
- Exposure to MLOps or cloud platforms (AWS, GCP, Azure)
- Familiarity with Flask or FastAPI for deploying ML models
- Version control tools like Git

Why Join Us?
- Work on real-world AI applications in a fast-paced environment
- Collaborate with a tech-driven and passionate team
- Excellent growth opportunities and career progression
- Competitive compensation and flexible work culture
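Since the posting calls out model evaluation and cross-validation, here is a minimal sketch of k-fold cross-validation with scikit-learn, which gives a less optimistic performance estimate than a single train/test split. It is a generic illustration with a synthetic dataset, not project code.

```python
# 5-fold cross-validation: train on 4 folds, score on the held-out fold, repeat.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=300, n_features=8, random_state=0)
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)
print("fold accuracies:", scores.round(3), "mean:", round(scores.mean(), 3))
```

A large gap between training accuracy and the cross-validated mean is the usual first signal of overfitting.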
Posted 1 month ago
0 years
0 Lacs
India
Remote
Location: Remote (Work from Home)
Duration: 3–6 Months
Stipend: Performance-based, up to 15,000
Company: Zeno Talent
Department: Artificial Intelligence & Data Science
PPO Opportunity: Yes – High-performing interns will be offered a Pre-Placement Offer (PPO) for a full-time role

About Zeno Talent:
Zeno Talent is a dynamic IT services and consulting company that delivers advanced technology solutions across domains like Data Science, Artificial Intelligence, ERP, and IT Consulting. Our mission is to connect talent with opportunity while solving real-world business problems using cutting-edge technologies. We value innovation, learning, and professional growth.

Job Description:
We are seeking a passionate and motivated AI Intern (Remote) to join our Artificial Intelligence & Data Science team. You will work on real-time AI/ML projects, gaining hands-on experience and professional mentorship. This internship is ideal for someone looking to launch their career in AI and grow within a supportive, fast-paced environment. Outstanding interns will receive a Pre-Placement Offer (PPO) for a full-time role at Zeno Talent.
Key Responsibilities: Assist in building, training, and fine-tuning machine learning models Clean, preprocess, and analyze datasets from real-world applications Support development of AI solutions using Python and relevant libraries Collaborate with mentors and team members to contribute to live projects Document technical work and report progress regularly Research and stay updated on new AI trends and tools Eligibility & Skills: Currently pursuing or recently completed a degree in Computer Science, Data Science, AI, or related field Solid foundation in Python and libraries like NumPy, Pandas, Scikit-learn Basic understanding of machine learning algorithms Familiarity with data visualization tools (e.g., Matplotlib, Seaborn) Strong problem-solving and analytical skills Willingness to learn, adapt, and take initiative in a remote team environment Bonus (Good to Have): Experience with Git and GitHub Exposure to NLP, deep learning, or computer vision Participation in AI projects, competitions, or hackathons What You’ll Gain: Real-world experience working on live AI projects One-on-one mentorship from experienced professionals Letter of Recommendation & Internship Certificate PPO (Pre-Placement Offer) opportunity for top performers Career guidance and resume/project review sessions
Posted 1 month ago
10.0 - 15.0 years
30 - 35 Lacs
Hyderabad
Work from Office
Define, Design, and Build an optimal data pipeline architecture to collect data from a variety of sources, cleanse, and organize data in SQL & NoSQL destinations (ELT & ETL Processes). Define and Build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, Design, and Build Executive dashboards and reports catalogs to serve decision-making and insight generation needs. Provide inputs to help keep data separated and secure across data centers on-prem and private and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load process and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCA. 
Experience with a mix of the following Data Engineering Technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL - Postgres, MySQL, MS SQL Server
- Azure - ADF, Synapse Analytics, SQL Server, ADLS G2
- AWS - Redshift, EMR cluster, S3

Experience with a mix of the following Data Analytics and Visualization toolsets:
- SQL, PowerBI, Tableau, Looker, Python, R
- Python libraries - Pandas, Scikit-learn, Seaborn, Matplotlib, TF, Stat-Models, PySpark, Spark-SQL
- R, SAS, Julia, SPSS
- Azure - Synapse Analytics, Azure ML Studio, Azure AutoML
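To make the cleanse-and-load (ELT) bullets above concrete, here is a minimal standard-library sketch of one such step: validating raw records and writing the survivors into a SQL destination. Table and column names are invented for illustration; a production pipeline would target Postgres, Snowflake, or similar rather than in-memory SQLite.

```python
# Cleanse raw rows (trim, validate types, drop bad records) and load into SQL.
import sqlite3

raw_rows = [
    ("order-1", " 100.5 "),       # needs trimming
    ("order-2", "not-a-number"),  # invalid amount, should be dropped
    ("order-3", "42"),
]


def cleanse(rows):
    for order_id, amount in rows:
        try:
            yield order_id, float(amount.strip())
        except ValueError:
            continue  # drop rows that fail validation; real pipelines log these


conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id TEXT PRIMARY KEY, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", cleanse(raw_rows))
count, total = conn.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone()
print(count, "rows loaded, total =", total)
```

The same validate-then-load shape scales up in Spark or ADF: the rejection path becomes a quarantine table and the load becomes a bulk write, but the logic is unchanged.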
Posted 1 month ago
1.0 - 2.0 years
3 - 6 Lacs
Hyderabad
Work from Office
Define, Design, and Build an optimal data pipeline architecture to collect data from a variety of sources, cleanse, and organize data in SQL & NoSQL destinations (ELT & ETL Processes). Define and Build business use case-specific data models that can be consumed by Data Scientists and Data Analysts to conduct discovery and drive business insights and patterns. Identify, design, and implement internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS big data technologies. Build and deploy analytical models and tools that utilize the data pipeline to provide actionable insights into customer acquisition, operational efficiency, and other key business performance metrics. Work with stakeholders including the Executive, Product, Data, and Design teams to assist with data-related technical issues and support their data infrastructure needs. Define, Design, and Build Executive dashboards and reports catalogs to serve decision-making and insight generation needs. Provide inputs to help keep data separated and secure across data centers - on-prem and private and public cloud environments. Create data tools for analytics and data science team members that assist them in building and optimizing our product into an innovative industry leader. Work with data and analytics experts to strive for greater functionality in our data systems. Implement scheduled data load process and maintain and manage the data pipelines. Troubleshoot, investigate, and fix failed data pipelines and prepare RCA. 
Experience with a mix of the following Data Engineering Technologies:
- Python, Spark, Snowflake, Databricks, Hadoop (CDH), Hive, Sqoop, Oozie
- SQL - Postgres, MySQL, MS SQL Server
- Azure - ADF, Synapse Analytics, SQL Server, ADLS G2
- AWS - Redshift, EMR cluster, S3

Experience with a mix of the following Data Analytics and Visualization toolsets:
- SQL, PowerBI, Tableau, Looker, Python, R
- Python libraries - Pandas, Scikit-learn, Seaborn, Matplotlib, TF, Stat-Models, PySpark, Spark-SQL
- R, SAS, Julia, SPSS
- Azure - Synapse Analytics, Azure ML Studio, Azure AutoML
Posted 1 month ago
15.0 - 20.0 years
9 - 14 Lacs
Bengaluru
Work from Office
Project Role: AI / ML Engineer
Project Role Description: Develops applications and systems that utilize AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Must be able to apply GenAI models as part of the solution. Could also include, but is not limited to, deep learning, neural networks, chatbots, and image processing.
Must have skills: Large Language Models
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years full time education

Summary: As an AI / ML Engineer, you will engage in the development of applications and systems that leverage artificial intelligence tools and cloud AI services. Your typical day will involve designing and implementing production-ready solutions, ensuring that they meet quality standards. You will work with various AI models, including generative AI, deep learning, and neural networks, while also exploring innovative applications such as chatbots and image processing. Collaboration with cross-functional teams will be essential to integrate these advanced technologies into effective solutions that address real-world challenges.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge.
- Continuously evaluate and improve existing processes and systems to optimize performance.

Professional & Technical Skills:
- Must Have Skills: Proficiency in Large Language Models.
- Good To Have Skills: Experience with cloud-based AI services.
- Strong understanding of deep learning frameworks such as TensorFlow or PyTorch.
- Familiarity with natural language processing techniques and tools.
- Experience in developing and deploying chatbots and conversational agents.
Additional Information:
- The candidate should have a minimum of 5 years of experience in Large Language Models.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Qualification: 15 years full time education
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
As an Associate Data Scientist at IBM, you will work to solve business problems using leading edge and open-source tools such as Python, R, and TensorFlow, combined with IBM tools and our AI application suites. You will prepare, analyze, and understand data to deliver insight, predict emerging trends, and provide recommendations to stakeholders.

In Your Role, You May Be Responsible For:
- Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects
- Writing programs to cleanse and integrate data in an efficient and reusable manner
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors
- Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions
- Evaluating modelling results and communicating the results to technical and non-technical audiences

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
- Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/PoCs.
- Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides.

Preferred Technical And Professional Experience
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face.
- Understanding of the usage of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms.
- Experience and working knowledge in COBOL & Java would be preferred.
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
An AI Data Scientist at IBM is not just a job title - it’s a mindset. You’ll leverage the watsonx, AWS SageMaker, and Azure OpenAI platforms to co-create AI value with clients, focusing on technology patterns to enhance repeatability and delight clients. We are seeking an experienced and innovative AI Data Scientist specialized in foundation models and large language models. In this role, you will be responsible for architecting and delivering AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. You will work closely with customers, product managers, and development teams to understand business requirements and design custom AI solutions that address complex challenges. Experience with tools like GitHub Copilot, Amazon CodeWhisperer, etc. is desirable. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive.

Day-to-Day Duties
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/PoCs.
- Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned.
Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge sharing initiatives and mentor new team members.
- Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face.
- Understanding of the usage of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms (e.g., AWS, Azure, GCP, Kubernetes) and related services is a plus.
- Experience and working knowledge in COBOL & Java would be preferred.
- Experience in code generation, code matching, and code translation leveraging LLM capabilities would be a big plus (e.g., Amazon CodeWhisperer, GitHub Copilot, etc.).

Soft Skills:
- Excellent interpersonal and communication skills; engage with stakeholders for analysis and implementation.
- Commitment to continuous learning and staying updated with advancements in the field of AI.
- Growth mindset: demonstrate a growth mindset to understand clients' business processes and challenges.
- Experience in Python and PySpark will be an added advantage.

Preferred Technical And Professional Experience
- Proven experience in designing and delivering AI solutions, with a focus on foundation models, large language models, exposure to open source, or similar technologies.
- Experience in natural language processing (NLP) and text analytics is highly desirable.
- Understanding of machine learning and deep learning algorithms.
Strong track record in scientific publications or open-source communities Experience in full AI project lifecycle, from research and prototyping to deployment in production environments
Posted 1 month ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
In your role, you may be responsible for:
- Implementing and validating predictive and prescriptive models, and creating and maintaining statistical models with a focus on big data, incorporating machine learning techniques in your projects
- Writing programs to cleanse and integrate data in an efficient and reusable manner
- Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours
- Communicating with internal and external clients to understand and define business needs and appropriate modelling techniques to provide analytical solutions
- Evaluating modelling results and communicating the results to technical and non-technical audiences

Preferred Education: Master's Degree

Required Technical And Professional Expertise
- Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations.
- Help in showcasing the ability of a Gen AI code assistant to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/PoCs.
- Document solution architectures, design decisions, implementation details, and lessons learned.
- Create technical documentation, white papers, and best practice guides.

Preferred Technical And Professional Experience
- Strong programming skills, with proficiency in Python and experience with AI frameworks such as TensorFlow, PyTorch, Keras, or Hugging Face.
- Understanding of the usage of libraries such as scikit-learn, Pandas, Matplotlib, etc.
- Familiarity with cloud platforms.
- Experience and working knowledge in COBOL & Java would be preferred.
- Experience in Python and PySpark will be an added advantage.
Posted 1 month ago
35.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
About Us
One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.

What You'll Do
- Parse and manipulate raw data leveraging tools including R, Python, and Tableau, with a strong preference for Python
- Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story
- Analyze large data sets, while finding the truth in data, and develop efficient processes for data analysis and simple, elegant visualization
- Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
- Identify data gaps, research methods to fill these gaps, and provide recommendations
- Gather and analyze facts and devise solutions to administrative problems
- Monitor big data with Business Intelligence tools, simulation, modeling, and statistics
- Build intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana)

The Day-to-Day
- Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
- Identify data gaps, research methods to fill these gaps, and provide recommendations
- Gather and analyze facts and devise solutions to administrative problems
- Monitor big data with Business Intelligence tools, simulation, modeling, and statistics

What You'll Need
- 3-4 years of SQL experience
- 3-4 years of data analysis experience with an emphasis on reporting
- 3-4 years of Python experience in data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly)
- 6-8 years of dashboarding experience
Tableau/Power BI/Grafana experience or equivalent with data visualization tools Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution Able to identify stakeholders, build relationships, and influence others to drive progress Excellent analytical and problem solving skills Strong oral and written communication skills Strong statistical background What Will Help You On The Job Strong preference for personal projects and work in Python Data Visualization experience Data Science experience EEO Statement Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this on-line application, please click here.
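The posting above asks for Python experience in data cleansing and automated monthly reporting with pandas. As a rough illustration of that kind of work (the column names, function name, and data below are invented for this sketch, not taken from the posting), a small reporting helper might look like:

```python
import pandas as pd

def build_monthly_report(raw: pd.DataFrame) -> pd.DataFrame:
    """Clean raw usage records and aggregate them into a monthly report.

    Hypothetical schema for illustration: account_id, date, usage_gb.
    """
    df = raw.copy()
    # Drop rows missing the key identifier; they cannot be attributed.
    df = df.dropna(subset=["account_id"])
    # Normalize types: parse dates, fill missing usage with 0.
    df["date"] = pd.to_datetime(df["date"])
    df["usage_gb"] = df["usage_gb"].fillna(0)
    # Bucket each record by calendar month, then sum per account.
    df["month"] = df["date"].dt.to_period("M").astype(str)
    return df.groupby(["account_id", "month"], as_index=False)["usage_gb"].sum()

records = pd.DataFrame({
    "account_id": ["a1", "a1", None, "a2"],
    "date": ["2024-01-05", "2024-01-20", "2024-01-21", "2024-02-03"],
    "usage_gb": [1.5, None, 2.0, 3.0],
})
report = build_monthly_report(records)
print(report)
```

A job like the daily or quarterly reporting mentioned in the listing would typically wrap a function like this in a scheduler and feed the output to a Tableau or Grafana data source.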
Posted 1 month ago
35.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About Us
One team. Global challenges. Infinite opportunities. At Viasat, we’re on a mission to deliver connections with the capacity to change the world. For more than 35 years, Viasat has helped shape how consumers, businesses, governments and militaries around the globe communicate. We’re looking for people who think big, act fearlessly, and create an inclusive environment that drives positive impact to join our team.
What You'll Do
Parse and manipulate raw data using tools including R, Python, and Tableau, with a strong preference for Python
Ingest, understand, and fully synthesize large amounts of data from multiple sources to build a full comprehension of the story
Analyze large data sets, find the truth in the data, and develop efficient processes for data analysis and simple, elegant visualization
Develop and automate daily, monthly, and quarterly reporting for multiple business areas within Viasat
Identify data gaps, research methods to fill them, and provide recommendations
Gather and analyze facts and devise solutions to administrative problems
Monitor big data with Business Intelligence tools, simulation, modeling, and statistics
Build intuitive and actionable dashboards and data visualizations that drive business decisions (Tableau/Power BI/Grafana)
What You'll Need
3-4 years of SQL experience
3-4 years of data analysis experience with an emphasis on reporting
3-4 years of Python experience in data cleansing, statistics, and data visualization packages (e.g., pandas, scikit-learn, matplotlib, seaborn, plotly)
6-8 years of dashboarding experience with Tableau, Power BI, Grafana, or equivalent data visualization tools
Excellent judgment, critical-thinking, and decision-making skills; can balance attention to detail with swift execution
Able to identify stakeholders, build relationships, and influence others to drive progress
Excellent analytical and problem-solving skills
Strong oral and written communication skills
Strong statistical background
What Will Help You On The Job
A strong preference for personal projects and work in Python
Data visualization experience
Data science experience
EEO Statement
Viasat is proud to be an equal opportunity employer, seeking to create a welcoming and diverse environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, ancestry, physical or mental disability, medical condition, marital status, genetics, age, or veteran status, or any other applicable legally protected status or characteristic. If you would like to request an accommodation on the basis of disability for completing this online application, please click here.
Posted 1 month ago
0 years
0 Lacs
Hyderābād
On-site
Job Description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Marketing Title.
In this role, you will:
Develop automation utilities and scripts using Python to streamline workflows and processes.
Perform data analysis to extract meaningful insights from structured and unstructured datasets.
Create data views and dashboards based on analysis results to support decision-making.
Design and implement visualizations using libraries like Matplotlib, Seaborn, or Plotly.
Collaborate with cross-functional teams to gather requirements and deliver tailored solutions.
Build web-based utilities using frameworks like Flask or Django.
Ensure code quality through unit testing, integration testing, and adherence to best practices.
Document technical designs, processes, and solutions for future reference.
Requirements
To be successful in this role, you should meet the following requirements:
Proficiency in Python programming, with experience in developing scalable utilities and automation scripts.
Strong knowledge of data analysis techniques and tools (e.g., Pandas, NumPy).
Experience with data visualization libraries (e.g., Matplotlib, Seaborn, Plotly).
Knowledge of REST APIs and integration with external systems.
Understanding of the software development lifecycle (SDLC) and Agile methodologies.
Strong verbal and written communication skills.
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working, and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
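The HSBC role above centers on Python automation utilities that turn raw data into "data views" for decision-making. As a minimal, stdlib-only sketch of that kind of utility (the log format, level names, and service names here are hypothetical, not anything from the listing), a script might parse semi-structured text with a regular expression and summarize it:

```python
import re
from collections import Counter

# Hypothetical log-line format, e.g. "ERROR billing: timeout on invoice".
LOG_LINE = re.compile(r"^(?P<level>INFO|WARN|ERROR)\s+(?P<service>\w+):")

def summarize_logs(lines):
    """Build a simple data view: count of ERROR lines per service."""
    errors = Counter()
    for line in lines:
        m = LOG_LINE.match(line)
        if m and m.group("level") == "ERROR":
            errors[m.group("service")] += 1
    return dict(errors)

sample = [
    "INFO auth: user login ok",
    "ERROR billing: timeout on invoice",
    "ERROR billing: retry failed",
    "WARN auth: slow response",
]
print(summarize_logs(sample))  # {'billing': 2}
```

In the role as described, a summary like this would typically be rendered with Matplotlib or Plotly, or exposed through a small Flask endpoint, rather than printed.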
Posted 1 month ago
3.0 years
5 - 7 Lacs
Gurgaon
Remote
Job Summary:
We are seeking a dynamic RPA & Data Automation Developer with 3+ years of hands-on experience in building automated workflows, data pipelines, and API-based integrations. The role demands strong analytical skills, advanced scripting capabilities in Python, experience with RPA tools such as Power Automate, and solid SQL knowledge for backend automation.
Key Responsibilities:
Design, develop, and maintain RPA solutions using Python, Selenium, and Power Automate.
Automate business processes using scripts and bots that interact with Excel, browsers, databases, and APIs.
Work extensively with Python libraries including Pandas, NumPy, Matplotlib, re (regex), smtplib, and FastAPI.
Create and consume RESTful APIs for data services and automation endpoints.
Perform complex data analysis and transformation using Pandas and SQL queries.
Write and maintain SQL components such as stored procedures, views, and functions, and perform schema design and query optimization.
Automate data flows across platforms including Excel, email, and databases using VBA macros and Power Automate flows.
Implement exception handling, logging, and monitoring mechanisms for all automation processes.
Collaborate with business teams to understand workflows and bottlenecks and to identify automation opportunities.
Required Technical Skills:
Python & Automation: Pandas, NumPy, Matplotlib; Selenium (browser automation); FastAPI (microservices/API creation); regular expressions (regex); smtplib (email automation); JSON/CSV/XML parsing
RPA Tools: Power Automate (desktop flows and cloud flows); VBA (Excel-based automation)
Database: MS SQL Server: T-SQL, stored procedures, functions, views; schema design and performance optimization
Preferred Qualifications:
Bachelor's degree in Computer Science, Engineering, or a related field
Experience working with multiple data sources and formats
Familiarity with ClickUp or other versioning and task-tracking tools
Exposure to cloud platforms (Azure, AWS) is a plus
What We Offer:
Exposure to advanced RPA & data engineering projects
A collaborative, tech-first environment
Career growth opportunities across automation and analytics domains
Competitive compensation
Job Types: Full-time, Permanent
Pay: ₹500,000.00 - ₹700,000.00 per year
Benefits: Flexible schedule, paid sick time, paid time off, Provident Fund, work from home
Schedule: Day shift
Work Location: In person
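The RPA posting above pairs Python scripting with SQL views and stored procedures for backend automation. As a small sketch of the encapsulation pattern involved, here is an example using Python's built-in sqlite3 module standing in for the MS SQL Server backend named in the posting; the table, view, and data are invented for illustration:

```python
import sqlite3

# In-memory database as a stand-in for the production SQL Server instance.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE invoices (id INTEGER, region TEXT, amount REAL)")
conn.executemany(
    "INSERT INTO invoices VALUES (?, ?, ?)",
    [(1, "North", 120.0), (2, "North", 80.0), (3, "South", 50.0)],
)

# A view plays the role a T-SQL view or stored procedure serves in the
# posting: callers query the aggregate without repeating its logic.
conn.execute(
    "CREATE VIEW region_totals AS "
    "SELECT region, SUM(amount) AS total FROM invoices GROUP BY region"
)
rows = conn.execute(
    "SELECT region, total FROM region_totals ORDER BY region"
).fetchall()
print(rows)  # [('North', 200.0), ('South', 50.0)]
```

In an actual RPA flow of the kind described, a bot or scheduled script would query such a view and push the result into Excel or an email via smtplib, rather than print it.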
Posted 1 month ago