
779 Matplotlib Jobs - Page 7

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Project Role: AI/ML Engineer
Project Role Description: Develops applications and systems that use AI tools and Cloud AI services, with a proper cloud or on-prem application pipeline of production-ready quality. Able to apply GenAI models as part of the solution; may also include deep learning, neural networks, chatbots, and image processing.
Must-have skills: Machine Learning
Good-to-have skills: NA
Minimum 3 years of experience required.
Educational Qualification: 15 years of full-time education

Summary: These roles overlap significantly with GenAI Engineer and Architect roles; the description may scale up or down based on expected seniority.

Roles & Responsibilities:
- Implement generative AI models and identify insights that can drive business decisions. Work closely with multi-functional teams to understand business problems, develop hypotheses, and test them with data, collaborating with cross-functional teams to define AI project requirements and objectives aligned with overall business goals.
- Conduct research to stay up to date with the latest advancements in generative AI, machine learning, and deep learning, and identify opportunities to integrate them into our products and services.
- Optimize existing generative AI models for improved performance, scalability, and efficiency.
- Ensure data quality and accuracy.
- Lead the design and development of prompt engineering strategies and techniques to optimize the performance and output of our GenAI models.
- Implement cutting-edge NLP techniques and prompt engineering methodologies to enhance the capabilities and efficiency of our GenAI models.
- Determine the most effective prompt generation processes and approaches to drive innovation, collaborating with AI researchers and developers.
- Develop clear and concise documentation (technical specifications, user guides, presentations) to communicate complex AI concepts to both technical and non-technical stakeholders.
- Contribute to the establishment of best practices and standards for generative AI development within the organization.

Other requirements:
- Experience working with cloud-based platforms (e.g., AWS, Azure, or related)
- Strong problem-solving and analytical skills
- Proficiency in handling various data formats and sources through omni-channel speech and voice applications, as part of conversational AI
- Prior statistical modelling experience
- Demonstrable experience with deep learning algorithms and neural networks

Professional & Technical Skills:
- Solid experience developing and implementing generative AI models, with a strong understanding of deep learning techniques such as GPT, VAEs, and GANs.
- Proficiency in Python and experience with machine learning libraries and frameworks such as TensorFlow, PyTorch, or Keras.
- Strong knowledge of data structures, algorithms, and software engineering principles.
- Familiarity with cloud-based platforms and services such as AWS, GCP, or Azure.
- Experience with natural language processing (NLP) techniques and tools such as spaCy, NLTK, or Hugging Face.
- Familiarity with data visualization tools and libraries such as Matplotlib, Seaborn, or Plotly.
- Knowledge of software development methodologies such as Agile or Scrum.
- Excellent problem-solving skills, with the ability to think critically and creatively to develop innovative AI solutions.

Additional Information:
- Degree in Computer Science, Artificial Intelligence, Machine Learning, or a related field required; a Ph.D. is highly desirable.
- Strong communication skills, with the ability to convey complex technical concepts to a diverse audience.
- A proactive mindset, with the ability to work independently and collaboratively in a fast-paced, dynamic environment.
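The prompt engineering strategies this role describes often amount to assembling structured prompts from labelled examples (few-shot prompting). A minimal sketch, with a hypothetical sentiment task and made-up examples standing in for any real product:

```python
# Few-shot prompt assembly sketch. The task, examples, and labels are
# illustrative placeholders, not from any specific system.

FEW_SHOT = [
    ("The shipment arrived two weeks late.", "negative"),
    ("Setup took five minutes and just worked.", "positive"),
]

def build_prompt(text: str) -> str:
    """Assemble a classification prompt from labelled examples."""
    lines = ["Classify the sentiment of each review as positive or negative.", ""]
    for review, label in FEW_SHOT:
        lines.append(f"Review: {review}")
        lines.append(f"Sentiment: {label}")
        lines.append("")  # blank line between examples
    lines.append(f"Review: {text}")
    lines.append("Sentiment:")  # the model completes this line
    return "\n".join(lines)

prompt = build_prompt("The battery died after one day.")
```

The resulting string would then be sent to whichever LLM API the project uses; the same pattern extends to chain-of-thought or retrieval-augmented prompts.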

Posted 6 days ago


0 years

0 Lacs

India

Remote


Step into the world of AI innovation with the Deccan AI Experts Community (by Soul AI), where you become a creator, not just a consumer. We are reaching out to the top 1% of Soul AI's Data Visualization Engineers like YOU for a unique opportunity to work with industry leaders.

What's in it for you?
- Pay above market standards
- A contract-based role with project timelines of 2 to 6 months, or freelancing
- Membership in an elite community of professionals who can solve complex AI challenges

Work location:
- Remote
- Onsite at a client location: US, UAE, UK, India, etc.
- Deccan AI's offices: Hyderabad or Bangalore

Responsibilities:
- Architect and implement enterprise-level BI solutions to support strategic decision-making, and enable data democratization through self-service analytics for non-technical users.
- Lead data governance and data quality initiatives to ensure consistency, and design data pipelines and automated reporting solutions using SQL and Python.
- Optimize big data queries and analytics workloads for cost efficiency, and implement real-time analytics dashboards and interactive reports.
- Mentor junior analysts and establish best practices for data visualization.

Required Skills:
- Advanced SQL, Python (Pandas, NumPy), and BI tools (Tableau, Power BI, Looker).
- Expertise in AWS (Athena, Redshift), GCP (BigQuery), or Snowflake.
- Experience with data governance, lineage tracking, and big data tools (Spark, Kafka).
- Exposure to machine learning and AI-powered analytics.

Nice to Have:
- Experience with graph analytics, geospatial data, and visualization libraries (D3.js, Plotly).
- Hands-on experience with BI automation and AI-driven analytics.

Who can be a part of the community? We are looking for top-tier Data Visualization Engineers with expertise in analyzing and visualizing complex datasets; proficiency in SQL, Tableau, Power BI, and Python (Pandas, NumPy, Matplotlib) is a plus. If you have experience in this field, this is your chance to collaborate with industry leaders.

What are the next steps?
1. Register on the Soul AI website.
2. Our team will review your profile.
3. Clear all the screening rounds: complete the assessments once you are shortlisted. As soon as you pass all screening rounds (assessments, interviews), you will be added to our Expert Community.
4. Profile matching: be patient while we align your skills and preferences with available projects.
5. Project allocation: you'll be deployed on your preferred project.

Skip the noise. Focus on opportunities built for you!
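The "data pipelines and automated reporting solutions using SQL and Python" this posting mentions often boil down to an aggregation step that feeds a dashboard. A toy sketch with pandas; the column names and figures are invented for illustration:

```python
import pandas as pd

# Toy automated-reporting step: aggregate raw revenue events into a
# per-region summary table that a BI dashboard could consume.
events = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "revenue": [120.0, 80.0, 50.0, 70.0, 30.0],
})

# Named aggregation gives the summary readable column names.
summary = (
    events.groupby("region")["revenue"]
    .agg(total="sum", average="mean")
    .reset_index()
)
```

In a real pipeline the `events` frame would come from a SQL query (e.g. via Athena or Snowflake) rather than an inline literal, and `summary` would be written to the BI tool's backing store.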

Posted 6 days ago


5.0 - 7.0 years

5 - 9 Lacs

Chennai

Work from Office


Python Software Development Sr. Specialist

In this role, you will be responsible for:
- Designing, implementing, and testing generative AI models using Python and frameworks such as Pandas, TensorFlow, PyTorch, and OpenAI.
- Researching and exploring new techniques and applications of generative AI, such as text, image, audio, and video synthesis, style transfer, data augmentation, and anomaly detection.
- Collaborating with other developers, researchers, and stakeholders to deliver high-quality and innovative solutions.
- Documenting and communicating the results and challenges of generative AI projects.

Required skills for this role include:
- 5 to 7 years' experience developing with Python frameworks for DL, ML, and web development (e.g., Flask).
- At least 2 years of experience developing generative AI models using Python and relevant frameworks.
- Strong knowledge of machine learning, deep learning, and generative AI concepts and algorithms.
- Proficiency in Python and common libraries such as NumPy, pandas, Matplotlib, and scikit-learn.
- Familiarity with version control, testing, debugging, and deployment tools.
- Excellent communication and problem-solving skills.
- Curiosity and eagerness to learn new technologies and domains.

Desired skills:
- Knowledge of Django and Web API.
- Solid exposure to MVC.

Preferences: Graduate degree in Computer Science with 4 years of Python-based development. GenAI framework professional certification.
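Among the applications this role lists, anomaly detection is easy to make concrete. A minimal z-score detector, assuming a simple threshold approach (the data and threshold here are illustrative, not from the posting):

```python
import numpy as np

def zscore_anomalies(values, threshold=2.0):
    """Return indices of points whose z-score magnitude exceeds threshold.

    A deliberately simple baseline: for small samples a large outlier
    inflates the standard deviation, so the threshold is kept low here.
    """
    x = np.asarray(values, dtype=float)
    z = (x - x.mean()) / x.std()
    return np.where(np.abs(z) > threshold)[0].tolist()

# Mostly steady readings with one spike at the end.
anomalies = zscore_anomalies([10, 11, 9, 10, 12, 10, 100], threshold=2.0)
```

Production anomaly detection would typically use robust statistics (median/MAD) or a model-based method, but the z-score baseline is the usual starting point.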

Posted 6 days ago


2.0 - 3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Description

We are seeking a highly motivated and enthusiastic Junior Data Scientist with 2-3 years of experience to join our data science team. This role offers an exciting opportunity to contribute to both traditional machine learning projects for our commercial IoT platform (EDGE Live) and cutting-edge generative AI initiatives.

Position: Data Scientist
Division & Department: Enabling Functions, Business Technology Group (BTG)
Reporting To: Customer & Commercial Experience Products Leader

Educational Qualifications: Bachelor's degree in Mechanical Engineering, Computer Science, Data Science, Mathematics, or a related field.

Experience:
- 2-3 years of hands-on experience with machine learning
- Exposure to generative AI concepts and techniques, such as Large Language Models (LLMs) and RAG architecture
- Experience in manufacturing and with an IoT platform is preferable

Roles and Responsibilities:

Machine Learning (ML):
- Assist in the development and implementation of machine learning models using frameworks such as TensorFlow, PyTorch, or scikit-learn.
- Help with Python development to integrate models with the overall application.
- Monitor and evaluate model performance using appropriate metrics and techniques.

Generative AI:
- Build GenAI-based tools for various business use cases by fine-tuning and adapting pre-trained generative models.
- Support exploration and experimentation with generative AI models.

Research & Learning:
- Stay up to date with the latest advancements and help with POCs.
- Proactively research and propose new techniques and tools to improve our data science capabilities.

Collaboration and Communication:
- Work closely with cross-functional teams, including product managers, engineers, and business stakeholders, to understand requirements and deliver impactful solutions.
- Communicate findings, model performance, and technical concepts to both technical and non-technical audiences.

Technical Competencies:
- Programming: proficiency in Python, with experience in libraries like NumPy, pandas, and Matplotlib for data manipulation and visualization.
- ML frameworks: experience with TensorFlow, PyTorch, or scikit-learn.
- Cloud & deployment: basic understanding of cloud platforms such as Databricks, Google Cloud Platform (GCP), or Microsoft Azure for model deployment.
- Data processing & evaluation: knowledge of data preprocessing, feature engineering, and evaluation metrics such as accuracy, F1-score, and RMSE.
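The evaluation metrics named above (accuracy, F1-score, RMSE) have simple closed forms. A hand-rolled sketch to make the definitions concrete; in practice these would come from a library such as scikit-learn:

```python
import math

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the true labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive class."""
    tp = sum(t == p == positive for t, p in zip(y_true, y_pred))
    fp = sum(p == positive and t != positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def rmse(y_true, y_pred):
    """Root mean squared error for regression predictions."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
```

Accuracy and F1 apply to the classification side of the role, RMSE to regression tasks such as sensor-value prediction on an IoT platform.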

Posted 6 days ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Praxair India Private Limited | Business Area: Digitalisation

Data Scientist for AI Products (Global)
Bangalore, Karnataka, India | Working Scheme: On-Site | Job Type: Regular / Permanent / Unlimited / FTE | Reference Code: req23348

It's about being what's next. What's in it for you?

A Data Scientist for AI Products (Global) will work in the Artificial Intelligence team, Linde's global corporate AI division, engaged with real business challenges and opportunities in multiple countries. The focus of this role is to support the AI team in extending existing and building new AI products for a vast number of use cases across Linde's business and value chain. You'll collaborate across different business and corporate functions in an international team of Project Managers, Data Scientists, and Data and Software Engineers within Linde's global AI team.

At Linde, the sky is not the limit. If you're looking to build a career where your work reaches beyond your job description and betters the people with whom you work, the communities we serve, and the world in which we all live, at Linde, your opportunities are limitless. Be Linde. Be Limitless.

Making an impact. What will you do?
- Work directly with a variety of data sources, types, and structures to derive actionable insights.
- Develop, customize, and manage AI software products based on machine and deep learning backends.
- Provide strong support for replicating existing products and pipelines to other systems and geographies.
- Support architectural design and the definition of data requirements for new developments.
- Interact with business functions to identify opportunities with potential business impact, and support the development and deployment of models into production.

Winning in your role. Do you have what it takes?
- A Bachelor's or Master's degree in Data Science, Computational Statistics/Mathematics, Computer Science, Operations Research, or a related field.
- A strong understanding of, and practical experience with, multivariate statistics, machine learning, and probability concepts.
- Experience articulating business questions and using quantitative techniques to arrive at a solution using available data.
- Hands-on experience with preprocessing, feature engineering, feature selection, and data cleansing on real-world datasets.
- Preferably, work experience in an engineering or technology role.
- A strong background in Python and in handling large datasets using SQL in a business environment (pandas, NumPy, Matplotlib, Seaborn, scikit-learn, Keras, TensorFlow, PyTorch, statsmodels, etc.).
- Sound knowledge of data architectures and concepts, and practical experience visualizing large datasets, e.g. with Tableau or Power BI.
- A results-driven mindset and excellent communication skills with high social competence, giving you the ability to take a project from idea to experimentation to prototype to implementation.
- Very good English language skills.
- As a plus: hands-on experience with DevOps and MS Azure; experience with Azure ML, Kedro, or Airflow; experience with MLflow or similar.

Why you will love working for us!

Linde is a leading global industrial gases and engineering company, operating in more than 100 countries worldwide. We live our mission of making our world more productive every day by providing high-quality solutions, technologies, and services which make our customers more successful and help to sustain and protect our planet.

On 1 April 2020, Linde India Limited and Praxair India Private Limited successfully formed a joint venture, LSAS Services Private Limited. This company provides Operations and Management (O&M) services to both existing organizations, which continue to operate separately. LSAS carries forward the commitment to sustainable development championed by both legacy organizations. It also continues the tradition of developing processes and technologies that have revolutionized the industrial gases industry, serving a variety of end markets including chemicals & refining, food & beverage, electronics, healthcare, manufacturing, and primary metals.

Whatever you seek to accomplish, and wherever you want those accomplishments to take you, a career at Linde provides limitless ways to achieve your potential while making a positive impact in the world. Be Linde. Be Limitless.

Have we inspired you? Let's talk about it! We look forward to receiving your complete application (motivation letter, CV, certificates) via our online job market. Any designations used apply to persons of all genders; the form of speech used here is for simplicity only.

All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, age, disability, protected veteran status, pregnancy, sexual orientation, gender identity or expression, or any other reason prohibited by applicable law. Praxair India Private Limited acts responsibly towards its shareholders, business partners, employees, society, and the environment in every one of its business areas, regions, and locations across the globe. The company is committed to technologies and products that unite the goals of customer value and sustainable development.

Posted 6 days ago


0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Zuddl is a modular platform for events and webinars that helps event marketers plan and execute events that drive growth. Event teams from global organizations like Microsoft, Google, ServiceNow, Zylo, Postman, TransPerfect, and the United Nations trust Zuddl. Our modular approach to event management lets B2B marketers and conference organizers decide which components they need to build the perfect event and scale their event program. Zuddl is an outcome-oriented platform with a focus on flexibility, and is more partner, less vendor.

FUNDING
Zuddl, part of the Y Combinator 2020 batch, has raised $13.35 million in Series A funding led by Alpha Wave Incubation and Qualcomm Ventures, with participation from our existing investors GrowX Ventures and Waveform Ventures.

What You'll Do:
- Prototype LLM-powered features using frameworks like LangChain and the OpenAI Agents SDK to power content automation and intelligent workflows.
- Build and optimize Retrieval-Augmented Generation (RAG) systems: document ingestion, chunking, embedding with vector DBs, and LLM integration.
- Work with vector databases to implement similarity search for use cases like intelligent Q&A, content recommendation, and context-aware responses.
- Experiment with prompt engineering and fine-tuning techniques.
- Deploy LLM-based microservices and agents using Docker, Kubernetes, and CI/CD best practices.
- Analyze model metrics, document findings, and suggest improvements based on quantitative evaluations.
- Collaborate across functions, including product, design, and engineering, to align AI features with business needs and enhance user impact.

Requirements:
- Strong Python programming skills.
- Hands-on with LLMs: experience building, fine-tuning, or applying large language models.
- Familiarity with agentic AI frameworks such as LangChain or the OpenAI Agents SDK (or any relevant tool).
- Understanding of RAG architectures and prior implementation in projects or prototypes.
- Experience with vector databases like FAISS, OpenSearch, etc.
- Portfolio of LLM-based projects, demonstrated via GitHub, notebooks, or other coding samples.

Good to Have:
- Capability to build full-stack web applications.
- Data analytics skills: data manipulation (Pandas/SQL), visualization (Matplotlib/Seaborn/Tableau), and statistical analysis.
- Experience with PostgreSQL, Metabase, or similar tools/databases.
- Strong ML fundamentals: regression, classification, clustering, deep learning techniques.
- Experience building recommender systems or hybrid ML solutions.
- Experience with deep learning frameworks: PyTorch, TensorFlow (or any relevant tool).
- Exposure to MLOps/DevOps tooling: Docker, Kubernetes, MLflow, Kubeflow (or any relevant tool).

Why You Want To Work Here:
- Opportunity to convert to a full-time role after the internship, based on performance and organizational requirements.
- A culture built on trust, transparency, and integrity.
- Ground-floor opportunity at a fast-growing Series A startup.
- Competitive stipend.
- Work on AI-first features in an event-tech startup with global customers.
- Thrive in a remote-first, empowering culture fueled by ownership and trust.
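The RAG pipeline described in this posting (document ingestion, chunking, embedding, similarity search) can be sketched end to end. In this toy version a bag-of-words count vector stands in for a real embedding model and a plain list stands in for a vector database; the FAQ text and query are invented:

```python
import math
from collections import Counter

def chunk(text: str, size: int = 8) -> list[str]:
    """Split text into fixed-size word chunks (real systems overlap chunks)."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def embed(text: str) -> Counter:
    """Stand-in 'embedding': a bag-of-words count vector."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, chunks: list[str], k: int = 1) -> list[str]:
    """Rank chunks by similarity to the query and return the top k."""
    q = embed(query)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

FAQ_TEXT = ("Refunds are processed within five business days after approval. "
            "Shipping is free for orders over fifty dollars in most regions.")
faq_chunks = chunk(FAQ_TEXT, size=10)
top = retrieve("how long do refunds take", faq_chunks, k=1)
```

In a production system, `embed` would call an embedding model, the chunks would live in a vector DB such as FAISS or OpenSearch, and the retrieved text would be inserted into the LLM prompt.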

Posted 6 days ago


0 years

0 Lacs

India

Remote


Job Title: Data Analyst Trainee
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization.

Key Responsibilities:
- Collect, clean, and analyze large datasets from various sources
- Perform exploratory data analysis (EDA) and generate actionable insights
- Build interactive dashboards and reports using Excel, Power BI, or Tableau
- Write and optimize SQL queries for data extraction and manipulation
- Collaborate with cross-functional teams to understand data needs
- Document analytical methodologies, insights, and recommendations

Qualifications:
- Bachelor's degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field
- Proficiency in Excel and SQL
- Working knowledge of Python (Pandas, NumPy, Matplotlib) or R
- Understanding of basic statistics and analytical methods
- Strong attention to detail and problem-solving ability
- Ability to work independently and communicate effectively in a remote setting

Preferred Skills (Nice to Have):
- Experience with BI tools like Power BI, Tableau, or Google Data Studio
- Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift)
- Knowledge of data storytelling and KPI measurement
- Previous academic or personal projects in analytics

What We Offer:
- Monthly stipend of ₹25,000
- Fully remote internship
- Mentorship from experienced data analysts and domain experts
- Hands-on experience with real business data and live projects
- Certificate of Completion
- Opportunity for a full-time role based on performance
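The "collect, clean, and analyze" responsibility above usually starts with deduplication, missing-value handling, and summary statistics. A minimal pandas sketch; the orders dataset is made up:

```python
import pandas as pd

# Raw data with a duplicate order and a missing amount.
raw = pd.DataFrame({
    "order_id": [1, 1, 2, 3, 4],
    "amount": [100.0, 100.0, None, 250.0, 50.0],
})

# Keep one row per order, then impute the missing amount with the median.
clean = raw.drop_duplicates(subset="order_id")
clean = clean.assign(amount=clean["amount"].fillna(clean["amount"].median()))

# A first EDA pass: summary statistics of the cleaned column.
stats = clean["amount"].describe()
```

From here, the same frame would feed SQL-backed reports or an Excel/Power BI dashboard.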

Posted 6 days ago


0 years

0 Lacs

India

Remote


Job Title: Data Analysis Intern
Location: Remote
Job Type: Internship (Full-Time)
Duration: 1–3 Months
Stipend: ₹25,000/month
Department: Data & Analytics

Job Summary: We are seeking a detail-oriented and analytical Data Analysis Intern to join our remote data team. This internship is ideal for individuals looking to apply their skills in statistics, data handling, and business intelligence to real-world problems. You will gain hands-on experience with data tools and contribute to meaningful data-driven decision-making.

Key Responsibilities:
- Collect, clean, and preprocess data from various sources
- Perform exploratory data analysis (EDA) and identify trends, patterns, and insights
- Create visualizations and dashboards to present findings using tools like Excel, Power BI, or Tableau
- Assist in building reports and communicating insights to different teams
- Document analytical processes and ensure data accuracy and consistency
- Collaborate with cross-functional teams to support ongoing data initiatives

Qualifications:
- Bachelor's degree (or final-year student) in Data Science, Statistics, Computer Science, Economics, or a related field
- Strong skills in Excel, SQL, and Python or R
- Understanding of basic statistical concepts and data analysis techniques
- Familiarity with data visualization tools such as Power BI, Tableau, or Matplotlib
- Good problem-solving skills and attention to detail
- Ability to work independently in a remote environment

Preferred Skills (Nice to Have):
- Experience working with large datasets or real-world business data
- Knowledge of A/B testing, correlation analysis, or regression techniques
- Exposure to data cleaning and automation tools
- Familiarity with Jupyter Notebooks, Google Sheets, or cloud data tools

What We Offer:
- Monthly stipend of ₹25,000
- 100% remote internship
- Exposure to real-world business and product data
- Mentorship from experienced data analysts and domain experts
- Certificate of Completion
- Opportunity for full-time placement based on performance

Posted 6 days ago


0 years

0 Lacs

Pune, Maharashtra, India

On-site


Dear Associates,

Greetings from Tata Consultancy Services! Thank you for expressing your interest in exploring a career possibility with the TCS family.

Hiring for: Python AI/ML, MLOps
Must have: Spark, Hadoop, PyTorch, TensorFlow, Matplotlib, Seaborn, Tableau, Power BI, scikit-learn, XGBoost, AWS, Azure, Databricks, PySpark, Python, SQL, Snowflake
Experience: 5+ years
Location: Mumbai / Pune

If interested, kindly fill in the details below and send your resume to nitu.sadhukhan@tcs.com. Note: only eligible candidates with relevant experience will be contacted further.

Name:
Contact No:
Email id:
Current Location:
Preferred Location:
Highest Qualification (part-time / correspondence is not eligible):
Year of Passing (Highest Qualification):
Total Experience:
Relevant Experience:
Current Organization:
Notice Period:
Current CTC:
Expected CTC:
PAN Number:
Gap in years, if any (Education / Career):
Updated CV attached (Yes / No)?
If attended any interview with TCS in the last 6 months:
Available for walk-in drive on 14th June, Pune:

Thanks & Regards,
Nitu Sadhukhan
Talent Acquisition Group, Tata Consultancy Services
Let's connect: linkedin.com/in/nitu-sadhukhan-16a580179
Nitu.sadhukhan@tcs.com

Posted 6 days ago


0 years

0 - 0 Lacs

Cochin

On-site

We are seeking a dynamic and experienced AI Trainer with expertise in Machine Learning, Deep Learning, and Generative AI, including LLMs (Large Language Models). The candidate will train students and professionals in real-world applications of AI/ML as well as the latest trends in GenAI, such as ChatGPT, LangChain, Hugging Face Transformers, prompt engineering, and RAG (Retrieval-Augmented Generation).

Key Responsibilities:
- Deliver hands-on training sessions in AI, ML, Deep Learning, and Generative AI.
- Teach the fundamentals and implementation of algorithms like regression, classification, clustering, decision trees, neural networks, CNNs, and RNNs.
- Train students in LLMs (e.g., OpenAI GPT, Meta LLaMA, Google Gemini) and prompt engineering techniques, covering:
  - LangChain
  - Hugging Face Transformers
  - LLM APIs (OpenAI, Cohere, Anthropic, Google Vertex AI)
  - Vector databases (FAISS, Pinecone, Weaviate)
  - RAG pipelines
- Design and evaluate practical labs and capstone projects (e.g., chatbot, image generator, smart assistants).
- Keep training materials updated with the latest industry developments and tools.
- Provide mentorship for student projects and support during hackathons or workshops.

Required Skills:
- AI/ML core: Python, NumPy, pandas, scikit-learn, Matplotlib, Jupyter
- Good knowledge of machine learning and deep learning algorithms
- Deep learning: TensorFlow / Keras / PyTorch; OpenCV (for computer vision); NLTK/spaCy (for NLP)
- Generative AI & LLMs: prompt engineering (zero-shot, few-shot, chain-of-thought); LangChain and LlamaIndex (RAG frameworks); Hugging Face Transformers; OpenAI API, Cohere, Anthropic, Google Gemini, etc.
- Vector DBs like FAISS, ChromaDB, Pinecone, Weaviate
- Streamlit, Gradio (for app prototyping)

Qualifications:
- B.E/B.Tech/M.Tech/M.Sc in AI, Data Science, Computer Science, or a related field
- Practical experience in AI/ML, LLM, or GenAI projects
- Previous experience as a developer, trainer, or corporate instructor is a plus

Salary / Remuneration: ₹30,000 – ₹75,000/month based on experience and engagement type
Job Type: Full-time
Pay: ₹30,000.00 – ₹75,000.00 per month
Schedule: Day shift

Application Questions:
- How many years of experience do you have?
- Can you commute to Kakkanad, Kochi?
- What is your expected salary?

Work Location: In person

Posted 6 days ago


3.0 years

0 Lacs

Mohali

On-site

Chicmic Studios
Job Role: Data Scientist
Experience Required: 3+ Years
Skills Required: Data Science, Python, Pandas, Matplotlib

Job Description: We are seeking a Data Scientist with strong expertise in data analysis, machine learning, and visualization. The ideal candidate should be proficient in Python, Pandas, and Matplotlib, with experience in building and optimizing data-driven models. Some experience with Natural Language Processing (NLP) and Named Entity Recognition (NER) models would be a plus.

Roles & Duties:
- Analyze and process large datasets using Python and Pandas.
- Develop and optimize machine learning models for predictive analytics.
- Create data visualizations using Matplotlib and Seaborn to support decision-making.
- Perform data cleaning, feature engineering, and statistical analysis.
- Work with structured and unstructured data to extract meaningful insights.
- Implement and fine-tune NER models for specific use cases (if required).
- Collaborate with cross-functional teams to drive data-driven solutions.

Required Skills & Qualifications:
- Strong proficiency in Python and data science libraries (Pandas, NumPy, Scikit-learn, etc.).
- Experience in data analysis, statistical modeling, and machine learning.
- Hands-on expertise in data visualization using Matplotlib and Seaborn.
- Understanding of SQL and database querying.
- Familiarity with NLP techniques and NER models is a plus.
- Strong problem-solving and analytical skills.

Contact: 9875952836
Office Address: F273, Phase 8B Industrial Area, Mohali, Punjab
Job Type: Full-time
Schedule: Day shift, Monday to Friday
Work Location: In person
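The "visualizations using Matplotlib ... to support decision-making" duty above can be as simple as a labelled bar chart saved for a report. A minimal sketch; the ticket categories and counts are invented:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs headless
import matplotlib.pyplot as plt

# Hypothetical support-ticket counts to chart.
categories = ["bug", "feature", "question"]
counts = [42, 17, 8]

fig, ax = plt.subplots(figsize=(4, 3))
bars = ax.bar(categories, counts)
ax.set_xlabel("Ticket type")
ax.set_ylabel("Count")
ax.set_title("Support tickets by type")
fig.tight_layout()
fig.savefig("ticket_counts.png")  # export for a report or dashboard
```

Seaborn builds on the same Figure/Axes objects, so the export and labelling pattern carries over directly.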

Posted 6 days ago


10.0 years

0 Lacs

Chennai

On-site

As a Principal AI Engineer, you will be part of a high-performing team working on exciting opportunities in AI within Ford Credit. We are looking for a highly skilled, hands-on AI engineer with a solid background in building end-to-end AI applications and a strong aptitude for learning and keeping up with the latest advances in AI: machine learning (supervised/unsupervised learning), neural networks (ANN, CNN, RNN, LSTM, decision trees, encoders, decoders), natural language processing, and generative AI (LLMs, LangChain, RAG, vector databases). You will lead technical discussions and act as a technical mentor for the team.

Professional Experience:
- 10+ years of strong working experience in AI.
- BE/MSc/MTech/ME/PhD (Computer Science, Maths, Statistics).
- A strong analytical mindset and comfort with data.
- Experience handling both relational and non-relational data.
- Hands-on experience with analytics methods (descriptive/predictive/prescriptive), statistical analysis, probability, and data visualization tools (Python: Matplotlib, Seaborn).
- A software engineering background with excellent data science working experience.

Technical Experience:
- Develop machine learning (supervised/unsupervised learning), neural networks (ANN, CNN, RNN, LSTM, decision trees, encoders, decoders), natural language processing, and generative AI (LLMs, LangChain, RAG, vector databases) solutions.
- Excellent communication and presentation skills; ability to manage stakeholders.
- Ability to collaborate with a cross-functional team of data engineers, solution architects, application engineers, and product teams across time zones to develop data and model pipelines.
- Ability to drive and mentor the team technically, leveraging cutting-edge AI and machine learning principles, and develop production-ready AI solutions.
- Mentor the team of data scientists and assume responsibility for the delivery of use cases.
- Ability to scope the problem statement, prepare data, train models, and make AI models production-ready.
- Work with business partners to understand the problem statement and translate it into an analytical problem.
- Ability to manipulate structured and unstructured data.
- Develop, test, and improve existing machine learning models.
- Analyse large and complex data sets to derive valuable insights.
- Research and implement best practices to enhance existing machine learning infrastructure.
- Develop prototypes for future exploration.
- Design and evaluate approaches for handling large volumes of real data streams.
- Ability to determine the appropriate analytical methods to use.
- Understanding of statistics and hypothesis testing.

Posted 6 days ago


8.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6 years of professional Python development experience.
Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to data science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
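The generator requirement in the listing above can be made concrete with a short, self-contained sketch (the sizes and counts are illustrative):

```python
import sys

def squares_list(n):
    """Materialises every value up front - memory grows linearly with n."""
    return [i * i for i in range(n)]

def squares_gen(n):
    """Yields one value at a time - the object size stays constant."""
    for i in range(n):
        yield i * i

big_list = squares_list(100_000)
lazy = squares_gen(100_000)

print(sys.getsizeof(big_list))  # hundreds of kilobytes
print(sys.getsizeof(lazy))      # a couple of hundred bytes, regardless of n
print(sum(lazy))                # the generator still supports full iteration
```

Because the generator produces values lazily, its memory footprint is independent of n, which is exactly why generators matter for large data-processing workloads.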

Posted 6 days ago


8.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6 years of professional Python development experience.
Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to data science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
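As a concrete illustration of filtering and transforming data with a list comprehension, the skill named in the listing above (the sensor readings below are made-up example data):

```python
# -999.0 marks a bad sensor value in this made-up dataset
readings = [21.5, -999.0, 22.1, 23.4, -999.0, 20.8]

# Explicit-loop version: drop bad values and convert Celsius -> Fahrenheit
cleaned_loop = []
for r in readings:
    if r != -999.0:
        cleaned_loop.append(round(r * 9 / 5 + 32, 1))

# Equivalent list comprehension: one expression, same result
cleaned = [round(r * 9 / 5 + 32, 1) for r in readings if r != -999.0]

assert cleaned == cleaned_loop
print(cleaned)  # [70.7, 71.8, 74.1, 69.4]
```

The comprehension keeps the filter and the transform in a single readable expression, and avoids the repeated method lookup of `append` inside the loop.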

Posted 6 days ago


0 years

0 Lacs

India

Remote


Job Title: Data Analyst Trainee Location: Remote Job Type: Internship (Full-Time) Duration: 1–3 Months Stipend: ₹25,000/month Department: Data & Analytics Job Summary: We are seeking a motivated and analytical Data Analyst Trainee to join our remote analytics team. This internship is perfect for individuals eager to apply their data skills in real-world projects, generate insights, and support business decision-making through analysis, reporting, and visualization. Key Responsibilities: Collect, clean, and analyze large datasets from various sources Perform exploratory data analysis (EDA) and generate actionable insights Build interactive dashboards and reports using Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and manipulation Collaborate with cross-functional teams to understand data needs Document analytical methodologies, insights, and recommendations Qualifications: Bachelor’s degree (or final-year student) in Data Science, Statistics, Computer Science, Mathematics, or a related field Proficiency in Excel and SQL Working knowledge of Python (Pandas, NumPy, Matplotlib) or R Understanding of basic statistics and analytical methods Strong attention to detail and problem-solving ability Ability to work independently and communicate effectively in a remote setting Preferred Skills (Nice to Have): Experience with BI tools like Power BI, Tableau, or Google Data Studio Familiarity with cloud data platforms (e.g., BigQuery, AWS Redshift) Knowledge of data storytelling and KPI measurement Previous academic or personal projects in analytics What We Offer: Monthly stipend of ₹25,000 Fully remote internship Mentorship from experienced data analysts and domain experts Hands-on experience with real business data and live projects Certificate of Completion Opportunity for a full-time role based on performance

Posted 6 days ago


8.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6 years of professional Python development experience.
Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to data science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
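The ETL requirement above can be sketched end to end with nothing beyond the standard library's sqlite3 module; the table names, columns and rows below are invented for the example:

```python
import sqlite3

# Extract: an in-memory source table (stands in for a real source database)
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE raw_orders (id INTEGER, amount TEXT, country TEXT)")
con.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [(1, "10.50", "in"), (2, "n/a", "in"), (3, "99.99", "us")],
)

# Transform: cast amounts, drop non-numeric rows, normalise country codes
rows = con.execute("SELECT id, amount, country FROM raw_orders").fetchall()
clean = [
    (oid, float(amount), country.upper())
    for oid, amount, country in rows
    if amount.replace(".", "", 1).isdigit()   # keep only numeric amounts
]

# Load: write the cleaned rows into a target table
con.execute("CREATE TABLE orders (id INTEGER, amount REAL, country TEXT)")
con.executemany("INSERT INTO orders VALUES (?, ?, ?)", clean)
con.commit()

print(con.execute("SELECT COUNT(*), SUM(amount) FROM orders").fetchone())
```

In production the same extract-transform-load shape would typically use Pandas or SQLAlchemy and a real database connection, but the three stages are the same.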

Posted 6 days ago


16.0 years

1 - 6 Lacs

Noida

On-site

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together. Primary Responsibilities: WHAT Business Knowledge: Capable of understanding the requirements for the entire project (not just their own features) Capable of working closely with PMG during the design phase to drill down into detailed nuances of the requirements Has the ability and confidence to question the motivation behind certain requirements and work with PMG to refine them. Design: Can design and implement machine learning models and algorithms Can articulate and evaluate pros/cons of different AI/ML approaches Can generate cost estimates for model training and deployment Coding/Testing: Builds and optimizes machine learning pipelines Knows & brings in external ML frameworks and libraries Consistently avoids common pitfalls in model development and deployment HOW Quality: Solves cross-functional problems using data-driven approaches Identifies impacts/side effects of models outside of immediate scope of work Identifies cross-module issues related to data integration and model performance Identifies problems predictively using data analysis Productivity: Capable of working on multiple AI/ML projects simultaneously and context switching between them Process: Enforces process standards for model development and deployment.
Independence: Acts independently to determine methods and procedures on new or special assignments Prioritizes large tasks and projects effectively Agility: Release Planning: Works with the PO to do high-level release commitment and estimation Works with PO on defining stories of appropriate size for model development Agile Maturity: Able to drive the team to achieve a high level of accomplishment on the committed stories for each iteration Shows Agile leadership qualities and leads by example WITH Team Work: Capable of working with development teams and identifying the right division of technical responsibility based on skill sets. Capable of working with external teams (e.g., Support, PO, etc.) that have significantly different technical skill sets and managing the discussions based on their needs Initiative: Capable of creating innovative AI/ML solutions that may include changes to requirements to create a better solution Capable of thinking outside-the-box to view the system as it should be rather than only how it is Proactively generates a continual stream of ideas and pushes to review and advance ideas if they make sense Takes initiative to learn how AI/ML technology is evolving outside the organization Takes initiative to learn how the system can be improved for the customers Should turn problems into openings for innovation Communication: Communicates complex AI/ML concepts internally with ease Accountability: Well versed in all areas of the AI/ML stack (data preprocessing, model training, evaluation, deployment, etc.)
and aware of all components in play Leadership: Disagree without being disagreeable Use conflict as a way to drill deeper and arrive at better decisions Frequent mentorship Builds ad-hoc cross-department teams for specific projects or problems Can achieve broad scope 'buy in' across project teams and across departments Takes calculated risks Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: B.E/B.Tech/MCA/MSc/MTech (Minimum 16 years of formal education, Correspondence courses are not relevant) 5+ years of experience working on multiple layers of technology Experience deploying and maintaining ML models in production Experience in Agile teams Experience with one or more data-oriented workflow orchestration frameworks (Airflow, Kubeflow, etc.) Working experience or good knowledge of cloud platforms (e.g., Azure, AWS, OCI) Ability to design, implement, and maintain CI/CD pipelines for MLOps and DevOps function Familiarity with traditional software monitoring, scaling, and quality management (QMS) Knowledge of model versioning and deployment using tools like MLflow, DVC, or similar platforms Familiarity with data versioning tools (Delta Lake, DVC, LakeFS, etc.)
Demonstrate hands-on knowledge of open-source adoption and use cases Good understanding of Data/Information security Proficient in Data Structures, ML Algorithms, and ML lifecycle Product/Project/Program Related Tech Stack: Machine Learning Frameworks: Scikit-learn, TensorFlow, PyTorch Programming Languages: Python, R, Java Data Processing: Pandas, NumPy, Spark Visualization: Matplotlib, Seaborn, Plotly Familiarity with model versioning tools (MLflow, etc.) Cloud Services: Azure ML, AWS SageMaker, Google Cloud AI GenAI: OpenAI, LangChain, RAG, etc. Demonstrates good knowledge of engineering practices Demonstrates excellent problem-solving skills Proven excellent verbal, written, and interpersonal communication skills At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.

Posted 6 days ago


0 years

0 Lacs

India

Remote


Position: Data Analyst Intern (Full-Time) Company: Lead India Location: Remote Stipend: ₹25,000/month Duration: 1–3 months (Full-Time Internship) About Lead India: Lead India is a forward-thinking technology company that helps businesses make smarter decisions through data. We provide meaningful internship opportunities for emerging professionals to gain real-world experience in data analysis, reporting, and decision-making. Role Overview: We are seeking a Data Analyst Intern to support our data and product teams in gathering, analyzing, and visualizing business data. This internship is ideal for individuals who enjoy working with numbers, identifying trends, and turning data into actionable insights. Key Responsibilities: Analyze large datasets to uncover patterns, trends, and insights Create dashboards and reports using tools like Excel, Power BI, or Tableau Write and optimize SQL queries for data extraction and analysis Assist in data cleaning, preprocessing, and validation Collaborate with cross-functional teams to support data-driven decisions Document findings and present insights to stakeholders Skills We're Looking For: Strong analytical and problem-solving skills Basic knowledge of SQL and data visualization tools (Power BI, Tableau, or Excel) Familiarity with Python for data analysis (pandas, matplotlib) is a plus Good communication and presentation skills Detail-oriented with a willingness to learn and grow What You’ll Gain: ₹25,000/month stipend Real-world experience in data analysis and reporting Mentorship from experienced analysts and developers Remote-first, collaborative work environment Potential for a Pre-Placement Offer (PPO) based on performance

Posted 6 days ago


2.0 years

0 Lacs

Surat, Gujarat, India

On-site


We’re hiring a Python Developer with a strong understanding of Artificial Intelligence and Machine Learning. You will be responsible for designing, developing, and deploying scalable AI/ML solutions and Python-based backend applications. Note: Only candidates based in Surat, Gujarat should apply for this job. Role Expectations: Develop and maintain robust Python code for backend and AI/ML applications. Design and implement machine learning models for prediction, classification, recommendation, etc. Work on data preprocessing, feature engineering, model training, evaluation, and optimization. Collaborate with the frontend team, data scientists, and DevOps engineers to deploy ML models to production. Integrate AI/ML models into web or mobile applications. Write clean, efficient, and well-documented code. Stay updated with the latest trends and advancements in Python and ML. Soft skills: Problem-Solving Analytical Thinking Collaboration & Teamwork Time Management Attention to Detail Required Skills: Proficiency in Python and Python-based libraries (NumPy, Pandas, Scikit-learn, etc.). Hands-on experience with AI/ML model development and deployment. Familiarity with TensorFlow, Keras, or PyTorch. Strong knowledge of data structures, algorithms, and object-oriented programming. Experience with REST APIs and Flask/Django frameworks. Basic knowledge of data visualization tools like Matplotlib or Seaborn. Understanding of version control tools (Git/GitHub). Good to Have: Experience with cloud platforms (AWS, GCP, or Azure). Knowledge of NLP, computer vision, or deep learning models. Experience working with large datasets and databases (SQL, MongoDB). Familiarity with containerization tools like Docker. Our Story: We’re chasing a world where tech doesn’t frustrate—it flows like a river carving its own path. Every line of code we hammer out is a brick in a future where tools don’t just function—they vanish into the background, so intuitive you barely notice them working their magic.
We craft software and apps that tackle real problems head-on, not just pile up shiny features for the sake of a spec sheet. It starts with listening—really listening—to the headaches, the what-ifs, and the crazy ambitions others might shrug off. Then we build smart: solutions that cut through the clutter with surgical precision, designed to fit like a glove and run like a rocket. Unlock The Advantage: 5-day work week 12 paid leaves + public holidays Training and Development: Certifications Employee engagement activities: awards, community gatherings Good infrastructure & onsite opportunity Flexible working culture Experience: 2-3 Years Job Type: Full Time (On-site)

Posted 6 days ago


0.0 - 1.0 years

0 Lacs

Kazhakoottam, Thiruvananthapuram, Kerala

On-site


Urgent Hiring: Offline Trainers (Full-Time) – Data Science Academy, Kerala Location: Thiruvananthapuram & Kochi, Kerala (Offline / In-Person Only) Job Type: Full-Time Join Date: Immediate About Us: Data Science Academy is Kerala’s first dedicated AI and Data Science training institute, committed to shaping the next generation of tech professionals. We are rapidly expanding and seeking passionate educators to join our mission. We’re Hiring Trainers With Expertise in ANY of the Following Areas: Microsoft Excel (Advanced), Databases, Python Programming, Power BI, Python Libraries (NumPy, Pandas, Matplotlib, etc.), Machine Learning, Deep Learning, Generative AI, Cloud Computing (AWS / Azure / GCP). Who Can Apply: Industry professionals with teaching flair, academic trainers with practical exposure, freelancers looking for consistent offline assignments, and freshers with strong domain knowledge and communication skills. Requirements: Strong subject expertise in at least one or more of the areas listed. Excellent communication and presentation skills. Must be willing to train students in an offline classroom setting in Kerala (primarily at our Thiruvananthapuram campus). Availability for immediate joining is preferred. What We Offer: Chance to be part of Kerala’s pioneering AI education brand. Opportunity to mentor future data scientists and AI engineers. Career growth in training, curriculum development, and industry exposure. Job Type: Full-time Education: Bachelor's (Preferred) Experience: total work: 1 year (Preferred) Python: 1 year (Preferred) Training & development: 1 year (Preferred) Language: English (Preferred) Work Location: In person

Posted 6 days ago


8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6 years of professional Python development experience.
Should have a strong understanding of computer science fundamentals (data structures, algorithms). Should have 6+ years of experience in Flask and RESTful API development. Should possess knowledge of container technologies (Docker, Kubernetes). Should possess experience implementing interfaces in Python. Should know how to use Python generators for efficient memory management. Should have a good understanding of the Pandas, NumPy and Matplotlib libraries for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in Python. Should understand the Global Interpreter Lock (GIL) in Python and its implications for multithreading and multiprocessing. Should have a good understanding of SQLAlchemy for interacting with databases. Should possess knowledge of implementing ETL transformations using Python libraries. Should know techniques for performing list comprehensions in Python. Collaborate with cross-functional teams to ensure successful implementation of solutions. Good to have: Exposure to data science libraries or data-centric development. Understanding of authentication and authorization (e.g. JWT, OAuth). Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure). EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
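The GIL point in the listing above deserves a concrete note: the GIL serialises Python bytecode execution, but a compound operation like `counter += 1` is still several bytecodes and therefore not atomic, so shared mutable state needs an explicit lock. A minimal, self-contained illustration:

```python
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        # The GIL does not make `counter += 1` atomic; lock explicitly.
        with lock:
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000 - deterministic because of the lock
```

For CPU-bound work, multiprocessing sidesteps the GIL entirely, whereas threads mainly help with I/O-bound workloads.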

Posted 6 days ago


8.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. RCE-Risk Data Engineer-Leads Job Description: Our Technology team builds innovative digital solutions rapidly and at scale to deliver the next generation of Financial and Non-Financial services across the globe. The position is a senior, technical, hands-on delivery role requiring knowledge of data engineering, cloud infrastructure and platform engineering, platform operations and production support using ground-breaking cloud and big data technologies. The ideal candidate, with 8-10 years of relevant experience, will possess strong technical skills, an eagerness to learn, a keen interest in the three key pillars our team supports (Financial Crime, Financial Risk and Compliance technology transformation), the ability to work collaboratively in a fast-paced environment, and an aptitude for picking up new tools and techniques on the job, building on existing skillsets as a foundation. In this role you will: Develop, maintain and optimize backend systems and RESTful APIs using Python and Flask. Apply concurrent processing strategies and performance optimization to complex architectures. Write clean, maintainable and well-documented code. Develop comprehensive test suites to ensure code quality and reliability. Work independently to deliver features and fix issues, with a few hours of overlap for real-time collaboration. Integrate backend services with databases and APIs. Collaborate asynchronously with cross-functional team members. Participate in occasional team meetings, code reviews and planning sessions. Core/Must-Have Skills: Should have a minimum of 6 years of professional Python development experience.
Should have Strong understanding of Computer science fundamentals (Data Structures, Algorithms). Should have 6+ years of experience in Flask and Restful API Development Should possess Knowledge on container technologies (Dockers, Kubernetes) Should possess experience on implementing interfaces in Python Should know how to use python generators for efficient memory management. Should have good understanding of Pandas, NumPy and Matplotlib library for data analytics and reporting. Should know how to implement multi-threading and enforce parallelism in python. Should know to various. Should know to how to use Global interpreter lock (GIL) in python and its implications on multithreading and multiprocessing. Should have a good understanding of SQL alchemy to interact with databases. Should posses’ knowledge on implementing ETL transformations using python libraries. Collaborate with cross-functional teams to ensure successful implementation techniques of performing list compressions in python of solutions. Good to have: Exposure to Data Science libraries or data-centric development Understanding of authentication and authorization (e.g. JWT, OAuth) Basic knowledge of frontend technologies (HTML/CSS/JavaScript) is a bonus but not required. Experience with cloud services (AWS, GCP or Azure) EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
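The generator skill the posting asks about can be illustrated with a minimal sketch (the function and record shape here are illustrative, not from the posting): a generator yields records lazily, so memory usage stays flat regardless of how many records the source contains.

```python
def read_large_dataset(num_records):
    """Yield records one at a time instead of building a full list.

    Because values are produced lazily, only one record is held in
    memory at any moment, no matter how large num_records is.
    """
    for i in range(num_records):
        yield {"id": i, "value": i * 2}

# Consume lazily with a generator expression; nothing is materialized
# as a list at any point in this pipeline.
total = sum(record["value"] for record in read_large_dataset(1_000_000))
print(total)
```

Replacing the generator with a list-building loop would produce the same total but hold a million dictionaries in memory at once.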
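On the GIL point above, a brief sketch of the practical implication (the task function is a made-up example): CPython's Global Interpreter Lock prevents threads from executing Python bytecode in parallel, so threads mainly help with I/O-bound work, while CPU-bound work needs processes to achieve real parallelism.

```python
from concurrent.futures import ThreadPoolExecutor

def cpu_task(n):
    """A CPU-bound task: sum of squares below n."""
    return sum(i * i for i in range(n))

# Under the GIL these threads take turns rather than running in
# parallel, so CPU-bound throughput does not improve; swapping in
# ProcessPoolExecutor sidesteps the GIL at the cost of pickling
# arguments and results between processes.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(cpu_task, [10, 100, 1000]))

print(results)
```

The results are identical either way; only the degree of true parallelism differs between the thread and process executors.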
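The list-comprehension requirement can be shown with a short, self-contained sketch (the sample rows are invented for illustration): a comprehension builds a filtered, transformed list in one pass, and the parenthesized variant is a lazy generator expression.

```python
# Sample data, illustrative only.
rows = [{"name": "a", "amount": 120},
        {"name": "b", "amount": 45},
        {"name": "c", "amount": 300}]

# List comprehension: equivalent to a for-loop with append, but more
# idiomatic and evaluated in a single expression.
large = [row["name"] for row in rows if row["amount"] > 100]

# Generator expression: same syntax with parentheses, consumed lazily
# by sum() without building an intermediate list.
total_large = sum(row["amount"] for row in rows if row["amount"] > 100)

print(large, total_large)
```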

Posted 6 days ago

Apply

8.0 years

0 Lacs

Kanayannur, Kerala, India

On-site



Posted 6 days ago

Apply

8.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site



Posted 6 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


What You'll Be Doing:

Dashboard Development:
- Design, develop, and maintain interactive and visually compelling dashboards using Power BI
- Implement DAX queries and data models to support business intelligence needs
- Optimize performance and usability of dashboards for various stakeholders

Python & Streamlit Applications:
- Build and deploy lightweight data applications using Streamlit for internal and external users
- Integrate Python libraries (e.g., Pandas, NumPy, Plotly, Matplotlib) for data processing and visualization

Data Integration & Retrieval:
- Connect to and retrieve data from RESTful APIs, cloud storage (e.g., Azure Data Lake, Cognite Data Fusion), and SQL/NoSQL databases
- Automate data ingestion pipelines and ensure data quality and consistency

Collaboration & Reporting:
- Work closely with business analysts, data engineers, and stakeholders to gather requirements and deliver insights
- Present findings and recommendations through reports, dashboards, and presentations

Requirements:
- Bachelor’s or master’s degree in computer science, data science, information systems, or a related field
- 3+ years of experience in data analytics or business intelligence roles
- Proficiency in Power BI, including DAX, Power Query, and data modeling
- Strong Python programming skills, especially with Streamlit, Pandas, and API integration
- Experience with REST APIs, JSON/XML parsing, and cloud data platforms (Azure, AWS, or GCP)
- Familiarity with version control systems like Git
- Excellent problem-solving, communication, and analytical skills

Preferred Qualifications:
- Experience with CI/CD pipelines for data applications
- Knowledge of DevOps practices and containerization (Docker)
- Exposure to machine learning or statistical modeling is a plus
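The JSON-parsing skill under Data Integration & Retrieval can be sketched with the standard library alone (the payload shape and field names below are invented, not from any API named in the posting); in practice a library such as requests would fetch the payload from a REST endpoint first.

```python
import json

# Sample payload shaped like a typical REST API response body;
# the "items", "sensor", and "reading" fields are illustrative.
payload = """
{
  "items": [
    {"sensor": "temp-01", "reading": 21.5},
    {"sensor": "temp-02", "reading": 19.8}
  ]
}
"""

# Parse the JSON text, then reshape it with a dict comprehension
# into a lookup keyed by sensor name.
data = json.loads(payload)
readings = {item["sensor"]: item["reading"] for item in data["items"]}
print(readings)
```

From here the dictionary could feed a Pandas DataFrame or a Streamlit widget directly.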

Posted 6 days ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies