
1551 Pandas Jobs - Page 12

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

3.0 - 7.0 years

0 Lacs

karnataka

On-site

NTT DATA is looking for a Senior Python Engineer to join the team in Bangalore, Karnataka (IN-KA), India. As a Senior Python Engineer, you will be part of the C3 Data Warehouse team, focusing on building the next-generation data platform that sources and stores data from various technology systems across the firm into a centralized platform. This platform powers reporting and analytics solutions for the Technology Risk functions within Morgan Stanley.

Your primary responsibilities will include contributing to the development of a unified data pipeline framework in Python, using technologies such as Airflow, DBT, Spark, and Snowflake, and integrating this framework with existing internal platforms for data quality, cataloging, discovery, incident logging, and metric generation. Close collaboration with data warehousing leads, data analysts, ETL developers, infrastructure engineers, and data analytics teams will be essential to implement the data platform and pipeline framework successfully.

Key duties include developing components in Python for the unified data pipeline framework, establishing best practices for optimal Snowflake usage, assisting with testing and deployment using standard frameworks and CI/CD tooling, monitoring query performance and data loads, and providing guidance during QA and UAT phases to identify issues and determine the best resolutions.

The ideal candidate has at least 5 years of experience in data development and solutions in complex data environments, with expertise in building data pipelines and warehousing solutions using Python and libraries such as Pandas, NumPy, and PySpark. Experience in hybrid data environments (on-premises and cloud) and exposure to Power BI/Snowflake are also required.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. Committed to helping clients innovate, optimize, and transform for long-term success, NTT DATA has diverse experts in over 50 countries and a robust partner ecosystem. Services include business and technology consulting, data and artificial intelligence, industry solutions, and the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is a leading provider of digital and AI infrastructure globally and is part of the NTT Group, which invests significantly in R&D to support organizations and society in moving confidently and sustainably into the digital future. Visit us at us.nttdata.com.

Posted 1 week ago

3.0 - 7.0 years

0 Lacs

delhi

On-site

Are you a skilled professional with experience in SQL, Python (Pandas & SQLAlchemy), and data engineering? We have an exciting opportunity for an ETL Developer to join our team! As an ETL Developer, you will work with MS SQL, Python, and various databases to extract, transform, and load data in support of insights and business goals.

You should have a Bachelor's degree in Computer Science or a related field, or equivalent work experience. You should also have at least 5 years of experience working with MS SQL, 3 years of experience with Python (Pandas, SQLAlchemy), and 3 years of experience supporting on-call challenges.

Key responsibilities include running SQL queries on multiple disparate databases, working with large datasets using Python and Pandas, tuning MS SQL queries, debugging data using Python and SQLAlchemy, collaborating in an agile environment, managing source control with GitLab and GitHub, creating and maintaining databases, and interpreting complex data for insights. Familiarity with Azure, ADF, Spark, and Scala concepts is also expected.

If you're passionate about data, possess a strong problem-solving mindset, and thrive in a collaborative environment, we encourage you to apply for this position. For more information or to apply, please send your resume to samdarshi.singh@mwidm.com or contact us at +91 62392 61536. Join us in this exciting opportunity to contribute to our data engineering team!

#ETLDeveloper #DataEngineer #Python #SQL #Pandas #SQLAlchemy #Spark #Azure #Git #TechCareers #JobOpportunity #Agile #DataAnalysis #SQLTuning #OnCallSupport
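As a loose illustration of the Pandas-plus-SQLAlchemy work this listing describes, the sketch below runs a tiny extract-transform-load pass against an in-memory SQLite database; the table names, columns, and tax factor are invented for the example, not taken from the posting:

```python
import pandas as pd
from sqlalchemy import create_engine

# In-memory SQLite stands in for the MS SQL server mentioned in the ad.
engine = create_engine("sqlite:///:memory:")

# Extract: seed a small hypothetical source table.
pd.DataFrame({"id": [1, 2, 3], "amount": [100.0, 250.0, 75.0]}).to_sql(
    "orders", engine, index=False
)

# Transform: filter in SQL, then derive a column in Pandas.
df = pd.read_sql("SELECT id, amount FROM orders WHERE amount > 80", engine)
df["amount_with_tax"] = df["amount"] * 1.18

# Load: write the result to a hypothetical target table.
df.to_sql("orders_clean", engine, index=False)
print(len(df))  # 2
```

In a real pipeline the engine URL would point at the production database, and the transformation would come from business rules rather than a hard-coded factor.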

Posted 1 week ago

0.0 - 4.0 years

0 Lacs

punjab

On-site

We are seeking a passionate Data Science fresher who has completed at least 6 months of practical training, internship, or project experience in the data science field. In this role, you will have the exciting opportunity to apply your analytical and problem-solving skills to real-world datasets while collaborating closely with experienced data scientists and engineers.

Your responsibilities will include assisting in data collection, cleaning, and preprocessing from various sources; supporting the team in building, evaluating, and optimizing machine learning models; performing exploratory data analysis (EDA) to extract insights and patterns; working on data visualization dashboards and reports using tools like Power BI, Tableau, or Matplotlib/Seaborn; collaborating with senior data scientists and domain experts on ongoing projects; documenting findings, code, and models in a structured manner; and continuously learning and adopting new techniques, tools, and frameworks.

To be successful in this role, you should hold a Bachelor's degree in Computer Science, Statistics, Mathematics, Engineering, or a related field, along with a minimum of 6 months of internship/training experience in data science, analytics, or machine learning. Required technical skills include proficiency in Python (Pandas, NumPy, Scikit-learn, etc.), an understanding of machine learning algorithms (supervised/unsupervised), knowledge of SQL and database concepts, familiarity with data visualization tools/libraries, and a basic understanding of statistics and probability. You should also possess strong analytical thinking and problem-solving abilities, good communication and teamwork skills, and an eagerness to learn and grow in a dynamic environment. Exposure to cloud platforms (AWS, GCP, Azure), experience with big data tools (Spark, Hadoop), and knowledge of deep learning frameworks (TensorFlow, PyTorch) are advantageous but not mandatory.

In return, we offer the opportunity to work on real-world data science projects, mentorship from experienced professionals in the field, a collaborative, innovative, and supportive work environment, and a growth path into a full-time Data Scientist role with us. This is a full-time, permanent position suitable for fresher candidates. Benefits include health insurance, and the work schedule consists of day shifts from Monday to Friday. Fluency in English is preferred, and the work location is in person.
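For context, the cleaning and exploratory-analysis duties listed above boil down to operations like the following sketch; the city names and sales figures are fabricated purely for illustration:

```python
import pandas as pd

# Hypothetical raw data with one missing value.
df = pd.DataFrame({
    "city": ["Pune", "Delhi", "Pune", "Delhi", "Pune"],
    "sales": [120.0, 90.0, 150.0, None, 130.0],
})

# Cleaning: impute the missing entry with the column median.
df["sales"] = df["sales"].fillna(df["sales"].median())

# EDA: overall summary plus a group-wise aggregate.
summary = df["sales"].describe()
by_city = df.groupby("city")["sales"].mean()
print(round(by_city["Pune"], 2))  # 133.33
```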

Posted 1 week ago

3.0 - 7.0 years

0 Lacs

maharashtra

On-site

As a Python Developer, you will be responsible for building supervised (GLM ensemble techniques) and unsupervised (clustering) models using standard industry libraries such as pandas, scikit-learn, and Keras. Your expertise in big data technologies like Spark and Dask, as well as SQL and NoSQL databases, will be essential in this role.

You should have significant experience in Python, with a focus on writing unit tests, creating packages, and developing reusable and maintainable code. The ability to comprehend and articulate modeling techniques, along with visualizing analytical results using tools like matplotlib, seaborn, plotly, D3, and Tableau, will be crucial. Experience with continuous integration and deployment tools like Jenkins, as well as Spark ML pipelines, will be advantageous. We are looking for a self-motivated individual who can collaborate effectively with colleagues and contribute innovative ideas to enhance our projects.

Preferred qualifications include an advanced degree with a strong foundation in the mathematics behind machine learning, including linear algebra and multivariate calculus. Experience in specialist areas such as reinforcement learning, NLP, Bayesian techniques, or generative models is a plus. You should excel at presenting ideas and analytical findings in a compelling manner that influences decision-making. Demonstrated experience implementing analytical solutions in an industry context and a genuine enthusiasm for ethically leveraging data science to enhance customer-centricity in financial services are highly desirable.
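The unsupervised (clustering) side of the role can be pictured with a minimal scikit-learn sketch; the two synthetic blobs below are stand-ins for real features, not data from the employer:

```python
import numpy as np
from sklearn.cluster import KMeans

# Two well-separated synthetic blobs of 20 points each.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(5, 0.3, (20, 2))])

# Fit k-means with two clusters and read back the assignments.
model = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
labels = model.labels_
print(len(set(labels)))  # 2
```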

Posted 1 week ago

4.0 - 8.0 years

0 Lacs

pune, maharashtra

On-site

As part of ACL Digital, an ALTEN Group company, you will be a key contributor to digital product innovation and engineering. Our focus lies in helping clients design and develop cutting-edge products that are AI, Cloud, and Mobile ready, along with creating content- and commerce-driven platforms. Through a design-led Digital Transformation framework, we facilitate the creation of connected, converged digital experiences tailored for the modern world. By leveraging our expertise in strategic design, engineering, and industry knowledge, you will play a crucial role in helping our clients navigate the digital landscape and accelerate their growth.

Headquartered in Silicon Valley, ACL Digital is a frontrunner in design-led digital experiences, innovation, enterprise modernization, and product engineering services, particularly within the Technology, Media & Telecom sectors. We are proud of our diverse and skilled workforce, part of the larger ALTEN Group of over 50,000 employees across 30+ countries, fostering a multicultural workplace and a collaborative knowledge-sharing environment. In India, our operations span Bangalore, Chennai, Pune, Panjim, Hyderabad, Noida, and Ahmedabad, while in the USA we have offices in California, Atlanta, Philadelphia, and Washington.
As a suitable candidate for this role, you are expected to possess the following technical skills and competencies:

- A minimum of 4-5 years of relevant experience in the field
- Preferably trained or certified in Data Science/Machine Learning
- Capable of collaborating effectively with technical leads
- Strong communication skills coupled with the ability to derive meaningful conclusions
- Proficiency in Data Science concepts and machine learning algorithms and libraries such as Scikit-learn, NumPy, Pandas, statsmodels, TensorFlow, PyTorch, and XGBoost
- Experience with machine learning training and deployment pipelines
- Familiarity with the FastAPI/Flask frameworks
- Proficiency with Docker and virtual environments
- Proficiency in database operations
- Strong analytical and problem-solving skills
- Ability to excel in a dynamic environment with varying degrees of ambiguity

Your role will involve the following responsibilities and competencies:

- Applying data mining, quantitative analysis, and statistical techniques, and conducting experiments to derive reliable insights from data
- Understanding business use-cases and using various sources to collect and annotate datasets for business problems
- A strong academic background with excellent analytical skills and exposure to machine learning and information retrieval domains and technologies
- Strong programming skills in languages such as Python and C/C++
- Acquiring data from primary or secondary sources and performing data tagging
- Filtering and cleaning data based on business requirements and maintaining a well-defined, structured, and clean database
- Working with data labeling tools and annotating data for machine learning models
- Interpreting data, analyzing results using statistical techniques and models, and conducting exploratory analysis

If you thrive in a challenging and dynamic environment and possess the required technical skills and competencies, we look forward to having you join our team at ACL Digital.

Posted 1 week ago

3.0 - 8.0 years

11 - 16 Lacs

Hyderabad

Work from Office

About ValGenesis

ValGenesis is a leading digital validation platform provider for life sciences companies. The ValGenesis suite of products is used by 30 of the top 50 global pharmaceutical and biotech companies to achieve digital transformation, total compliance, and manufacturing excellence/intelligence across their product lifecycle. Learn more about working for ValGenesis, the de facto standard for paperless validation in Life Sciences: https://www.youtube.com/watch?v=tASq7Ld0JsQ

About the Role

We are seeking a highly skilled Senior AI/ML Engineer to join our dynamic team building next-generation applications for our global customers. If you are a technology enthusiast and highly passionate, we are eager to discuss the potential role with you.

Responsibilities

- Implement and deploy machine learning solutions that solve complex problems and deliver real business value, i.e. revenue, engagement, and customer satisfaction.
- Collaborate with data product managers, software engineers, and SMEs to identify AI/ML opportunities for improving process efficiency.
- Develop production-grade ML models to enhance customer experience, content recommendation, content generation, and predictive analysis.
- Monitor and improve model performance via data enhancement, feature engineering, experimentation, and online/offline evaluation.
- Stay up to date with the latest in machine learning and artificial intelligence, and influence AI/ML for the life science industry.

Requirements

- 4-8 years of experience in AI/ML engineering, with a track record of handling increasingly complex projects.
- Strong programming skills in Python and Rust.
- Experience with Pandas, NumPy, SciPy, and OpenCV (for image processing).
- Experience with ML frameworks such as scikit-learn, TensorFlow, and PyTorch.
- Experience with GenAI tools such as LangChain, LlamaIndex, and open-source vector DBs.
- Experience with one or more graph DBs, such as Neo4j or ArangoDB.
- Experience with MLOps platforms, such as Kubeflow or MLflow.
- Expertise in one or more of the following AI/ML domains: Causal AI, Reinforcement Learning, Generative AI, NLP, Dimension Reduction, Computer Vision, Sequential Models.
- Expertise in building, deploying, measuring, and maintaining machine learning models to address real-world problems.
- Thorough understanding of the software product development lifecycle, DevOps (build, continuous integration, deployment tools), and best practices.
- Excellent written, verbal, and interpersonal communication skills.
- Advanced degree in Computer Science, Machine Learning, or a related field.

We're on a Mission

In 2005, we disrupted the life sciences industry by introducing the world's first digital validation lifecycle management system. ValGenesis VLMS® revolutionized compliance-based corporate validation activities and has remained the industry standard. Today, we continue to push the boundaries of innovation, enhancing and expanding our portfolio beyond validation with an end-to-end digital transformation platform. We combine our purpose-built systems with world-class consulting services to help every facet of GxP meet evolving regulations and quality expectations.

The Team You'll Join

- Our customers' success is our success. We keep the customer experience centered in our decisions, from product to marketing to sales to services to support. Life sciences companies exist to improve humanity's quality of life, and we honor that mission.
- We work together. We communicate openly, support each other without reservation, and never hesitate to wear multiple hats to get the job done.
- We think big. Innovation is the heart of ValGenesis. That spirit drives product development as well as personal growth. We never stop aiming upward.
- We're in it to win it. We're on a path to becoming the number-one intelligent validation platform in the market, and we won't settle for anything less than being a market leader.

How We Work

Our Chennai, Hyderabad, and Bangalore offices are onsite, 5 days per week. We believe that in-person interaction and collaboration fosters creativity and a sense of community, and is critical to our future success as a company.

ValGenesis is an equal-opportunity employer that makes employment decisions on the basis of merit. Our goal is to have the best-qualified people in every job. All qualified applicants will receive consideration for employment without regard to race, religion, sex, sexual orientation, gender identity, national origin, disability, or any other characteristic protected by local law.

Posted 1 week ago

4.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Python Backend Developer at Coforge, you should have 4-6 years of experience working with Python as a backend technology, particularly in stack development. You must also possess expertise in microservices and APIs. Proficiency in core Python fundamentals, Pandas, and NumPy is essential for this role. Additionally, you will have the opportunity to be trained in React JS. At Coforge, we are currently looking to hire both full-time employees and freelancers.

Posted 1 week ago

6.0 - 12.0 years

0 Lacs

pune, maharashtra

On-site

About Calfus: Calfus is a Silicon Valley-headquartered software engineering and platforms company with a vision deeply rooted in the Olympic motto "Citius, Altius, Fortius - Communiter". At Calfus, we aim to inspire our team to rise faster, higher, and stronger while fostering a collaborative environment to build software at speed and scale. Our primary focus is on creating engineered digital solutions that drive a positive impact on business outcomes. Upholding principles of #Equity and #Diversity, we strive to create a diverse ecosystem that extends to the broader society. Join us at #Calfus and embark on an extraordinary journey with us!

Position Overview: As a Data Engineer specializing in BI Analytics & DWH, you will be instrumental in crafting and implementing robust business intelligence solutions that empower our organization to make informed, data-driven decisions. Leveraging your expertise in Power BI, Tableau, and ETL processes, you will be responsible for developing scalable architectures and interactive visualizations. This role requires a strategic mindset, strong technical acumen, and effective collaboration with stakeholders at all levels.

Key Responsibilities:

- BI Architecture & DWH Solution Design: Develop and design scalable BI analytical and DWH solutions aligned with business requirements, using tools like Power BI and Tableau.
- Data Integration: Supervise ETL processes through SSIS to ensure efficient data extraction, transformation, and loading into data warehouses.
- Data Modelling: Establish and maintain data models that support analytical reporting and data visualization initiatives.
- Database Management: Employ SQL for crafting intricate queries, stored procedures, and managing data transformations via joins and cursors.
- Visualization Development: Spearhead the design of interactive dashboards and reports in Power BI and Tableau while adhering to best practices in data visualization.
- Collaboration: Engage closely with stakeholders to gather requirements and translate them into technical specifications and architecture designs.
- Performance Optimization: Analyze and optimize BI solutions for enhanced performance, scalability, and reliability.
- Data Governance: Implement data quality and governance best practices to ensure accurate reporting and compliance.
- Team Leadership: Mentor and guide junior BI developers and analysts to cultivate a culture of continuous learning and improvement.
- Azure Databricks: Utilize Azure Databricks for data processing and analytics, integrating seamlessly with existing BI solutions.

Qualifications:

- Bachelor's degree in Computer Science, Information Systems, Data Science, or a related field.
- 6-12 years of experience in BI architecture and development, with a strong emphasis on Power BI and Tableau.
- Proficiency in ETL processes and tools, particularly SSIS, and a strong command of SQL Server, encompassing advanced query writing and database management.
- Proficiency in exploratory data analysis using Python.
- Familiarity with the CRISP-DM model.
- Ability to work with various data models and databases such as Snowflake, Postgres, Redshift, and MongoDB.
- Experience with visualization tools such as Power BI, QuickSight, Plotly, and Dash.
- A strong programming foundation in Python for data manipulation, analysis, serialization, database interaction, data pipelines and ETL tools, cloud services, and more.
- Familiarity with the Azure SDK is a plus.
- Experience with code quality management, version control, collaboration in data engineering projects, and interaction with REST APIs and web scraping tasks is advantageous.

Calfus Inc. is an Equal Opportunity Employer.

Posted 1 week ago

2.0 - 6.0 years

0 Lacs

maharashtra

On-site

Are you seeking an exciting opportunity to become part of a dynamic and expanding team in a fast-paced and challenging environment? This unique position offers you the chance to collaborate with the business team to deliver a comprehensive perspective.

As a Model Risk Program Analyst within the Model Risk Governance and Review Group (MRGR), your responsibilities include developing model risk policy and control procedures, conducting model validation activities, offering guidance on appropriate model usage in the business context, evaluating ongoing model performance testing, and ensuring that model users understand the strengths and limitations of the models. The role also presents attractive career paths for individuals involved in model development and validation, working closely with model developers, model users, and Risk and Finance professionals.

Your key responsibilities will involve engaging in new model validation activities for all data science models in the coverage area. This includes evaluating the model's conceptual soundness, assumptions, reliability of inputs, completeness of testing, numerical robustness, and performance metrics. You will also conduct independent testing and additional model review activities, liaising with various stakeholders to provide oversight and guidance on model usage, controls, and performance assessment.

To excel in this role, you should possess strong quantitative and analytical skills, preferably with a degree in a quantitative discipline such as Computer Science, Statistics, Data Science, Math, Economics, or Math Finance; a Master's or PhD degree is desirable. You should also have a solid understanding of machine learning and data science theory, techniques, and tools, including proficiency in Python programming and experience with machine learning libraries such as NumPy, SciPy, scikit-learn, TensorFlow, and PyTorch.

Prior experience in data science, quantitative model development, model validation, or technology focused on data science, along with excellent writing and communication skills, will be advantageous. A risk-and-control mindset, with the ability to ask incisive questions, assess materiality, and escalate issues, is also essential for this role. By staying updated on the latest developments in your coverage area, you will help maintain the bank's model risk control apparatus and serve as a key point of contact within the organization. Join our team and be a part of shaping the future of model-related risk management decisions.

Posted 1 week ago

0.0 - 3.0 years

0 Lacs

vapi, gujarat

On-site

You are invited to join our team as an Applied AI Engineer (Fresher); we are specifically looking for graduates from IITs or NITs. This is a full-time position based in Vapi, Gujarat, offering an exciting opportunity to work within our dynamic AI Engineering team. The ideal candidate will possess a strong foundation in data structures and algorithms, exceptional problem-solving skills, and a genuine enthusiasm for AI, machine learning, and deep learning technologies.

As an Applied AI Engineer, you will collaborate closely with senior engineers and data scientists to design and implement AI-driven solutions. Your responsibilities will include developing, optimizing, and deploying machine learning and deep learning models, as well as translating abstract AI challenges into efficient, scalable, and production-ready code. You will also contribute to data preprocessing, feature engineering, and model evaluation tasks, participate in technical discussions and code reviews, and explore cutting-edge AI frameworks and research trends.

Key skills required for this role include exceptional problem-solving abilities; proficiency in Python and core libraries such as NumPy, Pandas, and scikit-learn; a fundamental understanding of machine learning concepts; exposure to deep learning frameworks like TensorFlow or PyTorch; and a strong grasp of object-oriented programming and software engineering principles. A passion for AI, along with analytical, logical thinking, and mathematical skills, will be essential for success in this position.

Candidates with hands-on experience in AI/ML projects, familiarity with reinforcement learning, NLP, or computer vision, and knowledge of tools like Git, Docker, and cloud platforms (AWS, GCP, Azure) are highly preferred. Educational qualifications include a degree in Computer Science, Artificial Intelligence, Machine Learning, or Data Science from an IIT or NIT, with a strong academic record and a demonstrated interest in AI/ML concepts.

If you meet these requirements and are excited about contributing to the future of intelligent systems, we encourage you to apply by sharing your CV with us at jignesh.pandoriya@merillife.com. Join our team and be part of shaping innovative solutions that directly impact lives.

Posted 1 week ago

6.0 - 10.0 years

0 Lacs

kolkata, west bengal

On-site

You must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL; working knowledge of Azure DevOps and Git flow would be an added advantage. Alternatively, you should have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift. Demonstrable expertise in working with time-series data is essential. Experience in delivering data engineering/data science projects in Industry 4.0 is an added advantage, and knowledge of Palantir is required.

You must possess strong problem-solving skills with a focus on sustainable and reusable development. Proficiency in statistical computing languages and libraries such as Python/PySpark, Pandas, NumPy, and seaborn/matplotlib is necessary; knowledge of Streamlit is a plus. Familiarity with Scala, Go, Java, and big data tools such as Hadoop, Spark, and Kafka is beneficial. Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and with NoSQL databases including Hadoop, Cassandra, and MongoDB, is expected. Proficiency in data pipeline and workflow management tools like Azkaban, Luigi, and Airflow is required, as is experience in building and optimizing big data pipelines, architectures, and data sets.

You should possess strong analytical skills for working with unstructured datasets, provide innovative solutions to data engineering problems, document technology choices and integration patterns, and apply best practices for project delivery with clean code. Demonstrate innovation and proactiveness in meeting project requirements.

Reporting to: Director - Intelligent Insights and Data Strategy

Travel: Must be willing to be deployed at client locations worldwide for long and short terms, and flexible for shorter durations within India and abroad.
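Much of the time-series expertise asked for above reduces to resampling operations of this kind in Pandas; the timestamps and readings below are invented for the sketch:

```python
import pandas as pd

# Hypothetical sensor readings at irregular timestamps.
ts = pd.Series(
    [10.0, 12.0, 11.0, 15.0],
    index=pd.to_datetime([
        "2024-01-01 00:05", "2024-01-01 00:40",
        "2024-01-01 01:10", "2024-01-01 01:50",
    ]),
)

# Downsample to hourly buckets, averaging the readings in each bucket.
hourly = ts.resample("60min").mean()
print(hourly.iloc[0])  # 11.0
```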

Posted 1 week ago

7.0 - 11.0 years

0 Lacs

karnataka

On-site

You are a highly skilled, detail-oriented, and motivated Python DQ Automation Developer who will be responsible for designing, developing, and maintaining data quality automation solutions using Python. With a deep understanding of data quality principles, proficiency in Python, and experience in data processing and analysis, you will play a crucial role in ensuring accurate and timely data integration and transformation.

Your key responsibilities will include designing, developing, and implementing data quality automation processes and solutions to identify, measure, and improve data quality. You will write and optimize Python scripts using libraries such as Pandas, NumPy, and PySpark for data manipulation and processing. Additionally, you will develop and enhance ETL processes, analyze data sets to identify data quality issues, and develop and execute test plans to validate the effectiveness of data quality solutions. As part of the team, you will maintain comprehensive documentation of data quality processes, procedures, and standards, and collaborate closely with data analysts, data engineers, DQ testers, and other stakeholders to understand data requirements and deliver high-quality data solutions.

Required Skills:

- Proficiency in Python and related libraries (Pandas, NumPy, PySpark, pytest).
- Experience with data quality tools and frameworks.
- Strong understanding of ETL processes and data integration.
- Familiarity with data governance and data management principles.
- Excellent analytical and problem-solving skills with a keen attention to detail.
- Strong verbal and written communication skills to explain technical concepts to non-technical stakeholders.
- Ability to work effectively both independently and as part of a team.

Qualifications:

- Bachelor's degree in Computer Science or Information Technology; an advanced degree is a plus.
- Minimum of 7 years of experience in data quality automation and Python development.
- Proven experience with Python libraries for data processing and analysis.

Citi is an equal opportunity and affirmative action employer, encouraging all qualified and interested applicants to apply for career opportunities. If you require a reasonable accommodation due to a disability, please review Accessibility at Citi.
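The rule-based checks at the heart of this kind of data quality automation can be sketched in a few lines of Pandas; the frame and the rules below are hypothetical, not Citi's actual checks:

```python
import pandas as pd

# Hypothetical frame seeded with three common data-quality defects.
df = pd.DataFrame({
    "order_id": [1, 2, 2, 4],
    "amount": [50.0, None, 30.0, -5.0],
})

# Rule-based checks: uniqueness, completeness, and value range.
issues = {
    "duplicate_ids": int(df["order_id"].duplicated().sum()),
    "missing_amounts": int(df["amount"].isna().sum()),
    "negative_amounts": int((df["amount"] < 0).sum()),
}
print(issues)  # {'duplicate_ids': 1, 'missing_amounts': 1, 'negative_amounts': 1}
```

In practice such checks would be wrapped in pytest cases so that CI fails whenever a load violates a rule.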

Posted 1 week ago

5.0 - 9.0 years

0 Lacs

indore, madhya pradesh

On-site

As a Senior Data Scientist with 5+ years of experience, you will play a crucial role in our team based in Indore/Pune. Your responsibilities will involve designing and implementing models, extracting insights from data, and interpreting complex data structures to facilitate business decision-making.

You should have a strong background in machine learning areas such as natural language processing, machine vision, and time series, along with expertise in model tuning, model validation, and supervised and unsupervised learning. Hands-on experience with model development, data preparation, and deployment of models for training and inference is essential. Proficiency in descriptive and inferential statistics, hypothesis testing, and data analysis and exploration are key skills for this role, and you should be adept at developing code that enables reproducible data analysis.

Familiarity with AWS services such as SageMaker, Lambda, Glue, Step Functions, and EC2 is expected, as is knowledge of data science code development and deployment IDEs such as Databricks, the Anaconda distribution, and similar tools. You should also have expertise in ML algorithms for time series, natural language processing, optimization, object detection, topic modeling, clustering, and regression analysis.

Your skills should include proficiency in Hive/Impala, Spark, Python, Pandas, Keras, scikit-learn, statsmodels, TensorFlow, and PyTorch, with at least 1 year of experience in end-to-end model deployment and production. Familiarity with model deployment on the Azure ML platform, Anaconda Enterprise, or AWS SageMaker is preferred. Basic knowledge of deep learning algorithms such as Masked CNN and YOLO, and of visualization and analytics/reporting tools such as Power BI, Tableau, and Alteryx, would be advantageous for this role.

Posted 1 week ago

Apply

3.0 - 8.0 years

0 Lacs

karnataka

On-site

As an ML Engineer/Data Scientist at Innova ESI in Bengaluru, you will be responsible for leveraging your strong background in computer science and algorithms to work on pattern recognition, neural networks, and statistical methods in order to deliver end-to-end data solutions. Your role will involve collaborating in cross-functional teams to drive digital transformation through innovative and sustainable IT solutions.

You should have a deep understanding of machine learning and NLP, along with commercial experience in building and deploying LLM solutions. With 4-8 years of relevant experience, you are expected to demonstrate exceptional coding ability, particularly in Python, and be familiar with ML tools and libraries like TensorFlow, PyTorch, pandas, and scikit-learn. Your proficiency in pattern recognition and neural networks, combined with knowledge of statistics and its application in data analysis, will be key when working with complex algorithms and data models. In addition, you should have a strong understanding of MLOps best practices and experience in deploying LLM applications to production at scale.

As a problem solver and self-starter, you enjoy tackling difficult problems and finding optimal solutions both independently and in a collaborative team environment. Your strong communication skills will enable you to effectively articulate complex technical concepts to non-technical stakeholders. This full-time hybrid role offers flexibility for some remote work while being primarily based in Bengaluru. If you meet the qualifications and are excited about driving digital transformation through innovative data solutions, we encourage you to share your resume with us at jaya.sharma@innovaesi.com.

Posted 1 week ago

Apply

3.0 - 5.0 years

7 - 14 Lacs

Hyderabad

Work from Office

Role & responsibilities:
- Strong knowledge of OOP and of creating custom Python packages for serverless applications.
- Strong knowledge of SQL querying.
- Hands-on experience with AWS services such as Lambda, EC2, EMR, S3, Athena, Batch, Textract, and Comprehend.
- Strong expertise in extracting text, tables, and logos from low-quality scanned multipage PDFs (80-150 dpi) and images.
- Good understanding of probability and statistics concepts, with the ability to find hidden patterns and relevant insights in data.
- Knowledge of applying state-of-the-art NLP models such as BERT, GPT-x, sciSpacy, bidirectional LSTM-CNNs, RNNs, and AWS Comprehend Medical for Clinical Named Entity Recognition (NER).
- Strong leadership skills.
- Deployment of custom-trained and prebuilt NER models using AWS SageMaker.
- Knowledge of setting up an AWS Textract pipeline for large-scale text processing using AWS SNS, AWS SQS, Lambda, and EC2.
- Intellectual curiosity to learn new things.
- ISMS responsibilities should be followed as per company policy.

Preferred candidate profile:
- 3+ years of hands-on experience with Python and data science tools like pandas, NumPy, SciPy, and matplotlib, with strong exposure to regular expressions.
- 3+ years of hands-on experience with machine learning algorithms such as SVM, CART, bagging and boosting algorithms, NLP-based ML algorithms, and text mining.
- Hands-on expertise in integrating multiple data sources into a streamlined pipeline.
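As a hedged illustration of the text-extraction side of this role, the snippet below post-processes noisy OCR output with regular expressions before any NER step; the sample text, field names, and patterns are hypothetical, not from the posting:

```python
# Illustrative sketch: cleaning noisy OCR output with regular expressions,
# a common step before feeding text to NER models. Sample text and patterns
# are made up for this example.
import re

ocr_text = "Patient  ID : MRN-00123\nAdm1ssion Date: 12/03/2021\nDx: Type 2 Diabetes"

def clean_ocr(text: str) -> str:
    # Collapse repeated spaces/tabs and fix a common digit/letter confusion.
    text = re.sub(r"[ \t]+", " ", text)
    return text.replace("Adm1ssion", "Admission")

def extract_fields(text: str) -> dict:
    # Pull a record number and a date with simple patterns.
    mrn = re.search(r"MRN-\d+", text)
    date = re.search(r"\b\d{2}/\d{2}/\d{4}\b", text)
    return {
        "mrn": mrn.group(0) if mrn else None,
        "date": date.group(0) if date else None,
    }

fields = extract_fields(clean_ocr(ocr_text))
print(fields)
```

In practice such rule-based extraction complements, rather than replaces, trained NER models on low-quality scans.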

Posted 1 week ago

Apply

5.0 - 8.0 years

15 - 20 Lacs

Noida

Work from Office

Technical Expertise (must have): Proficiency in Java. Hands-on experience with AWS AI services such as AWS Bedrock and SageMaker. Good to have: Python programming with experience in libraries like TensorFlow, PyTorch, NumPy, and pandas. Experience with DevOps practices, including CI/CD pipelines, system monitoring, and troubleshooting in a production environment. Familiarity with other platforms such as Google Gemini and Copilot technologies.

Soft Skills: Excellent communication and collaboration skills, with the ability to work effectively with stakeholders across business and technical teams. Strong problem-solving and analytical skills. Ability to work with teams in a dynamic, fast-paced environment.

Key Responsibilities: Design, develop, and implement AI and machine learning models using AWS AI services such as AWS Bedrock and SageMaker. Fine-tune and optimize AI models for business use cases. Implement Generative AI solutions using AWS Bedrock and Java. Write efficient, clean, and maintainable Java/Python code for AI applications. Develop and deploy RESTful APIs using frameworks like Flask or Django for model integration and consumption.

Experience: 5 to 8 years of experience in software development, with 3+ years in AI/ML or Generative AI projects. Demonstrated experience in deploying and managing AI applications in production environments.

Mandatory Competencies: Programming Language - Java - Core Java (Java 8+); Data Science and Machine Learning - Gen AI; Data Science and Machine Learning - Python; Behavioural - Communication and collaboration; Cloud - AWS - AWS Lambda, AWS EventBridge, AWS Fargate; Middleware - API Middleware - Microservices.

Posted 1 week ago

Apply

1.0 - 4.0 years

5 - 9 Lacs

Noida

Work from Office

We are looking for a skilled Python Developer with expertise in Django to join our team at NextGen Web Services. The ideal candidate will have 1 to 4 years of experience and be available to work remotely. Roles and Responsibilities: Design, develop, and test software applications using Python and Django. Collaborate with cross-functional teams to identify and prioritize project requirements. Develop high-quality, scalable, and efficient code. Troubleshoot and resolve technical issues efficiently. Participate in code reviews and contribute to improving overall code quality. Stay updated with industry trends and emerging technologies. Job Requirements: Proficiency in the Python programming language. Experience with the Django framework. Strong understanding of software development principles and methodologies. Excellent problem-solving skills and attention to detail. Ability to work collaboratively in a team environment. Strong communication and interpersonal skills. Additional Info: The company offers a dynamic and supportive work environment, with opportunities for professional growth and development.

Posted 1 week ago

Apply

3.0 - 5.0 years

10 - 14 Lacs

Noida

Work from Office

We are looking for a highly skilled AI & ML Engineer with 3 to 5 years of experience to join our team at Stanra Tech Solutions. The ideal candidate will have a strong background in artificial intelligence and machine learning, with excellent problem-solving skills. Roles and Responsibilities: Design and develop AI and ML models to solve complex problems. Collaborate with cross-functional teams to integrate AI and ML solutions into existing systems. Develop and maintain large-scale data pipelines and architectures. Conduct research and stay updated on the latest trends and technologies in AI and ML. Work closely with stakeholders to understand business requirements and develop tailored solutions. Ensure high-quality code and adherence to best practices. Job Requirements: Strong proficiency in programming languages such as Python, Java, or C++. Experience with deep learning frameworks like TensorFlow or PyTorch. Knowledge of computer vision, natural language processing, or reinforcement learning. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Ability to work in a fast-paced environment and meet deadlines.

Posted 1 week ago

Apply

3.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

You will be responsible for the deployment and maintenance of the group data science platform infrastructure, on which data science pipelines are deployed and scaled. To achieve this, you will collaborate with Data Scientists and Data Engineers from various business lines and the Global Technology Service infrastructure team (GTS).

Roles: Implement techniques and processes for supporting the development and scaling of data science pipelines. Industrialize inference, retraining, and monitoring of data science pipelines, ensuring their maintainability and compliance. Provide platform support to end users. Be attentive to the needs and requirements expressed by the end users. Anticipate needs and necessary developments for the platform. Work closely with Data Scientists, Data Engineers, and business stakeholders. Stay updated and demonstrate a keen interest in the MLOps domain.

Environment: Cloud/on-premise: Azure. Python, Kubernetes. Integrated vendor solutions: Dataiku, Snowflake. DB: PostgreSQL. Distributed computing: Spark. Big Data: Hadoop, S3/Scality, MapR. Data science: scikit-learn, Transformers, MLflow, Kedro. DevOps/CI-CD: JFrog, Harbor, GitHub Actions, Jenkins. Monitoring: Elasticsearch/Kibana, Grafana, Zabbix. Agile ceremonies: PI planning, Sprint, Sprint Review, Refinement, Retrospectives; ITIL framework.

Technical Skills: Python (FastAPI, SQLAlchemy, NumPy, pandas, scikit-learn, Transformers), Kubernetes, Docker, pytest, CI/CD (Jenkins, Ansible, GitHub Actions, Harbor, Docker).

Soft Skills: Client focus: demonstrate strong listening skills, understanding, and anticipation of user needs. Team spirit: organize collaboration and workshops to find the best solutions, and share expertise with colleagues. Innovation: propose innovative ideas, solutions, or strategies; think outside the box; prefer simplicity over complexity. Responsibility: take ownership, keep commitments, and respect deadlines.
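The "industrialize inference" responsibility above often starts with packaging preprocessing and model together, so the same artifact runs in training and serving. A minimal sketch with scikit-learn (dataset and names are illustrative; a real platform would push the artifact to a registry such as MLflow rather than serialising to a local blob):

```python
# Minimal sketch of packaging a model as a reusable pipeline, the kind of
# artifact an MLOps platform would version and deploy. Names are illustrative.
import pickle

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Bundling preprocessing with the estimator prevents training/serving skew:
# the same scaling is applied automatically at inference time.
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X, y)

# Serialise the whole pipeline; a deployment job would upload this artifact
# to a model registry instead of keeping it in memory.
blob = pickle.dumps(pipeline)
restored = pickle.loads(blob)
print(restored.predict(X[:3]))
```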

Posted 1 week ago

Apply

1.0 - 5.0 years

4 - 7 Lacs

Bengaluru

Work from Office

Developing image processing/computer vision applications to be deployed in vision inspection. More specifically: Write OpenCV applications in C++ for dimensional and surface inspection. Train DL models for surface inspection in cases where traditional OpenCV/C++ is not viable. Additional Responsibilities: Developing and optimizing OpenCV applications for embedded devices (Raspberry Pi). Assist in developing DL models for the in-house DL cloud platform. Educational Qualification: B.E./B.Tech in any specialization. Skills: Python, C++, OpenCV, Linux, PyTorch/TensorFlow. Who can apply: Available for full-time employment on a one-year contract. Can start immediately. Have relevant skills and interests. Perks: Certificate. Letter of recommendation. Job offer. Group Health Insurance. Incentives and bonus as per HR policy.
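The classical inspection work described above ultimately reduces to pixel-level operations such as thresholding. The toy sketch below uses NumPy in place of OpenCV's cv2 so it stays self-contained; the image and threshold are fabricated for illustration:

```python
# Toy sketch of the thresholding idea behind classical surface inspection:
# flag pixels that deviate strongly from the background intensity. Real code
# would use OpenCV (cv2) on camera frames; NumPy stands in here so the
# example is self-contained.
import numpy as np

# Synthetic 8-bit grayscale image: uniform surface with one bright defect.
image = np.full((100, 100), 120, dtype=np.uint8)
image[40:45, 60:70] = 250  # injected defect region

def find_defects(img: np.ndarray, thresh: int = 200) -> np.ndarray:
    # Binary mask of suspect pixels, analogous to a cv2 threshold step.
    return img > thresh

mask = find_defects(image)
defect_area = int(mask.sum())
print(f"defect pixels: {defect_area}")
```

In a production pipeline the mask would feed connected-component analysis to size and locate each defect.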

Posted 1 week ago

Apply

0.0 - 3.0 years

3 - 5 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Job Overview: We are looking for a curious, analytical, and technically skilled Data Science Engineer with 0-3 years of experience to join our growing data team. This role is ideal for recent graduates or junior professionals eager to work on real-world machine learning and data engineering challenges. You will help develop data-driven solutions, design models, and deploy scalable data pipelines that support business decisions and product innovation.

Key Responsibilities: Assist in designing and deploying machine learning models and predictive analytics solutions. Build and maintain data pipelines using tools such as Airflow, Spark, or pandas. Conduct data wrangling, cleansing, and feature engineering on large datasets. Collaborate with data scientists, analysts, and engineers to operationalize models in production. Develop dashboards, reports, or APIs to expose model insights to stakeholders. Continuously monitor model performance and data quality. Stay updated with new tools, technologies, and industry trends in AI and data science.

Required Skills & Qualifications: Bachelor's or Master's degree in Computer Science, Data Science, Statistics, Engineering, or a related field. 0-3 years of hands-on experience in data science, machine learning, or data engineering (internships and academic projects welcome). Proficiency in Python and data science libraries (e.g., pandas, NumPy, scikit-learn, matplotlib). Familiarity with SQL and working with relational databases. Understanding of fundamental machine learning concepts and algorithms. Knowledge of version control systems (e.g., Git). Strong problem-solving skills and a willingness to learn.

Nice-to-Have: Exposure to ML frameworks like TensorFlow, PyTorch, or XGBoost. Experience with cloud platforms (AWS, GCP, or Azure). Familiarity with MLOps tools like MLflow, Kubeflow, or SageMaker. Understanding of big data tools (e.g., Spark, Hadoop). Experience working on data science projects or contributions on GitHub/Kaggle.
What We Offer: Real-world experience with data science in production environments Mentorship and professional development support Access to modern tools, technologies, and cloud platforms Competitive salary with performance incentives A collaborative and learning-focused culture Flexible work options (remote/hybrid) How to Apply: Send your updated resume to careers@jasra.in
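The data wrangling and feature engineering responsibilities described above can be sketched with pandas; the toy dataset and derived feature are illustrative only:

```python
# Small sketch of data wrangling and feature engineering with pandas.
# The toy dataset, column names, and imputation choice are illustrative.
import pandas as pd

raw = pd.DataFrame({
    "user_id": [1, 2, 2, 3],
    "signup_date": ["2024-01-05", "2024-02-10", "2024-02-10", None],
    "spend": [100.0, 250.0, 250.0, None],
})

# Cleansing: drop the duplicated row and fill missing spend with the median.
df = raw.drop_duplicates().copy()
df["spend"] = df["spend"].fillna(df["spend"].median())

# Feature engineering: parse dates and derive a signup-month feature.
df["signup_date"] = pd.to_datetime(df["signup_date"])
df["signup_month"] = df["signup_date"].dt.month

print(df[["user_id", "spend", "signup_month"]])
```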

Posted 1 week ago

Apply

6.0 - 11.0 years

20 - 30 Lacs

Bhopal, Hyderabad, Pune

Hybrid

Hello, Greetings from NewVision Software! We are hiring on an immediate basis for the role of Senior/Lead Python Developer + AWS | NewVision Software | Pune, Hyderabad & Bhopal | Full time. Professionals who can join us immediately or within 15 days are preferred. Please find the job details and description below.

Office locations: NewVision Software, PUNE HQ OFFICE: 701 & 702, Pentagon Tower, P1, Magarpatta City, Hadapsar, Pune, Maharashtra - 411028, India. NewVision Software: The Hive Corporate Capital, Financial District, Nanakaramguda, Telangana - 500032. NewVision Software: IT Plaza, E-8, Bawadiya Kalan Main Rd, near Aura Mall, Gulmohar, Fortune Pride, Shahpura, Bhopal, Madhya Pradesh - 462039.

Senior Python and AWS Developer, Role Overview: We are looking for a skilled senior Python developer with a strong background in AWS cloud services to join our team. The ideal candidate will be responsible for designing, developing, and maintaining robust backend systems, ensuring high performance and responsiveness to requests from the front end.

Responsibilities: Develop, test, and maintain scalable web applications using Python and Django. Design and manage relational databases with PostgreSQL, including schema design and optimization. Build RESTful APIs and integrate with third-party services as needed. Work with AWS services including EC2, EKS, ECR, S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, and RDS. Collaborate with front-end developers to deliver seamless end-to-end solutions. Write clean, efficient, and well-documented code following best practices. Implement security and data protection measures in applications. Optimize application performance and troubleshoot issues as they arise. Participate in code reviews, testing, and continuous integration processes. Stay current with the latest trends and advancements in Python, Django, and database technologies. Mentor junior Python developers.
Requirements: 6+ years of professional experience in Python development. Strong proficiency with the Django web framework. Experience working with PostgreSQL, including complex queries and performance tuning. Familiarity with RESTful API design and integration. Strong understanding of OOP, SOLID principles, and design patterns. Strong knowledge of Python multithreading and multiprocessing. Experience with AWS services: S3, Glue, Step Functions, EventBridge Rules, Lambda, SQS, SNS, IAM, Secrets Manager, KMS, and RDS. Understanding of version control systems (Git). Knowledge of security best practices and application deployment. Basic understanding of microservices architecture. Strong problem-solving skills and attention to detail. Excellent communication and collaboration skills.

Nice to Have: Experience with Docker, Kubernetes, or other containerization tools. Familiarity with front-end technologies (React). Experience with CI/CD pipelines and DevOps practices. Experience with infrastructure-as-code tools like Terraform.

Education: Bachelor's degree in Computer Science, Engineering, or a related field (or equivalent experience).

Please share your resume and the following details with imran.basha@newvision-software.com: Total Experience; Relevant Experience (Python: yrs, AWS: yrs, PostgreSQL: yrs, REST API: yrs, Django: yrs); Current CTC; Expected CTC; Notice Period / Serving (LWD); Any Offer in hand (LPA); Current Location; Preferred Location; Education.
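On the multithreading knowledge the requirements call for: because of the GIL, Python threads suit I/O-bound work (such as waiting on AWS APIs), while CPU-bound work needs multiprocessing. A small hedged sketch of the I/O-bound case, with fabricated URLs and timings:

```python
# Sketch of I/O-bound concurrency with a thread pool: eight simulated
# network calls overlap instead of running sequentially. The URLs and
# sleep duration are made up for illustration.
import time
from concurrent.futures import ThreadPoolExecutor

def fetch(url: str) -> str:
    # Stand-in for a blocking network call (e.g. to an AWS service endpoint).
    time.sleep(0.05)
    return f"payload from {url}"

urls = [f"https://example.com/item/{i}" for i in range(8)]

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:
    results = list(pool.map(fetch, urls))
elapsed = time.perf_counter() - start

# Eight 50 ms waits overlap, so wall time stays well under the 0.4 s a
# sequential loop would take.
print(f"{len(results)} responses in {elapsed:.2f}s")
```

For CPU-bound work the same code shape applies with `ProcessPoolExecutor`, which sidesteps the GIL by using separate processes.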

Posted 1 week ago

Apply

3.0 - 5.0 years

9 - 13 Lacs

Gurugram

Work from Office

Job Summary Synechron is seeking a detail-oriented Data Analyst to leverage advanced data analysis, visualization, and insights to support our business objectives. The ideal candidate will have a strong background in creating interactive dashboards, performing complex data manipulations using SQL and Python, and automating workflows to drive efficiency. Familiarity with cloud platforms such as AWS is a plus, enabling optimization of data storage and processing solutions. This role will enable data-driven decision-making across teams, contributing to strategic growth and operational excellence. Software Requirements Required: PowerBI (or equivalent visualization tools like Streamlit, Dash) SQL (for data extraction, manipulation, and querying) Python (for scripting, automation, and advanced analysis) Data management tools compatible with cloud platforms (e.g., AWS S3, Redshift, or similar) Preferred: Cloud platform familiarity, especially AWS services related to data storage and processing Knowledge of other visualization platforms (Tableau, Looker) Familiarity with source control systems (e.g., Git) Overall Responsibilities Develop, redesign, and maintain interactive dashboards and visualization tools to provide actionable insights. Perform complex data analysis, transformations, and validation using SQL and Python. Automate data workflows, reporting, and visualizations to streamline processes. Collaborate with business teams to understand data needs and translate them into effective visual and analytical solutions. Support data extraction, cleaning, and validation from various sources, ensuring data accuracy. Maintain and enhance understanding of cloud environments, especially AWS, to optimize data storage, processing pipelines, and scalability. Document technical procedures and contribute to best practices for data management and reporting. Performance Outcomes: Timely, accurate, and insightful dashboards and reports. Increased automation reducing manual effort. 
Clear communication of insights and data-driven recommendations to stakeholders. Technical Skills (By Category) Programming Languages: Essential: SQL, Python Preferred: R, additional scripting languages Databases/Data Management: Essential: Relational databases (SQL Server, MySQL, Oracle) Preferred: NoSQL databases like MongoDB, cloud data warehouses (AWS Redshift, Snowflake) Cloud Technologies: Essential: Basic understanding of AWS cloud services (S3, EC2, RDS) Preferred: Experience with cloud-native data solutions and deployment Frameworks and Libraries: Python: Pandas, NumPy, Matplotlib, Seaborn, Plotly, Streamlit, Dash Visualization: PowerBI, Tableau (preferred) Development Tools and Methodologies: Version control: Git Automation tools for workflows and reporting Familiarity with Agile methodologies Security Protocols: Awareness of data security best practices and compliance standards in cloud environments Experience Requirements 3-5 years of experience in data analysis, visualization, or related data roles. Proven ability to deliver insightful dashboards, reports, and analysis. Experience working across teams and communicating complex insights clearly. Knowledge of cloud environments like AWS or other cloud providers is desirable. Experience in a business environment, not necessarily as a full-time developer, but as an analytical influencer. Day-to-Day Activities Collaborate with stakeholders to gather requirements and define data visualization strategies. Design and maintain dashboards using PowerBI, Streamlit, Dash, or similar tools. Extract, transform, and analyze data using SQL and Python scripts. Automate recurring workflows and report generation to improve operational efficiencies. Troubleshoot data issues and derive insights to support decision-making. Monitor and optimize cloud data storage and processing pipelines. Present findings to business units, translating technical outputs into actionable recommendations. 
Qualifications: Bachelor's degree in Computer Science, Data Science, Statistics, or a related field; a Master's degree is a plus. Relevant certifications (e.g., PowerBI, AWS Data Analytics) are advantageous. Demonstrated experience with data visualization and scripting tools. Continuous learning mindset to stay updated on new data analysis trends and cloud innovations. Professional Competencies: Strong analytical and problem-solving skills. Effective communication, with the ability to explain complex insights clearly. Collaborative team player with stakeholder management skills. Adaptability to rapidly changing data or project environments. Innovative mindset to suggest and implement data-driven solutions. Organized, self-motivated, and capable of managing multiple priorities efficiently.
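The SQL-and-Python analysis work described above frequently boils down to SQL-style aggregation in pandas before a PowerBI or Streamlit dashboard picks the result up. A hedged sketch with made-up data and column names:

```python
# Illustrative sketch of SQL-style aggregation in pandas, the kind of
# transform that feeds a dashboard. Data and column names are fabricated.
import pandas as pd

sales = pd.DataFrame({
    "region": ["North", "North", "South", "South", "South"],
    "revenue": [120, 80, 200, 150, 50],
})

# Equivalent to: SELECT region, SUM(revenue), COUNT(*) FROM sales GROUP BY region
summary = (
    sales.groupby("region", as_index=False)
         .agg(total_revenue=("revenue", "sum"), orders=("revenue", "count"))
         .sort_values("total_revenue", ascending=False)
)
print(summary)
```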

Posted 1 week ago

Apply

6.0 - 9.0 years

15 - 25 Lacs

Pune, Chennai, Bengaluru

Hybrid

Hi Everyone,

Role: Data Scientist
Experience: 6-9 years
Work Mode: Hybrid
Work Location: Chennai/Bangalore/Pune
Notice Period: Immediate to 30 days

Skills and Experience: 5+ years of data science and ML exposure.

Key Accountabilities & Responsibilities: Support the Data Science team with the development of advanced analytics/machine learning/artificial intelligence initiatives. Analyze large and complex datasets to uncover trends and insights. Support the development of predictive models and machine learning workflows. Perform exploratory data analysis to guide product and business decisions. Collaborate with cross-functional teams, including product, marketing, and engineering. Assist with the design and maintenance of data pipelines. Clearly document and communicate analytical findings to technical and non-technical stakeholders.

Basic Qualifications: Qualification in Data Science, Statistics, Computer Science, Mathematics, or a related field. Proficiency in Python and key data science libraries (e.g., pandas, NumPy, scikit-learn). Operational understanding of machine learning principles and statistical modeling. Experience with SQL for data querying. Strong communication skills and a collaborative mindset.

Preferred Qualifications: Exposure to cloud platforms such as AWS, GCP, or Azure. Familiarity with data visualization tools like Tableau, Power BI, or matplotlib. Participation in personal data science projects or online competitions (e.g., Kaggle). Understanding of version control systems like Git.

Kindly share the following details: Updated CV, Relevant Skills, Total Experience, Current Company, Current CTC, Expected CTC, Notice Period, Current Location, Preferred Location.

Posted 1 week ago

Apply

3.0 - 6.0 years

10 - 20 Lacs

Gurugram

Hybrid

Role & responsibilities: Highly focused individual with a self-driven attitude. Problem solving and logical thinking to automate and improve internal processes. Using tools such as SQL and Python to manage the requirements of different data asset projects. Ability to diligently engage in activities like data cleaning, retrieval, manipulation, analytics, and reporting. Using data science and statistical techniques to build machine learning models and work with textual data. Keeping up-to-date knowledge of the industry and related markets. Ability to multitask, prioritize, and manage time efficiently. Understanding the needs of the hiring organization or client in order to target solutions to their benefit. Advanced speaking and writing skills for effective communication. Ability to work in cross-functional teams, demonstrating a high level of commitment and coordination. Attention to detail and commitment to accuracy in deliverables. A demonstrated sense of ownership towards assigned tasks. Ability to keep sensitive business information confidential. Contributing positively and extensively towards building the organization's reputation, brand, and operational excellence.

Preferred candidate profile: 3-6 years of relevant experience in data science. Advanced knowledge of statistics and the basics of machine learning. Experienced in dealing with textual data and using natural language processing techniques. Ability to conduct analysis to extract actionable insights. Technical skills in Python (NumPy, pandas, NLTK, Transformers, spaCy), SQL, and other programming languages for dealing with large datasets. Experienced in data cleaning, manipulation, feature engineering, and model building. Experienced in the end-to-end development of a data science project. Strong interpersonal skills and extremely resourceful. Proven ability to complete assigned tasks according to the outlined scope and timeline. Good language, communication, and writing skills in English.
Expertise in using tools like MS Office: PowerPoint, Excel, and Word. Graduate or post-graduate from a reputed college or university.

Posted 1 week ago

Apply