1.0 - 3.0 years
3 - 6 Lacs
Bengaluru
Remote
Responsibilities:
- Gather data from various sources (databases, spreadsheets, APIs, etc.).
- Identify, clean, and transform data to ensure accuracy, consistency, and integrity.
- Develop and maintain data pipelines and processes for efficient data handling.
- Conduct exploratory data analysis to identify trends, patterns, correlations, and anomalies.
- Apply statistical techniques and data visualization tools to analyze datasets.
- Interpret data and provide meaningful insights and recommendations.
- Develop and maintain reports, dashboards, and other data visualizations to communicate findings effectively.
- Work closely with stakeholders from different departments to understand their data needs and business questions.
- Present data findings and insights clearly and concisely to both technical and non-technical audiences.
- Participate in discussions and contribute to data-driven decision-making processes.
- Document data sources, methodologies, and analysis processes.
- Utilize data analysis tools and software such as Excel, SQL, and data visualization platforms (e.g., Tableau, Power BI).
- Learn and adapt to new data analysis tools and technologies as needed.
- May involve basic scripting or programming for data manipulation (e.g., Python).
- Identify opportunities to improve data collection, analysis, and reporting processes.
- Stay updated on the latest trends and best practices in data analysis.
- Contribute to the development of data governance and quality standards.

Qualifications:
- Bachelor's degree in a quantitative field such as Statistics, Mathematics, Economics, Computer Science, or a related discipline.
- 1-3 years of professional experience in a data analysis role.
- Strong understanding of statistical concepts and data analysis techniques.
- Proficiency in SQL for querying and manipulating data from databases.
- Excellent skills in Microsoft Excel, including advanced formulas and data manipulation techniques.
- Experience with at least one data visualization tool such as Tableau, Power BI, or similar.
- Ability to interpret data, identify patterns, and draw meaningful conclusions.
- Strong analytical and problem-solving skills.
- Excellent communication and presentation skills, with the ability to explain technical findings to non-technical audiences.
- Strong attention to detail and a commitment to data accuracy.
- Ability to work independently and as part of a team.

Preferred Skills:
- Experience with programming languages such as Python (especially libraries like Pandas, NumPy, Matplotlib, Seaborn).
- Familiarity with cloud-based data platforms (e.g., AWS, Azure, GCP).
- Experience with data warehousing concepts.
- Knowledge of statistical software packages (e.g., R, SPSS).
- Experience with different data modeling techniques.
- Exposure to machine learning concepts.
- Experience working with specific industry data (e.g., marketing, sales, finance).
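The SQL proficiency this role calls for is mostly grouping and aggregation. A minimal sketch using Python's built-in sqlite3 module, with a hypothetical `orders` table and made-up figures:

```python
import sqlite3

# Hypothetical sample data standing in for a real orders table.
rows = [("North", 120.0), ("North", 80.0), ("South", 200.0)]

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (region TEXT, amount REAL)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", rows)

# Aggregate revenue per region: the kind of KPI query described above.
result = conn.execute(
    "SELECT region, SUM(amount) FROM orders GROUP BY region ORDER BY region"
).fetchall()
print(result)  # [('North', 200.0), ('South', 200.0)]
```

The same `GROUP BY` pattern carries over unchanged to production databases queried from Tableau or Power BI.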
Posted 3 weeks ago
12.0 - 17.0 years
15 - 20 Lacs
Pune
Hybrid
Here is how, through this exciting role, YOU will contribute to BMC's and your own success:
- Play a vital role in project design to ensure scalability, reliability, and performance are met.
- Design and develop new features, and maintain existing features by adding improvements and fixing defects in complex areas (using Java).
- Design and maintain robust AI/ML pipelines using Python and industry-standard ML frameworks.
- Collaborate closely with AI researchers and data scientists to implement, test, and deploy machine learning models in production.
- Leverage prompt engineering techniques and implement LLM optimization strategies to enhance response quality and performance.
- Assist in troubleshooting complex technical problems in development and production.
- Implement methodologies, processes, and tools.
- Initiate projects and ideas to improve the team's results.
- On-board and mentor new employees.

To ensure you're set up for success, you will bring the following skillset and experience:
- Backend Development: FastAPI, RESTful APIs, Python
- Cloud Infrastructure: AWS, EKS, Docker, Kubernetes
- AI/ML Frameworks: LangChain, Scikit-learn, Bedrock, Hugging Face
- ML Pipelines: Python, Pandas, NumPy, joblib
- DevOps & CI/CD: Git, Terraform (optional), Helm, GitHub Actions
- LLM Expertise: Prompt engineering, RAG (Retrieval Augmented Generation), vector databases (e.g., FAISS, Pinecone)
- You have 12+ years of experience in Java backend development.
- You have experience as a Backend Tech Lead.
- You have experience with Spring, Swagger, and REST APIs.
- You have worked with Spring Boot, Docker, and Kubernetes.
- You are a self-learner who's passionate about problem solving and technology.
- You are a team player with good communication skills in English (verbal and written).

Whilst these are nice to have, our team can help you develop the following skills:
- Public Cloud (AWS, Azure, GCP)
- Python, Node.js, C/C++
- Automation frameworks such as Robot Framework
Posted 3 weeks ago
15.0 - 17.0 years
29 - 34 Lacs
Chennai
Work from Office
Job Summary
We are seeking an AI Leader to build a Guardrail Platform and drive the design, deployment, and governance of AI guardrails that ensure ethical, responsible, and compliant AI operations. This role involves collaborating with cross-functional teams to implement AI fairness, explainability, bias mitigation, security, and regulatory compliance frameworks across AI/ML pipelines.

Roles & Responsibilities

AI Guardrail Strategy & Implementation
- Define and implement AI guardrails to ensure ethical AI development, risk mitigation, and compliance.
- Establish automated monitoring for AI fairness, bias detection, and explainability.
- Lead the operationalization of Responsible AI (RAI) principles across the organization.

AI Risk & Compliance Management
- Align AI models with regulatory standards (e.g., GDPR, AI Act, CCPA, NIST AI RMF).
- Develop governance frameworks for model validation, auditing, and risk assessment.
- Collaborate with legal, compliance, and security teams to ensure AI transparency.

AI Model Security & Reliability
- Implement guardrails against adversarial attacks, data poisoning, and model drift.
- Establish secure AI deployment standards to prevent unauthorized AI model access or misuse.
- Establish DevSecOps pipelines and teams to integrate AI security best practices.

Operationalization & AI Governance
- Define AI monitoring KPIs for continuous risk assessment and compliance tracking.
- Develop automated pipelines to flag high-risk AI behaviors and decision anomalies.
- Foster a culture of explainable AI (XAI) and transparency for AI-driven decision-making.

Cross-functional Leadership & Innovation
- Partner with product, legal, and engineering teams to integrate AI guardrails into MLOps workflows.
- Stay ahead of AI regulatory trends, industry best practices, and emerging risks.
Posted 3 weeks ago
4.0 - 5.0 years
10 - 14 Lacs
Bengaluru
Work from Office
About the Role
- We are seeking a highly skilled and experienced Senior Data Scientist to join our data science team.
- As a Senior Data Scientist, you will play a critical role in driving data-driven decision making across the organization by developing and implementing advanced analytical solutions.
- You will leverage your expertise in data science, machine learning, and statistical analysis to uncover insights, build predictive models, and solve complex business challenges.

Key Responsibilities
- Develop and implement statistical and machine learning models (e.g., regression, classification, clustering, time series analysis) to address business problems.
- Analyze large and complex datasets to identify trends, patterns, and anomalies.
- Develop predictive models for forecasting, churn prediction, customer segmentation, and other business outcomes.
- Conduct A/B testing and other experiments to optimize business decisions.
- Communicate data insights effectively through visualizations, dashboards, and presentations.
- Develop and maintain interactive data dashboards and reports.
- Present findings and recommendations to stakeholders in a clear and concise manner.
- Work with data engineers to design and implement data pipelines and data warehousing solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Develop and maintain data pipelines for data ingestion, transformation, and loading.
- Stay up-to-date with the latest advancements in data science, machine learning, and artificial intelligence.
- Research and evaluate new technologies and tools to improve data analysis and modeling capabilities.
- Explore and implement new data science techniques and methodologies.
- Collaborate effectively with data engineers, business analysts, product managers, and other stakeholders.
- Communicate technical information clearly and concisely to both technical and non-technical audiences.

Qualifications (Essential)
- 4+ years of experience as a Data Scientist or in a related data science role.
- Strong proficiency in statistical analysis, machine learning algorithms, and data mining techniques.
- Experience with programming languages like Python (with libraries like scikit-learn, pandas, NumPy) or R.
- Experience with data visualization tools (e.g., Tableau, Power BI).
- Experience with data warehousing and data lake technologies.
- Excellent analytical, problem-solving, and communication skills.
- Master's degree in Statistics, Mathematics, Computer Science, or a related field.
Posted 3 weeks ago
4.0 - 5.0 years
7 - 11 Lacs
Bengaluru
Work from Office
As a Banking Data Analyst, you will play a critical role in our clients' data-driven decision-making processes. You will work closely with marketing teams, product managers, and other stakeholders to understand their data needs and provide relevant analyses. Your expertise in SQL, marketing analytics, and banking domain knowledge will be essential to your success.

Responsibilities:
- Perform in-depth data analysis using SQL to extract and analyze banking data.
- Develop and maintain reports and dashboards to track key performance indicators (KPIs).
- Identify trends and patterns in banking data to provide actionable insights.
- Analyze marketing campaign performance and provide recommendations for optimization.
- Segment customer data to identify target audiences for marketing initiatives.
- Develop and implement customer lifetime value (CLTV) models.
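The CLTV modeling mentioned above can take many forms; a minimal sketch of the simplest version, with a hypothetical function name and entirely made-up numbers, multiplies average order value, purchase frequency, and expected retention, then applies a margin:

```python
def simple_cltv(avg_order_value, purchases_per_year, retention_years, margin=0.25):
    """Naive customer lifetime value: margin applied to expected lifetime revenue."""
    return avg_order_value * purchases_per_year * retention_years * margin

# Illustrative inputs only; real models would segment customers and
# estimate retention from historical banking data.
cltv = simple_cltv(avg_order_value=2500.0, purchases_per_year=4, retention_years=5)
print(cltv)  # 12500.0
```

Production CLTV models typically replace the fixed retention assumption with survival or cohort analysis, but the decomposition above is the usual starting point.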
Posted 3 weeks ago
5.0 - 8.0 years
6 - 10 Lacs
Bengaluru
Work from Office
As a Python Developer, you will play a critical role in our software development and data engineering initiatives. You will work closely with data engineers, architects, and other developers to build and maintain our applications and data pipelines. Your expertise in Python development, API design, and cloud technologies will be essential to your success.

Responsibilities:
- Design, develop, and maintain applications using the latest Python frameworks and technologies (Django, Flask, FastAPI).
- Utilize Python libraries and tools (Pandas, NumPy, SQLAlchemy) for data manipulation and analysis.
- Develop and maintain RESTful APIs, ensuring security, authentication, and authorization (OAuth, JWT).
- Deploy, manage, and scale applications on AWS services (EC2, S3, RDS, Lambda).
- Utilize infrastructure-as-code tools (Terraform, CloudFormation) for infrastructure management (good to have).
- Design and develop database solutions using PL/SQL (packages, functions, ref cursors).
- Implement data normalization and Oracle performance optimization techniques.
- Design and develop data warehouse solutions, including data marts and ODS concepts.
- Implement low-level design of warehouse solutions.
- Work with Kubernetes for container orchestration, deploying, managing, and scaling applications on Kubernetes clusters.
- Utilize the SnapLogic cloud-native integration platform for designing and implementing integration pipelines.

Required Skills:
- Expertise in Python frameworks (Django, Flask, FastAPI).
- Proficiency in Python libraries (Pandas, NumPy, SQLAlchemy).
- Strong experience in designing, developing, and maintaining RESTful APIs.
- Familiarity with API security, authentication, and authorization mechanisms (OAuth, JWT).
- Good hands-on knowledge of PL/SQL (packages, functions, ref cursors).
- Knowledge of data normalization and Oracle performance optimization techniques.
- Experience in development and low-level design of warehouse solutions.
- Familiarity with data warehouse, data mart, and ODS concepts.
- Proficiency in AWS services (EC2, S3, RDS, Lambda).

Good to Have Skills:
- Kubernetes: hands-on experience with Kubernetes for container orchestration.
- Infrastructure as code: experience with tools such as Terraform and CloudFormation.
- Integration platforms: experience with the SnapLogic cloud-native integration platform.

Experience: 5 to 8 years of experience as a Python Developer.
Location: Bangalore or Gurgaon
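The API security items (OAuth, JWT) rest on one core idea: a server-held secret signs a payload so tampering is detectable. A stdlib-only sketch of that idea (this is not a spec-compliant JWT; real services would use a maintained library such as PyJWT, and the secret shown is a hypothetical placeholder):

```python
import base64
import hashlib
import hmac
import json

SECRET = b"demo-secret"  # hypothetical; real services load keys from secure config

def sign(payload: dict) -> str:
    """Encode a payload and append an HMAC-SHA256 signature, JWT-style."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def verify(token: str) -> bool:
    """Recompute the signature and compare in constant time."""
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = sign({"user": "alice"})
print(verify(token))        # True: untouched token passes
print(verify(token + "x"))  # False: any tampering breaks the signature
```

Real JWTs add a header, expiry claims, and standardized base64url encoding, but the sign-then-verify flow is the same.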
Posted 3 weeks ago
2.0 - 7.0 years
4 - 9 Lacs
Chennai
Work from Office
Greetings from Newt Global. Good day!

Job Title: Senior Python Developer
Experience: 2+ years
Location: Chennai
Work location: Chennai (5 days WFO)
Domain: Product development

Key Responsibilities:
- Strong understanding of Python frameworks (Flask) and libraries (e.g., Pandas, NumPy, Scikit-learn).
- Proficiency in SQL and experience with databases such as PostgreSQL or Oracle (or other SQL databases).
- Experience with cloud services (AWS, Azure, or GCP) and containerization (Docker, Kubernetes).

Added Advantage:
- Demonstrated problem solving (algorithms, design patterns, hackathons, etc.).

Good to Have:
- Machine learning, AI, or data science exposure (Scikit-learn, TensorFlow, PyTorch).
- Product development experience is preferred.

Qualifications:
- 2+ years of professional experience in Python backend development.
- Experience with RESTful APIs and microservices architecture.
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.

Regards,
Elakkiya M
Sr Technical Recruiter | Newt Global
Email: elakkiyam@newtglobalcorp.com
Join my professional network by clicking here: https://www.linkedin.com/in/lucky157/
Posted 3 weeks ago
6.0 - 11.0 years
4 - 8 Lacs
Kolkata
Work from Office
Must have knowledge of Azure Data Lake, Azure Functions, Azure Databricks, Azure Data Factory, and PostgreSQL. Working knowledge of Azure DevOps and Git flow would be an added advantage.

(OR) SET 2: Must have working knowledge of AWS Kinesis, AWS EMR, AWS Glue, AWS RDS, AWS Athena, and AWS Redshift.

- Should have demonstrable knowledge and expertise in working with time-series data.
- Working knowledge of delivering data engineering / data science projects in Industry 4.0 is an added advantage.
- Should have knowledge of Palantir.
- Strong problem-solving skills with an emphasis on sustainable and reusable development.
- Experience using statistical computing languages to manipulate data and draw insights from large data sets: Python/PySpark, Pandas, NumPy, seaborn/matplotlib. Knowledge of Streamlit.io is a plus.
- Familiarity with Scala, GoLang, or Java would be an added advantage.
- Experience with big data tools: Hadoop, Spark, Kafka, etc.
- Experience with relational databases such as Microsoft SQL Server, MySQL, PostgreSQL, and Oracle, and NoSQL databases such as Hadoop, Cassandra, and MongoDB.
- Experience with data pipeline and workflow management tools: Azkaban, Luigi, Airflow, etc.
- Experience building and optimizing big data pipelines, architectures, and data sets.
- Strong analytic skills related to working with unstructured datasets.

Primary Skills:
- Provide innovative solutions to the data engineering problems faced in the project and solve them with technically superior code and skills.
- Where possible, document the process of choosing technology or usage of integration patterns, and help create a knowledge management artefact that can be used for other similar areas.
- Create and apply best practices in delivering the project with clean code.
- Work innovatively and proactively in fulfilling the project needs.

Additional Information:
- Reporting to: Director, Intelligent Insights and Data Strategy.
- Travel: Must be willing to be deployed at client locations anywhere in the world for long and short terms, and should be flexible to travel on shorter durations within India and abroad.
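A common first step when working with the time-series data this role emphasizes is smoothing with a trailing moving average. A pure-Python sketch of the idea behind pandas' `Series.rolling(window).mean()`, using made-up sensor readings:

```python
def rolling_mean(series, window):
    """Trailing moving average; emits None until the window fills."""
    out = []
    for i in range(len(series)):
        if i + 1 < window:
            out.append(None)  # not enough history yet
        else:
            chunk = series[i + 1 - window : i + 1]
            out.append(sum(chunk) / window)
    return out

readings = [10.0, 12.0, 11.0, 13.0, 15.0]  # hypothetical hourly sensor values
print(rolling_mean(readings, 3))  # [None, None, 11.0, 12.0, 13.0]
```

At scale the same computation would run in PySpark or pandas, but the windowing logic is identical.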
Posted 3 weeks ago
10.0 - 19.0 years
22 - 37 Lacs
Chennai, Bengaluru
Hybrid
GenAI Architect - Full-time, Remote
Shift: General Shift

Job Summary:
We are looking for a GenAI Architect with deep expertise in AI/GenAI systems, specifically in designing and building large-scale AI solutions. As a GenAI Architect, you will be responsible for leading the design and architecture of cutting-edge GenAI systems, with a focus on large language models (LLMs), AI frameworks, and advanced machine learning technologies. This is a senior role requiring strategic vision and hands-on expertise in AI solution architecture, software engineering, and MLOps practices.

Responsibilities:
- Architecting AI solutions: Design and architect scalable, efficient, and robust GenAI systems leveraging advanced machine learning frameworks, particularly large language models (LLMs).
- Leading AI framework implementation: Lead the design and implementation of complex GenAI models and frameworks using Python and other relevant technologies. Provide strategic oversight of AI/GenAI architecture to align with business needs and the tech stack.
- Model optimization and performance: Oversee the optimization of large-scale AI models for both training and inference, ensuring high performance and scalability across platforms.
- API architecture & development: Guide the development of high-performance APIs for AI services using FastAPI or other relevant frameworks, ensuring ease of integration with internal and external systems.
- MLOps integration: Drive the integration of MLOps pipelines for seamless deployment, monitoring, and scaling of machine learning models, ensuring automation, efficiency, and high availability.
- Leadership and mentorship: Provide technical leadership and mentorship to teams of engineers, guiding them in best practices for AI system architecture, software engineering, and AI model lifecycle management.
- Collaboration with cross-functional teams: Work closely with Data Science, Data Engineering, DevOps, and Product teams to ensure AI/GenAI solutions are architected to meet both technical and business requirements.
- CI/CD & automation: Design and implement automated pipelines for continuous integration and continuous deployment (CI/CD) of AI models, ensuring smooth and rapid updates to production environments.
- DevOps practices: Ensure the robust deployment and management of AI models in production environments using DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform.
- Security and compliance: Ensure AI systems adhere to best practices in security, privacy, and compliance, especially in handling sensitive data and large-scale model training.

Key Requirements:
- Experience: Hands-on experience in AI/GenAI system design, implementation, and scaling, with expertise in large language models (LLMs) and AI frameworks.
- Programming skills: Expertise in Python and a strong understanding of data manipulation libraries such as Pandas and NumPy.
- AI frameworks: Experience designing and managing large-scale AI models using cutting-edge frameworks such as TensorFlow and PyTorch.
- API development: Strong experience with FastAPI or other modern frameworks for building high-performance APIs.
- Software engineering: Proficient in software engineering principles, version control (Git), and maintaining code quality through automated testing and CI/CD pipelines.
- MLOps & ML engineering: Deep understanding of MLOps practices for model deployment, monitoring, and optimization; hands-on experience with ML engineering workflows.
- DevOps expertise: Strong experience with DevOps tools such as Docker, Kubernetes, Jenkins, and Terraform, with an emphasis on automating deployment and managing infrastructure at scale.
- Leadership & mentorship: Proven experience leading technical teams, making architectural decisions, and mentoring junior engineers.
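At the core of the RAG systems this role architects is similarity-based retrieval: embed the query, score it against stored document vectors, and pass the best matches to the LLM. A toy stdlib-only sketch of that retrieval step, using bag-of-words counts as a stand-in for real embeddings (production systems would use a neural embedding model and a vector database such as FAISS or Pinecone; all documents below are made up):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": word counts. Real RAG uses dense neural vectors.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

docs = [
    "kubernetes deployment scaling guide",
    "prompt engineering for large language models",
    "terraform infrastructure as code basics",
]
query = "how to write prompts for language models"
best = max(docs, key=lambda d: cosine(embed(query), embed(d)))
print(best)  # retrieves the prompt-engineering document
```

The retrieved text is then injected into the LLM prompt, which is what grounds the model's answer in the indexed corpus.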
Posted 3 weeks ago
2.0 - 6.0 years
5 - 15 Lacs
Chennai
Work from Office
Summary of the profile: Develops information systems by studying operations; designing, developing, and installing software solutions; supporting and developing the software team.

What you'll do here:
- Develop software solutions by studying information needs; conferring with users; studying systems flow, data usage, and work processes; investigating problem areas; and following the software development lifecycle.
- Implement and maintain Django-based applications.
- Use server-side logic to integrate user-facing elements.
- Develop software related to asset management.
- Write and implement software solutions that integrate different systems.
- Identify and suggest opportunities to improve efficiency and functionality.
- Coordinate the workflow between the graphic designer, the HTML coder, and yourself.
- Create self-contained, reusable, and testable modules and components.
- Continuously discover, evaluate, and implement new technologies to maximize development efficiency.
- Unit-test code for robustness, including edge cases, usability, and general reliability.
- Provide information by collecting, analyzing, and summarizing development and service issues.

What you will need to thrive:
- 2 to 6 years of experience in Django, Python, APIs, NumPy, Pandas, PostgreSQL, Git, AWS, Docker, REST, NoSQL, MySQL, and JavaScript.
- Experience working with Windows and Linux operating systems.
- Familiarity with Python modules such as SQLAlchemy, Sanic, Flask, Django, NumPy, Pandas, and visualization modules.
- Experience writing tests with PyTest.
- Strong knowledge of REST APIs.
- Experience working in an Agile environment, particularly with Scrum/Kanban processes.
- The ability to deliver required functionality working with UI as well as database engineers.
- Good teamworking skills are a must for this role.
- Good problem-solving abilities.
- Accountability and strong attention to detail.
- Working knowledge of Git and GitHub.

Education & Experience: Bachelor's degree in IT, Information Security, or a related field required; Master's degree desirable.

Core competencies: Communication, coding & team management.
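The "unit-test code for robustness, including edge cases" responsibility pairs naturally with the PyTest requirement. A minimal sketch: the `depreciate` helper is hypothetical (the kind of function an asset-management app might contain), and the `test_*` functions are what PyTest would discover and run:

```python
def depreciate(cost: float, years_used: int, lifespan: int) -> float:
    """Straight-line book value; clamps at zero once the asset is past end of life."""
    if lifespan <= 0:
        raise ValueError("lifespan must be positive")
    remaining = max(lifespan - years_used, 0)
    return cost * remaining / lifespan

# PyTest discovers functions named test_*; run with `pytest this_file.py`.
def test_midlife():
    assert depreciate(1000.0, 2, 5) == 600.0

def test_fully_depreciated_edge_case():
    # Edge case: asset used longer than its lifespan should floor at zero.
    assert depreciate(1000.0, 7, 5) == 0.0

test_midlife()
test_fully_depreciated_edge_case()
```

Naming the edge case in the test function makes failures self-documenting when the suite runs in CI.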
Posted 3 weeks ago
4.0 - 5.0 years
10 - 14 Lacs
Chennai, Delhi / NCR, Bengaluru
Work from Office
Python Programming: Proficiency in Python, working with libraries such as TensorFlow, PyTorch, Transformers, NLTK, Pandas, sklearn, and other related libraries used in NLP tasks and fine-tuning language models. Experience building comprehensive Python modules for NLP tasks such as tokenization, word embeddings, and classification. Evaluating and selecting open-source and/or commercial LLMs suitable for financial lending domains.
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
Posted 3 weeks ago
3.0 - 5.0 years
5 - 8 Lacs
Gurugram
Work from Office
This role is essential for advancing the strategic objectives of the insurance actuarial function by implementing analytics and transformational initiatives. The team is primarily responsible for innovation and research & development. Utilizing high-end analytical and technical skills, team members design models and methodologies tailored for actuarial reserving processes, and analyze financial data to generate actionable insights that aid decision-making. This position offers an exciting opportunity to participate in a range of analytics projects (descriptive, diagnostic, predictive, and prescriptive) while also focusing on Artificial Intelligence and cloud migration, thereby enhancing the organization's ability to adapt to emerging business needs.

DISCOVER your opportunity
What will your essential responsibilities include?
- Implement analytics projects and other transformational projects that directly impact the organization's strategic objectives.
- Run tools based in Python, R, and SQL, and execute ETL processes to facilitate business deliverables, with a focus on future development work to drive actionable insights and business impact.
- Write high-quality, effective code that can be easily scaled across platforms using Python/R programming.
- Deepen the understanding of the business to contribute to other analytics initiatives, including predictive modeling, and collaborate on data-driven projects with cross-functional teams.
- Learn in-house software platforms used for actuarial reserving and manage their use in the processes, contributing to the enhancement of analytical capabilities.
- Manage quarterly/monthly/yearly financial data for MI reporting and collaborate with stakeholders to provide valuable insights and support decision-making.
- Partner with global technology teams to deliver changes to our data and processes to meet strategic goals, actively participating in transformative projects, including the move to the cloud.
- Demonstrate proactive communication with business users, Development, Technology, Production Support, Delivery Teams, and Senior Management to drive collaborative problem-solving and knowledge sharing.
- Develop and maintain process documentation to ensure transparency and knowledge transfer within the team and across stakeholders.
- Support ad-hoc activities to address emerging business needs and contribute to the agility of the team.

You will report to the Lead, AFR.

SHARE your talent
We're looking for someone who has these abilities and skills:

Required Skills and Abilities:
- University graduate (B.E/B.Tech/CS/IT/BSc).
- Relevant years of work experience, preferably in the insurance industry, financial services, or consultancy.
- Good knowledge of statistics and mathematical functions.
- Good hands-on computer application skills, specifically Python programming, SQL, Power BI, and MS Excel.
- In-depth knowledge of Python software development, including frameworks, tools, and systems (NumPy, Pandas, Django, SciPy, PyTorch, etc.).

Desired Skills and Abilities:
- Good-to-have knowledge of R programming (dplyr) and QlikView.
- Excellent analytical, research, and problem-solving skills.
- Understanding of cloud principles with good exposure to the Microsoft Azure stack (Databricks, SQL DB, etc.).
- Understanding of AI fundamentals, including exposure to LLMs.
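The actuarial reserving work this role supports commonly centers on the chain-ladder method: estimate development factors from a claims triangle, then project immature accident years to ultimate. A pure-Python sketch with an entirely made-up cumulative-paid triangle (real reserving adds tail factors, diagnostics, and judgment):

```python
# Toy claims triangle: cumulative paid by accident year (rows) and
# development period (columns). All figures are illustrative only.
triangle = [
    [100.0, 150.0, 165.0],
    [110.0, 160.0],
    [120.0],
]

def dev_factor(triangle, col):
    """Volume-weighted link ratio between development columns col and col+1."""
    num = sum(row[col + 1] for row in triangle if len(row) > col + 1)
    den = sum(row[col] for row in triangle if len(row) > col + 1)
    return num / den

f01 = dev_factor(triangle, 0)  # (150 + 160) / (100 + 110)
f12 = dev_factor(triangle, 1)  # 165 / 150
# Project the greenest accident year to ultimate via the factor chain.
ultimate_latest = triangle[2][0] * f01 * f12
print(round(ultimate_latest, 2))
```

Vectorizing the same computation over NumPy/Pandas triangles is a typical first step when migrating spreadsheet-based reserving processes to code.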
Posted 3 weeks ago
2.0 - 5.0 years
4 - 7 Lacs
Mumbai
Work from Office
About the Role:
We are seeking a highly motivated and skilled Data Scientist with 2-5 years of industry experience in Machine Learning, Deep Learning, Natural Language Processing (NLP), and Generative AI (GenAI). The ideal candidate will have a strong foundation in Python programming and a passion for solving complex business problems using data. You will work on designing, developing, and deploying scalable AI solutions for real-world applications across domains. This is a hands-on role requiring technical expertise, experimentation, and collaboration across cross-functional teams.

Key Responsibilities:
- Perform Exploratory Data Analysis (EDA) to uncover patterns, insights, and anomalies.
- Design, build, and optimize ML and DL models using Python for diverse business use cases.
- Develop and apply NLP techniques for text analysis and language understanding.
- Build, fine-tune, and deploy Generative AI solutions (LLMs, VLMs, RAG) tailored to enterprise needs.
- Create and maintain end-to-end ML pipelines, including data preprocessing, feature engineering, training, evaluation, and deployment.
- Manage and monitor models in production using MLOps workflows (CI/CD, model tracking, versioning, etc.).
- Stay updated with recent developments in AI/ML and continuously explore new tools and frameworks.

Required Skills & Qualifications:
- Strong programming proficiency in Python is essential.
- Experience with Python libraries such as pandas, NumPy, scikit-learn, NLTK, Transformers (Hugging Face), and TensorFlow/PyTorch.
- Solid understanding of machine learning algorithms, statistical modeling, and deep learning concepts.
- Hands-on experience with Natural Language Processing and Generative AI techniques (e.g., LLMs, RAG, VLMs, Agents).
- Experience with EDA, data preprocessing, and feature engineering.
- Familiarity with MLOps tools such as MLflow, DVC, or similar.
- Experience with Docker and containerized deployment of ML models.
- Working knowledge of both SQL and NoSQL databases.
- Experience deploying ML models in production environments.
- Familiarity with version control tools (e.g., Git) and collaborative workflows.
- Exposure to cloud platforms such as AWS, GCP, or Azure is a plus.
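The data preprocessing step in the ML pipelines described above usually includes feature scaling. A minimal pure-Python sketch of min-max normalization, the same idea scikit-learn's `MinMaxScaler` applies per feature column (the `ages` column is a made-up example):

```python
def min_max_scale(values):
    """Rescale a numeric feature to the [0, 1] range."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0 for _ in values]  # constant feature carries no signal
    return [(v - lo) / (hi - lo) for v in values]

ages = [20, 35, 50]  # hypothetical feature column
print(min_max_scale(ages))  # [0.0, 0.5, 1.0]
```

In a real pipeline the min and max are fit on the training split only and reused at inference, to avoid leaking test-set statistics.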
Posted 3 weeks ago
2.0 - 6.0 years
4 - 9 Lacs
Chennai
Work from Office
Key Responsibilities:
- Strong understanding of Python frameworks (Flask) and libraries (e.g., Pandas, NumPy, Scikit-learn).
- Proficiency in SQL and experience with databases such as PostgreSQL or Oracle (or other SQL databases).
- Experience with cloud services (AWS, Azure, or GCP) and containerization (Docker, Kubernetes).
- Demonstrated problem solving (algorithms, design patterns, hackathons, etc.).

Good to Have:
- Machine learning, AI, or data science exposure (Scikit-learn, TensorFlow, PyTorch).
- Product development experience is preferred.

Qualifications:
- 3+ years of professional experience in Python backend development.
- Experience with RESTful APIs and microservices architecture.
- Experience with DevOps practices and tools (e.g., CI/CD pipelines).
- Experience with Agile development methodologies (e.g., Scrum, Kanban).
- Bachelor's degree in Computer Science, Computer Engineering, or a related field.
- Must have good communication skills.

Only candidates available for immediate joining or with a notice period of 15 days or less will be considered. Interested candidates can share their resume to logashrik@newtglobcorp.com.
Posted 3 weeks ago
3.0 - 6.0 years
15 - 20 Lacs
Pune
Hybrid
Job Title: Python Developer
Location: Pune (Hybrid - 3 days a week onsite)

Required Skills:
- Strong Python programming skills
- Experience with REST API development and integration
- Hands-on experience with Splunk (search, dashboards, alerts)
- Knowledge of Windows plugins and scripting
Posted 3 weeks ago
8.0 - 13.0 years
15 - 30 Lacs
Hyderabad
Hybrid
Role: Python Developer
Location: Hyderabad (Hybrid)

Job Description:
We are seeking a skilled Python Developer to join our team and contribute to designing, developing, and maintaining high-performance applications. The ideal candidate should have strong experience in Python, along with expertise in web frameworks such as Flask or Django, database management, and API development.

Required Skills & Qualifications:
- Strong proficiency in Python (3.x) and knowledge of OOP principles.
- Experience with Flask or Django for web application development.
- Proficiency in working with databases (SQL and NoSQL).
- Hands-on experience with RESTful API development and integration.
- Familiarity with version control tools such as Git, GitHub, or GitLab.
- Experience with cloud platforms (AWS, Azure, or Google Cloud) is a plus.
- Knowledge of containerization tools such as Docker and Kubernetes is an advantage.
- Strong debugging, testing, and problem-solving skills.
- Experience with CI/CD pipelines is a plus.
- Ability to work independently and collaboratively in an agile environment.

Preferred Qualifications:
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
- Experience with asynchronous programming (Celery, RabbitMQ) is a plus.
- Knowledge of data processing, analytics, or AI/ML frameworks is beneficial.
Posted 3 weeks ago
2.0 - 7.0 years
2 - 6 Lacs
Gurugram
Work from Office
- Pandas / NumPy / FastAPI / Flask / SQL / xLwriter / Sklearn / Pyodbc / Pymongo / Selenium
- Exposure to web deployment using Flask and Docker over REST APIs
- Excel automation using Python, and front-end technologies such as HTML and CSS
- Good understanding of SQL programming and queries
Posted 3 weeks ago
10.0 - 15.0 years
25 - 37 Lacs
Pune
Work from Office
Looking for a Java Fullstack Developer for our client.
Posted 3 weeks ago
3.0 - 5.0 years
15 - 20 Lacs
Hyderabad
Hybrid
Job Requirements:
- B.Tech/BE/M.Tech/ME in Computer Science or equivalent from a reputed college/university.
- 3-5 years of engineering experience.
- Hands-on experience in frontend web application development.
- Hands-on experience with technologies like JavaScript, Vue.js/Angular/React, and Python.
- Hands-on experience working with REST APIs.
- Comfortable working with SQL Server and Postgres.
- Experience in C# is considered a plus but not mandatory.
- Basic understanding and/or experience of business process management and Agile methodology.
- Strong analytical and problem-solving skills; delivers accurate and reliable solutions.
- Ability to work independently, take initiative, make decisions, and complete tasks on time.
- Excellent interpersonal and communication skills; communicates ideas and feedback clearly.
- Ability to work under pressure and multi-task; demonstrates time management on assigned projects.
Posted 3 weeks ago
3.0 - 7.0 years
1 - 2 Lacs
Mumbai, Thane, Navi Mumbai
Work from Office
Key Responsibilities:
- Develop and maintain automated web scraping scripts using Python libraries such as BeautifulSoup, Scrapy, and Selenium.
- Optimize scraping pipelines for performance, scalability, and resource efficiency.
- Handle dynamic websites and CAPTCHA-solving, and implement IP rotation techniques for uninterrupted scraping.
- Process and clean raw data, ensuring accuracy and integrity in extracted datasets.
- Collaborate with cross-functional teams to understand data requirements and deliver actionable insights.
- Leverage APIs when web scraping is not feasible, managing authentication and request optimization.
- Document processes, pipelines, and troubleshooting steps for maintainable and reusable scraping solutions.
- Ensure compliance with legal and ethical web scraping practices, implementing security safeguards.
Requirements:
- Education: Bachelor's degree in Computer Science, Engineering, or a related field.
- Experience: 2+ years of Python development experience, with at least 1 year focused on web scraping.
Technical Skills:
- Proficiency in Python and libraries like BeautifulSoup, Scrapy, and Selenium.
- Experience with regular expressions (regex) for data parsing.
- Strong knowledge of HTTP protocols, cookies, headers, and user-agent rotation.
- Familiarity with databases (SQL and NoSQL) for storing scraped data.
- Hands-on experience with data manipulation libraries such as pandas and NumPy.
- Experience working with APIs and managing third-party integrations.
- Familiarity with version control systems like Git.
Bonus Skills:
- Knowledge of containerization tools like Docker.
- Experience with distributed scraping solutions and task queues (e.g., Celery, RabbitMQ).
- Basic understanding of data visualization tools.
Non-Technical Skills:
- Strong analytical and problem-solving skills.
- Excellent communication and documentation skills.
- Ability to work independently and collaboratively in a team environment.
Candidates available for face-to-face interviews are preferred.
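The core task in this role is pulling structured data out of HTML. As a minimal, standard-library-only sketch (a production scraper would more likely use the BeautifulSoup or Scrapy mentioned above, plus request headers and rate limiting), here is an example that extracts job titles from a listing snippet; the HTML structure and class names are invented for illustration:

```python
from html.parser import HTMLParser

# Hypothetical listing snippet; in practice this HTML would be fetched
# over HTTP with appropriate headers and user-agent rotation.
SAMPLE_HTML = """
<ul>
  <li class="job"><a href="/jobs/1">Python Developer</a></li>
  <li class="job"><a href="/jobs/2">Data Analyst</a></li>
</ul>
"""

class JobTitleParser(HTMLParser):
    """Collects the text of <a> tags nested inside <li class="job"> items."""
    def __init__(self):
        super().__init__()
        self.in_job_li = False   # currently inside a job <li>?
        self.in_link = False     # currently inside its <a>?
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "li" and ("class", "job") in attrs:
            self.in_job_li = True
        elif tag == "a" and self.in_job_li:
            self.in_link = True

    def handle_endtag(self, tag):
        if tag == "a":
            self.in_link = False
        elif tag == "li":
            self.in_job_li = False

    def handle_data(self, data):
        if self.in_link and data.strip():
            self.titles.append(data.strip())

parser = JobTitleParser()
parser.feed(SAMPLE_HTML)
print(parser.titles)  # ['Python Developer', 'Data Analyst']
```

The same event-driven pattern (track state on start tags, collect text in `handle_data`) scales to most static pages; dynamic, JavaScript-rendered sites are where Selenium comes in.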
Posted 3 weeks ago
5.0 - 9.0 years
25 - 35 Lacs
Hyderabad
Work from Office
- Pandas: Proficient in data manipulation and analysis using DataFrames and Series. Experience with data cleaning, merging, reshaping, and aggregation. Knowledge of handling missing data, filtering, sorting, and applying functions to datasets.
- REST APIs: Experience making GET and POST requests using libraries like requests. Ability to fetch, process, and handle JSON responses effectively. Knowledge of error handling, authentication (e.g., OAuth), and API rate limiting.
- Data structures in Python: In-depth knowledge of Python's built-in data structures: lists, tuples, dictionaries, sets, and arrays. Proficiency in implementing custom data structures using classes, methods, and functions.
- Regular expressions: Proficient in pattern matching, text searching, and data validation. Experience with reading from and writing to files (text, CSV, JSON).
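The pandas skills listed above (cleaning, missing-data handling, merging, aggregation) fit in a few lines. This is a toy sketch with invented column names and data, not code from the posting:

```python
import pandas as pd

# Toy order data with a missing value; names are illustrative only.
orders = pd.DataFrame({
    "order_id": [1, 2, 3, 4],
    "customer": ["A", "B", "A", "C"],
    "amount": [100.0, None, 250.0, 80.0],
})
customers = pd.DataFrame({
    "customer": ["A", "B", "C"],
    "city": ["Pune", "Mumbai", "Delhi"],
})

# Missing data: fill the absent amount with the column median (100.0 here).
orders["amount"] = orders["amount"].fillna(orders["amount"].median())

# Merge (SQL-style left join) and aggregate revenue per city.
merged = orders.merge(customers, on="customer", how="left")
per_city = merged.groupby("city")["amount"].sum()
print(per_city.to_dict())  # {'Delhi': 80.0, 'Mumbai': 100.0, 'Pune': 350.0}
```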
Posted 3 weeks ago
8.0 - 13.0 years
25 - 30 Lacs
Pune
Work from Office
About The Role
We are hiring a results-oriented Senior Machine Learning Engineer to join our growing team in Pune on a hybrid schedule. Reporting to the Director of Machine Learning, you will partner with Product and Engineering teams to both solve problems and identify new opportunities for the business. The ideal candidate will apply quantitative analysis, modeling, and data mining to help drive informed product decisions for PubMatic and get things done.
What You'll Do
- Perform deep-dive analysis to understand and optimize the key product KPIs.
- Apply statistics, modeling, and machine learning to improve the efficiency of systems and relevance algorithms across our business application products.
- Conduct data analysis to make product recommendations and design A/B experiments.
- Partner with Product and Engineering teams to solve problems and identify trends and opportunities.
- Collaborate with cross-functional stakeholders to understand their business needs; formulate and complete end-to-end analysis that includes data gathering, analysis, ongoing scaled deliverables, and presentations.
We'd Love for You to Have
- Five-plus years of hands-on experience designing machine learning models to solve business problems with statistical packages such as R, MATLAB, Python (NumPy, scikit-learn, pandas), or MLlib.
- Experience articulating product questions and using statistics to arrive at an answer.
- Experience scripting in SQL, extracting large data sets, and designing ETL flows.
- Work experience in an interdisciplinary/cross-functional field.
- Deep interest and aptitude in data, metrics, analysis, and trends, and applied knowledge of measurement, statistics, and program evaluation.
- Distinctive problem-solving skills and impeccable business judgment.
- Capability to translate analysis results into business recommendations.
- A bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university.
Additional Information
Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days in office and 2 days working remotely) that is intended to maximize collaboration, innovation, and productivity among teams and across functions.
Benefits: Our benefits package includes the best of what leading organizations provide, such as paternity/maternity leave, healthcare insurance, and broadband reimbursement. As well, when we're back in the office, we all benefit from a kitchen loaded with healthy snacks and drinks, catered lunches, and much more!
Diversity and Inclusion: PubMatic is proud to be an equal opportunity employer; we don't just value diversity, we promote and celebrate it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
About PubMatic
PubMatic is one of the world's leading scaled digital advertising platforms, offering more transparent advertising solutions to publishers, media buyers, commerce companies, and data owners, allowing them to harness the power and potential of the open internet to drive better business outcomes. Founded in 2006 with the vision that data-driven decisioning would be the future of digital advertising, we enable content creators to run a more profitable advertising business, which in turn allows them to invest back into the multi-screen and multi-format content that consumers demand.
Posted 3 weeks ago
5.0 - 9.0 years
7 - 11 Lacs
Pune
Work from Office
About The Role
We are hiring a results-oriented Principal Machine Learning Engineer to join our growing team in Pune on a hybrid schedule. Reporting to the Director of Machine Learning, you will partner with Product and Engineering teams to both solve problems and identify new opportunities for the business. The ideal candidate will apply quantitative analysis, modeling, and data mining to help drive informed product decisions for PubMatic and get things done.
What You'll Do
- Perform deep-dive analysis to understand and optimize the key product KPIs.
- Apply statistics, modeling, and machine learning to improve the efficiency of systems and relevance algorithms across our business application products.
- Conduct data analysis to make product recommendations and design A/B experiments.
- Partner with Product and Engineering teams to solve problems and identify trends and opportunities.
- Collaborate with cross-functional stakeholders to understand their business needs; formulate and complete end-to-end analysis that includes data gathering, analysis, ongoing scaled deliverables, and presentations.
We'd Love for You to Have
- Seven-plus years of hands-on experience designing machine learning models to solve business problems with statistical packages such as R, MATLAB, Python (NumPy, scikit-learn, pandas), or MLlib.
- Proven ability to inspire, mentor, and develop team members to deliver value consistently.
- Experience articulating product questions and using statistics to arrive at an answer.
- Experience scripting in SQL, extracting large data sets, and designing ETL flows.
- Work experience in an interdisciplinary/cross-functional field.
- Deep interest and aptitude in data, metrics, analysis, and trends, and applied knowledge of measurement, statistics, and program evaluation.
- Distinctive problem-solving skills and impeccable business judgment.
- Capability to translate analysis results into business recommendations.
- A bachelor's degree in engineering (CS/IT) or an equivalent degree from a well-known institute/university.
Additional Information
Return to Office: PubMatic employees throughout the globe have returned to our offices via a hybrid work schedule (3 days in office and 2 days working remotely) that is intended to maximize collaboration, innovation, and productivity among teams and across functions.
Benefits: Our benefits package includes the best of what leading organizations provide, such as paternity/maternity leave, healthcare insurance, and broadband reimbursement. As well, when we're back in the office, we all benefit from a kitchen loaded with healthy snacks and drinks, catered lunches, and much more!
Diversity and Inclusion: PubMatic is proud to be an equal opportunity employer; we don't just value diversity, we promote and celebrate it. We do not discriminate on the basis of race, religion, color, national origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
About PubMatic
PubMatic is one of the world's leading scaled digital advertising platforms, offering more transparent advertising solutions to publishers, media buyers, commerce companies, and data owners, allowing them to harness the power and potential of the open internet to drive better business outcomes. Founded in 2006 with the vision that data-driven decisioning would be the future of digital advertising, we enable content creators to run a more profitable advertising business, which in turn allows them to invest back into the multi-screen and multi-format content that consumers demand.
Posted 3 weeks ago
5.0 - 9.0 years
5 - 9 Lacs
Bengaluru
Work from Office
Data Engineer, strong in Python; Solution Architect.
- At least 8+ years of experience, ideally within a Data Engineer role.
- Demonstrated experience working with large and complex data sets, as well as experience analyzing volumes of data.
- Excellent experience working with Python, Pandas, Flask/FastAPI/Django APIs, middleware, schedulers, SQL, and databases.
- Prior experience with Python frameworks such as Django or Flask and strong knowledge of SQL (queries, joins, etc.).
- Good to have some experience in AWS/Azure.
- Capable of developing highly scalable RESTful APIs.
- Excellent team player who can also work well in an individual capacity.
- Detail-oriented with strong analytical skills; pays strong attention to detail and delivers work of a high standard.
- Highly goal-driven and works well in fast-paced environments.
Stack: Python, REST APIs, Redis cache, SQL (Postgres/MariaDB/ClickHouse), Kubernetes, Linux/Windows schedulers, shell scripting.
1. DataOps: advanced core Python development and data pipelining; data structures, Pandas, NumPy, scikit-learn, concurrency, design patterns.
2. DevOps: app deployment using CI/CD tools like Jenkins, JFrog, Docker, Kubernetes, OpenShift Container Platform.
3. Microservices & REST APIs: FastAPI, Flask, Tornado.
4. Cloud: how apps are built and deployed using the cloud.
5. Databases & SQL: Postgres, ClickHouse, MongoDB.
6. Caching & queuing: Pub/Sub (RabbitMQ), Redis.
7. Operating systems: Linux and Windows.
8. Monitoring and logging: Splunk.
Posted 3 weeks ago
The job market for pandas professionals in India is on the rise as more companies are recognizing the importance of data analysis and manipulation in making informed business decisions. Pandas, a popular Python library for data manipulation and analysis, is a valuable skill sought after by many organizations across various industries in India.
Here are 5 major cities in India actively hiring for pandas roles:
1. Bangalore
2. Mumbai
3. Delhi
4. Hyderabad
5. Pune
The average salary range for pandas professionals in India varies based on experience levels. Entry-level positions can expect a salary ranging from ₹4-6 lakhs per annum, while experienced professionals can earn upwards of ₹12-18 lakhs per annum.
Career progression in the pandas domain typically involves moving from roles such as Junior Data Analyst or Data Scientist to Senior Data Analyst, Data Scientist, and eventually to roles like Tech Lead or Data Science Manager.
In addition to pandas, professionals in this field are often expected to have knowledge or experience in the following areas: - Python programming - Data visualization tools like Matplotlib or Seaborn - Statistical analysis - Machine learning algorithms
Here are 25 interview questions for pandas roles:
- What is pandas in Python? (basic)
- Explain the difference between Series and DataFrame in pandas. (basic)
- How do you handle missing data in pandas? (basic)
- What are the different ways to create a DataFrame in pandas? (medium)
- Explain groupby() in pandas with an example. (medium)
- What is the purpose of pivot_table() in pandas? (medium)
- How do you merge two DataFrames in pandas? (medium)
- What is the significance of the inplace parameter in pandas functions? (medium)
- What are the advantages of using pandas over Excel for data analysis? (advanced)
- Explain the apply() function in pandas with an example. (advanced)
- How do you optimize performance in pandas operations for large datasets? (advanced)
- What is method chaining in pandas? (advanced)
- Explain the working of the cut() function in pandas. (medium)
- How do you handle duplicate values in a DataFrame using pandas? (medium)
- What is the purpose of the nunique() function in pandas? (medium)
- How can you handle time series data in pandas? (advanced)
- Explain the concept of multi-indexing in pandas. (advanced)
- How do you filter rows in a DataFrame based on a condition in pandas? (medium)
- What is the role of the read_csv() function in pandas? (basic)
- How can you export a DataFrame to a CSV file using pandas? (basic)
- What is the purpose of the describe() function in pandas? (basic)
- How do you handle categorical data in pandas? (medium)
- Explain the role of the loc and iloc functions in pandas. (medium)
- How do you perform text data analysis using pandas? (advanced)
- What is the significance of the to_datetime() function in pandas? (medium)
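Several of the basic questions above (Series vs. DataFrame, missing-data handling, groupby()) can be demonstrated in a short, runnable sketch; the data here is invented purely for illustration:

```python
import pandas as pd

# A Series is a 1-D labeled array; a DataFrame is a 2-D table
# whose columns are each a Series.
s = pd.Series([10, 20, 30])
df = pd.DataFrame({
    "dept": ["ops", "ops", "eng"],
    "hours": [8.0, None, 6.0],   # one missing value
})

# Missing data: fillna() replaces NaN with a default (dropna() would drop it).
df["hours"] = df["hours"].fillna(0.0)

# groupby(): split rows by a key column, then aggregate each group.
totals = df.groupby("dept")["hours"].sum()
print(totals.to_dict())  # {'eng': 6.0, 'ops': 8.0}
```

Being able to narrate each step of a snippet like this (what fillna() chose, why groupby() returns a Series indexed by the key) is usually what interviewers are probing for with the basic and medium questions.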
As you explore pandas jobs in India, remember to enhance your skills, stay updated with industry trends, and practice answering interview questions to increase your chances of securing a rewarding career in data analysis. Best of luck on your job search journey!