1517 Data Processing Jobs - Page 50

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

2.0 - 4.0 years

4 - 7 Lacs

Gurugram

Work from Office

Job Summary: Are you inquisitive, creative, hardworking, and driven? If so, we are looking for a Senior Research Executive to join a team passionate about bringing compelling insights to our partners and ensuring long-term revenue growth for Kantar Worldpanel India. This is an exciting role that offers a blended development focus: on-the-job experience, exposure and exchange of knowledge with others, and more formal education as you take the next step in your career.

WHAT YOU'D DO:
- Dig deep into the data, using the enquiry software, to extract incisive and business-oriented analysis
- Prepare and deliver presentations; build category, cross-category and industry knowledge from project to project, making sure previous learning is built upon and shared fully
- Initiate and build client relationships, handle daily client requests, and anticipate client demands to proactively find solutions
- Coordinate relevant training activities for clients to give them a deep understanding of consumer behaviour
- Assist line managers in identifying further opportunities within the client portfolio to generate business revenue streams
- Ensure regular deliverables are dispatched on time and accurately by line reports or support departments
- Write research proposals, develop questionnaires, interact with the field, and brief Analytics on data processing requirements

WHAT YOU'D BRING:
- 2-4 years of working experience in research, marketing or advertising, ideally in the FMCG industry
- A curious mentality, unafraid to have a point of view
- Strong numerical and analytical skills applied in a commercial context, with a keen interest in analysing data to provide recommendations and insight
- A professional and courteous manner, dedicated to providing a good level of service and motivated to find solutions to problems
- Articulate and credible written and verbal communication skills, including good presentation skills
- The ability to cope well with time pressure and make decisions in complex, fast-paced circumstances
- The ability to work effectively in a team environment
- Good organization, strong time management, and the ability to plan workload effectively
- A business-minded, determined attitude with a passion to learn, contribute and develop
- Good command of computer skills

Posted 1 month ago

Apply

3.0 - 6.0 years

14 - 18 Lacs

Bengaluru

Work from Office

Job Details: The WCD IDG Tools Team is looking for a highly skilled and experienced Full Stack Developer to join our team. The ideal candidate will have strong expertise in both backend and frontend technologies, with a primary focus on C#, C++, Angular, and Python. You will be responsible for designing, developing, and maintaining complex web applications and services.

Key Responsibilities: Design, develop, and maintain scalable web applications and services. Collaborate with cross-functional teams to define, design, and ship new features. Write clean, efficient, and well-documented code in C#, C++, Python, and Angular. Develop and maintain APIs and microservices. Participate in architectural and technical decision-making. Review code, mentor junior developers, and promote best practices. Troubleshoot, debug, and upgrade existing systems. Ensure security, performance, and responsiveness of applications.

Required Qualifications: 5+ years of professional software development experience. Proven expertise in C# (ASP.NET, .NET Core). Strong experience with C++ (preferably in high-performance or systems-level programming). Proficiency in Python for backend services, scripting, or data processing. Solid experience with Angular for frontend development. Experience in RESTful API development and integration. Familiarity with database technologies (e.g., SQL Server, PostgreSQL, MongoDB). Understanding of CI/CD pipelines and DevOps practices. Strong understanding of software design patterns, data structures, and algorithms. Excellent problem-solving and communication skills. Experience with Agile/Scrum methodologies.

Soft Skills: Strong problem-solving and debugging capabilities. Fast learner with a proactive, self-driven mindset. Excellent communication and documentation skills. Ability to work both independently and within a collaborative team.

Job Type: Experienced Hire. Shift: Shift 1 (India). Primary Location: India, Bangalore.

Business group: The Client Computing Group (CCG) is responsible for driving business strategy and product development for Intel's PC products and platforms, spanning form factors such as notebooks, desktops, 2-in-1s, and all-in-ones. Working with our partners across the industry, we intend to deliver purposeful computing experiences that unlock people's potential, allowing each person to use our products to focus, create and connect in the ways that matter most to them. As the largest business unit at Intel, CCG is investing more heavily in the PC, ramping its capabilities even more aggressively, and designing the PC experience even more deliberately, including delivering a predictable cadence of leadership products. As a result, we are able to fuel innovation across Intel, providing an important source of IP and scale, as well as helping the company deliver on its purpose of enriching the lives of every person on earth.

Posting Statement: All qualified applicants will receive consideration for employment without regard to race, color, religion, religious creed, sex, national origin, ancestry, age, physical or mental disability, medical condition, genetic information, military and veteran status, marital status, pregnancy, gender, gender expression, gender identity, sexual orientation, or any other characteristic protected by local law, regulation, or ordinance.

Position of Trust: N/A. Work Model for this Role: This role will require an on-site presence.

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 8 Lacs

Pune

Work from Office

Skills: Django, Django custom UI, Python REST Framework, ORM, HTML & CSS, ChatGPT prompting, Git knowledge, SQL (Postgres), industry standards and best practices, JSON handling, data processing, working in a team environment. WhatsApp Meta API experience is a plus.

Posted 1 month ago

Apply

3.0 - 5.0 years

9 - 11 Lacs

Pune

Work from Office

Responsibilities: Design, develop, and maintain robust and scalable backend systems using Django and Python. Develop RESTful APIs using Django REST Framework to power our frontend applications. Implement efficient database solutions using PostgreSQL and Django ORM. Write clean, well-documented, and maintainable code. Collaborate with the frontend team to ensure seamless integration between frontend and backend components. Optimize application performance and scalability. Implement security best practices to protect our applications and user data. Stay up-to-date with the latest technologies and industry trends. Contribute to the development of new features and improvements.

Skills: Django, Django custom UI, Python REST Framework, ORM, HTML & CSS, ChatGPT prompting, Git knowledge, SQL (Postgres), industry standards and best practices, JSON handling, data processing, working in a team environment. WhatsApp Meta API experience is a plus.
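
For flavour, here is a minimal sketch of the Django REST Framework pattern this role revolves around; it assumes the code lives inside a configured Django app, and the resource name, fields, and route are hypothetical rather than taken from the listing.

```python
# Minimal Django REST Framework sketch (assumes a configured Django project;
# all names are hypothetical).
from rest_framework import serializers
from rest_framework.views import APIView
from rest_framework.response import Response

class OrderSerializer(serializers.Serializer):
    customer = serializers.CharField(max_length=100)
    total = serializers.DecimalField(max_digits=10, decimal_places=2)

class OrderView(APIView):
    """POST /orders/ validates the JSON payload and echoes it back."""
    def post(self, request):
        serializer = OrderSerializer(data=request.data)
        serializer.is_valid(raise_exception=True)
        # A real view would persist via the Django ORM to PostgreSQL here.
        return Response(serializer.validated_data, status=201)
```

The view would be wired up with something like `path("orders/", OrderView.as_view())` in the app's URL configuration.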

Posted 1 month ago

Apply

4.0 - 9.0 years

10 - 20 Lacs

Pune

Work from Office

Skills: 1] Hands-on experience in LLM/ML-based application development 2] Productionizing LLM/ML apps; data handling, processing and engineering; dataset creation and curation 3] Understanding of different LLM performance metrics, fine-tuning and prompt engineering 4] Image/video processing 5] Knowledge of generative AI, OpenAI, and Claude. Strong in prompt engineering, with knowledge of AutoGen or a similar agentic framework.

Posted 1 month ago

Apply

5.0 - 10.0 years

12 - 15 Lacs

Pune

Work from Office

Design, develop, and implement data solutions using Azure Data Stack components. Write and optimize advanced SQL queries for data extraction, transformation, and analysis. Develop data processing workflows and ETL processes using Python and PySpark.
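
As a rough illustration of the PySpark ETL work described above, the sketch below reads raw data, applies a filter-and-aggregate transformation, and writes curated output; the paths and column names are hypothetical.

```python
# Minimal PySpark ETL sketch; file paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl-sketch").getOrCreate()

# Extract: read raw CSV data.
raw = spark.read.csv("/data/raw/sales.csv", header=True, inferSchema=True)

# Transform: drop bad rows and aggregate per day.
daily = (
    raw.filter(F.col("amount") > 0)
       .withColumn("date", F.to_date("event_time"))
       .groupBy("date")
       .agg(F.sum("amount").alias("total_amount"))
)

# Load: write curated output as partitioned Parquet.
daily.write.mode("overwrite").partitionBy("date").parquet("/data/curated/daily_sales")
spark.stop()
```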

Posted 1 month ago

Apply

4.0 - 6.0 years

6 - 10 Lacs

Pune

Work from Office

Skills: 1] Hands-on experience in LLM/ML-based application development 2] Productionizing LLM/ML apps; data handling, processing and engineering; dataset creation and curation 3] Understanding of different LLM performance metrics, fine-tuning and prompt engineering 4] Image/video processing 5] Knowledge of generative AI, OpenAI, and Claude. Strong in prompt engineering, with knowledge of AutoGen or a similar agentic framework.

Posted 1 month ago

Apply

9.0 - 10.0 years

14 - 15 Lacs

Hyderabad

Work from Office

Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

We are currently seeking an experienced professional to join our team in the role of Consultant Specialist, Analytic Foundations Enabler stream. In this role, you will:
- Lead and support your technology teams to design, develop and manage data engineering pipelines, CI/CD pipelines and infrastructure.
- Act as an Infra/DevOps SME on the design and implementation of technology solutions around the transformation of systems involved in the risk analytics function, on a tactical and strategic basis, covering a number of significant regulatory and business transformation initiatives in the bank.
- Work closely with the rest of the technology team, supporting IT teams and architects to drive product improvements.
- Lead and support day-to-day interactions with other IT and central teams for DevOps and infrastructure support.
- Grow the technical expertise of the engineering community.
- Help with designing, maintaining, and improving all aspects of the software delivery lifecycle.
- Enforce process discipline and improvements in areas of expertise, such as disciplined agile software delivery, production support processes, or continuous DevOps pipeline development.

Requirements. To be successful in this role, you should meet the following requirements:
- Over 5 years of experience in platform engineering, SRE responsibilities, and managing distributed/big data infrastructures.
- Strong hands-on experience with the Hadoop ecosystem, big data pipelines, and Delta Lake.
- Proven expertise in managing and optimizing Hadoop clusters (Cloudera distribution), including HDFS, YARN, and Spark, as well as working with big data processing tools and frameworks.
- Solid experience with HDFS, networking, Linux, and DevSecOps tools, including Docker, Kubernetes, and Jenkins.
- Proficiency in using Docker for containerization and Kubernetes for orchestration to manage large-scale, distributed applications.
- Hands-on experience in designing and managing large-scale technology projects, with exposure to REST API standards and implementation.
- Expertise in setting up and managing monitoring and logging solutions using tools like Prometheus, Grafana, or Splunk to ensure system reliability and performance.
- Experience working with multiple IT and support teams across geographical locations globally.
- Proficient coding skills in Spark (PySpark) and Python, with around 3 years of hands-on experience preferred.
- Strong knowledge of Infrastructure as Code (IaC) tools such as Terraform and Ansible for automating infrastructure provisioning and management.
- Experience with GCP or another cloud platform and data engineering product offerings is preferred.
- Familiarity with agile development methodologies.
- Strong problem-solving skills and attention to detail.
- Excellent communication and team collaboration skills.

Posted 1 month ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Chennai

Work from Office

Data Engineer Skills and Qualifications:
- SQL - Mandatory
- Strong knowledge of AWS services (e.g., S3, Glue, Redshift, Lambda) - Mandatory
- Experience working with DBT - Nice to have
- Proficiency in PySpark or Python for big data processing - Mandatory
- Experience with orchestration tools like Apache Airflow and AWS CodePipeline - Mandatory (see the Airflow sketch below)

Job Summary: We are seeking a skilled Developer with 3 to 6 years of experience to join our team. The ideal candidate will have expertise in AWS DevOps, Python and SQL. This role involves working in a hybrid model with day shifts and no travel requirements. The candidate will contribute to the company's purpose by developing and maintaining high-quality software solutions.

Responsibilities: Develop and maintain software applications using AWS DevOps, Python and SQL. Collaborate with cross-functional teams to design and implement new features. Ensure the scalability and reliability of applications through effective coding practices. Monitor and optimize application performance to meet user needs. Provide technical support and troubleshooting for software issues. Implement security best practices to protect data and applications. Participate in code reviews to maintain code quality and consistency. Create and maintain documentation for software applications and processes. Stay updated with the latest industry trends and technologies. Work in a hybrid model, balancing remote and in-office work as needed. Communicate effectively with team members and stakeholders to ensure project success. Contribute to the continuous improvement of development processes and methodologies. Ensure timely delivery of projects while maintaining high-quality standards.

Qualifications: A strong understanding of AWS DevOps, including experience with deployment and management of applications on AWS. Proficiency in Python programming, with the ability to write clean and efficient code. Experience with SQL for database management and querying. Excellent problem-solving skills and attention to detail. Strong communication and collaboration skills. Adaptability to a hybrid work model and effective time management.
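
The Airflow sketch referenced in the skills list above: a minimal three-task DAG wiring an extract-transform-load sequence. It assumes Airflow 2.4+ (for the `schedule` argument); the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Minimal Apache Airflow DAG sketch; task logic and names are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw files from S3")        # placeholder for boto3/Glue logic

def transform():
    print("clean and aggregate the data")  # placeholder for PySpark/pandas logic

def load():
    print("copy curated data into Redshift")

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)
    t_extract >> t_transform >> t_load
```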

Posted 1 month ago

Apply

1.0 - 4.0 years

0 - 1 Lacs

Mumbai, Mumbai Suburban, Mumbai (All Areas)

Work from Office

Title: Data Entry Operator. Location: Palghar (factory). Salary: ₹15k-18k. Shift: 9.30 am - 6.30 pm. Gender: Male. Friday will be the weekly off.

Job Description:
* Accurately enter data into company databases, spreadsheets, or systems
* Review data for errors, missing information, or inconsistencies and correct them
* Communicate with team members to clarify data requirements or resolve discrepancies
* Perform regular backups and ensure data is securely stored
* Maintain an organized filing system for both electronic and paper records

Requirements:
* HSC/Graduate with 1-2 years of experience as a Data Entry Operator or in a similar role
* Proficient in Microsoft Office (especially Excel and Word)
* Strong attention to detail and accuracy
* Basic understanding of administrative processes

Benefits:
* PF
* ESIC
* Paid leaves
* Leave encashment
* Yearly bonus
* Commuter assistance

You can share your resume at charvi.a@ipsgroup.co.in

Posted 1 month ago

Apply

0.0 years

1 - 5 Lacs

Hyderabad, Pune, Bengaluru

Work from Office

Call Handling and Messaging: Answer inbound calls from job seekers, listen to their needs, and qualify them. Provide information on WhatsApp. Pass qualified leads to the recruitment team in a professional and timely manner. Work from home.

Required candidate profile: Immediate joiner; work from home; candidate should be from Hyderabad, New Delhi, Mumbai, Pune, or Bangalore.

Posted 1 month ago

Apply

3.0 - 6.0 years

12 - 22 Lacs

Gurugram

Work from Office

Overview: 170+ Years Strong. Industry Leader. Global Impact. At Pinkerton, the mission is to protect our clients. To do this, we provide enterprise risk management services and programs specifically designed for each client. Pinkerton employees are one of our most important assets and critical to the delivery of world-class solutions. Bonded together, we share a commitment to integrity, vigilance, and excellence. Pinkerton is an inclusive employer that seeks candidates with diverse backgrounds, experiences, and perspectives to join our family of industry subject matter experts.

The Data Engineer will be part of a high-performing, international team whose goal is to expand Data & Analytics solutions for our CRM application, which is live in all Securitas countries. Together with the dedicated Frontend & BI Developer, you will be responsible for managing and maintaining the Databricks-based BI platform; data model changes and the implementation and development of pipelines are part of the daily focus, but ETL will get most of your attention. Continuous improvement requires the ability to think bigger and work closely with the whole team. The Data Engineer (ETL Specialist) will collaborate with the Frontend & BI Developer on opportunities to improve the BI platform deliverables, specifically for the CEP organization. Cooperation with other departments, such as integrations or specific IT/IS projects and business specialists, is part of the job. The expectation is to always take data privacy into consideration when moving or sharing data; for that purpose, there is a need to develop the security layer as agreed with the legal department.

Responsibilities:
- Represent Pinkerton's core values of integrity, vigilance, and excellence.
- Maintain and develop the Databricks workspace used to host the BI CEP solution.
- Advise on changes needed in the data model to accommodate new BI requirements.
- Develop and implement new ETL scripts and improve the current ones.
- Take ownership of resolving incoming tickets, for both incidents and requests.
- Plan activities to stay close to the Frontend & BI Developer to foresee coming changes to the backend.
- Improve teamwork with different team members, using the DevOps tool to track the status of deliverables from start to end.
- Ensure understanding and visible implementation of the company's core values of integrity, vigilance and helpfulness.
- Maintain knowledge of the skills and experience available and required in your area, today and tomorrow, to drive liaison with other departments if needed.
- All other duties, as assigned.

Qualifications:
- At least 3+ years of experience in data engineering.
- Understanding of designing and implementing data processing architectures in Azure environments.
- Experience with different SSAS modelling techniques (preferably Azure/Databricks, Microsoft-related).
- Understanding of data management and treatment to secure data governance and security (platform management and administration).
- An analytical mindset with clear communication and problem-solving skills.
- Experience working in a Scrum setup.
- Fluent in English, both spoken and written. Bonus: knowledge of additional language(s).
- Ability to communicate, present and influence credibly at all levels, both internally and externally.
- Business acumen and commercial awareness.

Working Conditions: With or without reasonable accommodation, requires the physical and mental capacity to effectively perform all essential functions; regular computer usage; occasional reaching and lifting of small objects and operating office equipment; frequent sitting, standing, and/or walking; travel, as required.

Pinkerton is an equal opportunity employer to all applicants and positions without regard to race/ethnicity, color, national origin, ancestry, sex/gender, gender identity/expression, sexual orientation, marital/prenatal status, pregnancy/childbirth or related conditions, religion, creed, age, disability, genetic information, veteran status, or any protected status by local, state, federal or country-specific law.

Posted 1 month ago

Apply

1.0 - 2.0 years

2 - 5 Lacs

Mumbai

Work from Office

Position: Customer Interaction Executive (International Process). Location: Mumbai. Company: BDS Services Pvt Ltd. Website: www.bdsserv.com

Introduction: BDS Services Pvt Ltd is a professional B2B database management company, headquartered in Mumbai, India, with branch offices in London and Amsterdam. Founded in 2009, BDS is a reliable provider of back office administrative services such as online and offline data entry, data processing, data conversion, data validation, list buildup (bespoke data creation), web research/data mining, and controlled circulation services. BDS is committed to providing outsourcing solutions across a wide range of platforms and technologies at competitive prices, with excellent customer support and quality deliverables. Our company has always been driven by the aim of forging long-term relationships with our clients by delivering services that are accurate, comprehensive, cost-effective and efficient. We have always aimed to implement the positive aspects of our experience in our services.

About the role:
- Gathering requisite data/information via primary research (on call).
- Ability to use keywords wisely when doing internet research.

Requirements:
- Position: Customer Interaction Executives (only male candidates)
- Timing: Night shift, 7 pm to 4 am / 8 pm to 5 am
- Typing speed: above 20
- Skills: Good computer skills, comfortable with internet surfing, and excellent communication
- Week off: Saturday and Sunday

Note: We prefer candidates willing to work night shifts. This job opening was posted a long time ago and may no longer be active, though it has not been removed by the recruiter. Please use your discretion.

Posted 1 month ago

Apply

4.0 - 7.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Role: Python Developer. Location: Bangalore. Experience: 4-7 years. Employment Type: Full Time, Working Mode: Regular. Notice Period: Immediate to 15 days.

About the Role: We are seeking a skilled Python Developer to join our dynamic team and contribute to the development of innovative data-driven solutions. The ideal candidate will have a strong foundation in Python programming, data analysis, and data processing techniques. This role will involve working with various data sources, including Redis, MongoDB, SQL, and Linux, to extract, transform, and analyze data for valuable insights. You will also be responsible for developing and maintaining efficient and scalable data pipelines and visualizations using tools like matplotlib and seaborn. Additionally, experience with web development frameworks such as Flask, FastAPI, or Django, as well as microservices architecture, will be a significant advantage.

Key Responsibilities:
- Design, develop, and maintain efficient and scalable data pipelines to extract, transform, and load (ETL) data from various sources, including Redis, MongoDB, SQL, and Linux.
- Conduct in-depth data analysis and processing using Python libraries and tools to uncover valuable insights and trends.
- Develop and maintain data visualizations using matplotlib, seaborn, or other relevant tools to effectively communicate findings to stakeholders.
- Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
- Develop and maintain web applications using Python frameworks like Flask, FastAPI, or Django, adhering to best practices and coding standards.
- Design and implement microservices architecture to build scalable and modular systems.
- Troubleshoot and resolve technical issues related to data pipelines, applications, and infrastructure.
- Stay updated with the latest trends and technologies in the data engineering and Python development landscape.

Required Skills and Qualifications:
- Strong proficiency in Python programming, including object-oriented and functional programming concepts.
- Experience with data analysis and processing libraries such as pandas, NumPy, and scikit-learn.
- Familiarity with data storage and retrieval technologies, including Redis, MongoDB, SQL, and Linux.
- Knowledge of data visualization tools like matplotlib and seaborn.
- Experience with web development frameworks such as Flask, FastAPI, or Django.
- Understanding of microservices architecture and principles.
- Excellent problem-solving and analytical skills.
- Ability to work independently and as part of a team.
- Strong communication and interpersonal skills.

Preferred Skills and Qualifications:
- Experience with cloud platforms (AWS, GCP, Azure).
- Knowledge of containerization technologies (Docker, Kubernetes).
- Familiarity with data warehousing and data lake concepts.
- Experience with machine learning and deep learning frameworks (TensorFlow, PyTorch).
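
As a small example of the analysis-and-visualization work this role describes, the sketch below aggregates a hypothetical CSV with pandas and renders a chart with matplotlib; the file and column names are invented for illustration.

```python
# Hypothetical mini-analysis: load, aggregate, and plot a dataset.
import pandas as pd
import matplotlib.pyplot as plt

df = pd.read_csv("sales.csv")  # hypothetical input with 'date' and 'amount' columns

monthly = (
    df.assign(month=pd.to_datetime(df["date"]).dt.to_period("M"))
      .groupby("month")["amount"]
      .sum()
)

monthly.plot(kind="bar", title="Monthly sales")
plt.tight_layout()
plt.savefig("monthly_sales.png")
```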

Posted 1 month ago

Apply

0.0 years

1 - 3 Lacs

Ahmedabad

Work from Office

Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Mega Virtual Drive for Customer Service roles (English + Hindi language) on 23rd May 2025 || Ahmedabad location. Date: 23-May-2025. MS Teams meeting ID: 483 053 616 279 3. MS Teams passcode: nh6AE6TW. Time: 12:00 PM - 1:00 PM. Job location: Ahmedabad (work from office). Languages known: Hindi + English. Shifts: flexible with any shift.

Responsibilities:
• Respond to customer queries and customers' concerns
• Provide support for data collection to enable recovery of the account for the end user
• Maintain a deep understanding of client processes and policies
• Reproduce customer issues and escalate product bugs
• Provide excellent customer service to our customers
• Exhibit capacity for critical thinking and analysis
• Showcase a proven work ethic, with the ability to work well both independently and within the context of a larger collaborative environment

Qualifications we seek in you. Minimum qualifications:
• Graduate (any discipline except law)
• Only freshers are eligible
• Fluency in English and Hindi is mandatory

Preferred qualifications:
• Effective probing and analyzing/understanding skills
• Analytical skills with a customer-centric approach
• Excellent proficiency with written English and a neutral English accent
• Ability to work on a flexible schedule (including weekend shifts)

**Note: Please keep your E-Aadhaar card handy while appearing for the interview.

Why join Genpact?
• Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
• Make an impact: drive change for global enterprises and solve business challenges that matter
• Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
• Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
• Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.

Posted 1 month ago

Apply

3.0 - 7.0 years

8 - 12 Lacs

Chennai, Delhi / NCR, Bengaluru

Work from Office

About the Role: We are looking for a hands-on AI/ML Developer with experience in Large Language Models (LLMs), prompt engineering, and AI model integration. The ideal candidate has practical experience working with AI models, fine-tuning them, optimizing prompts, and integrating them into real-world applications. This role is perfect for someone who has already worked on AI-driven applications and wants to expand their expertise by researching and implementing new AI advancements. You will have the opportunity to experiment with different LLM architectures, improve AI model efficiency, and contribute to AI-driven solutions.

Key Responsibilities:
- LLM Development & Implementation: Work hands-on with LLMs like GPT, LLaMA, Mistral, Claude, and Gemini. Fine-tune models using Hugging Face Transformers, PyTorch, or TensorFlow. Train, optimize, and deploy LLMs for tasks like text generation, summarization, and chatbots.
- Prompt Engineering & Optimization: Design, test, and optimize prompts for various AI tasks. Apply few-shot, zero-shot, Chain of Thought (CoT), and ReAct prompting. Improve AI model accuracy by iterating and refining prompt strategies.
- AI Model Integration & Deployment: Develop Python-based applications that interact with LLM APIs. Build and deploy AI models using FastAPI or Flask. Work with vector databases (FAISS, ChromaDB, Pinecone) for efficient retrieval. Deploy AI models on cloud platforms (AWS, Azure, GCP) using Docker and Kubernetes.
- Data Processing & NLP: Preprocess and clean large-scale text datasets. Work with text embeddings, named entity recognition (NER), and knowledge retrieval. Implement vector search techniques for AI-enhanced applications.
- AI Research & Experimentation: Stay up to date with the latest LLM advancements and AI research papers. Implement new AI techniques into existing workflows. Optimize models using quantization, vLLM, and low-rank adaptation (LoRA).

Required Skills & Experience (must-have, hands-on):
- Python programming with AI/ML frameworks (Hugging Face, PyTorch, TensorFlow).
- Hands-on experience working with LLMs and fine-tuning.
- Experience in prompt engineering and optimizing AI model outputs.
- Building APIs with FastAPI or Flask for AI model integration.
- Familiarity with vector databases and embedding models.

Nice to Have (or Learn on the Job):
- Experience with LangChain, LlamaIndex, or Retrieval-Augmented Generation (RAG).
- Knowledge of quantization techniques (LoRA, GPTQ, vLLM, ONNX) for efficient model deployment.
- Experience working with knowledge graphs and reasoning-based AI.
- Background in MLOps for tracking and managing AI models.

Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad
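
To make the prompt engineering aspect concrete, here is a hedged sketch of few-shot prompting against an LLM API. It assumes the OpenAI Python SDK (v1.x) with an API key in the environment; the model name, task, and examples are hypothetical.

```python
# Few-shot prompting sketch; assumes `pip install openai` (v1.x SDK) and
# OPENAI_API_KEY set in the environment. Model and examples are hypothetical.
from openai import OpenAI

client = OpenAI()

few_shot_messages = [
    {"role": "system", "content": "Classify support tickets as billing, technical, or other."},
    {"role": "user", "content": "I was charged twice this month."},
    {"role": "assistant", "content": "billing"},
    {"role": "user", "content": "The app crashes when I open settings."},
]

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=few_shot_messages,
)
print(response.choices[0].message.content)  # expected: "technical"
```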

Posted 1 month ago

Apply

0.0 - 1.0 years

1 - 3 Lacs

Pimpri-Chinchwad, Pune, Shirur

Hybrid

As an Executive, you will report to the Office Manager and assist with various duties, including data management, data processing, CV formatting and CV updating. Proven experience as a data entry operator/executive is preferred.

Posted 1 month ago

Apply

3.0 - 8.0 years

25 - 30 Lacs

Pune

Work from Office

As a Lead Machine Learning Engineer on the Data Science & AI team, you will develop analytical products and solutions that sit atop vast datasets gathered by retail stores, restaurants, banks, and other consumer-focused companies. The challenge will be to create high-performance algorithms, built on data sets measured in billions of transactions, that allow our users to derive insights from big data that in turn drive their businesses, with a keen eye for data privacy and governance.

Role:
- Lead talent acquisition efforts and initiatives, facilitate training programs and conduct performance management for a team of direct reports
- Lead teams in the creation of a portfolio of robust ML solutions through effective use of Mastercard's global data assets and software platform
- Build, productionize and maintain data-driven AI/ML applications and data processing workflows or pipelines
- Consult with clients/stakeholders to understand and translate their needs into a data analysis and/or solution, ensuring that their requirements are accurately captured and technically feasible
- Guide others in comprehensive technical analyses and allocate work across teams to ensure the delivery of high-quality and effective solutions
- Liaise with internal stakeholders (e.g., MA TECH, Data Strategy Management, AI governance) to identify and elaborate on opportunities as they relate to analytical solution development, feasibility, and other technical offerings
- Lead development of presentations and technical documentation
- Identify and recommend opportunities to standardize and automate efforts to ensure quality and enable scaling of ML products
- Meet project deadlines for accountable deliverables, anticipate delays or foreseeable barriers to progress, and escalate issues when necessary
- Conduct due-diligence quality assurance testing for prototypes and tools in stage and resolve recurring complex issues and bugs
- Ensure that all machine learning processes, from data preparation to model deployment, are well-documented for internal use and compliance
- Mentor and guide junior developers

All about you:
- Expertise in Big Data Technologies: Proficiency in big data frameworks and tools such as Hadoop, Spark, Hive
- Technical Proficiency: Strong programming skills in languages such as Python and SQL. Experience with data visualization tools (e.g., Tableau, Power BI) and understanding of cloud computing services (AWS, Azure, GCP) related to data processing and storage is a plus. Experience with testing frameworks and test-driven development (TDD) practices
- Advanced Analytical Skills: Strong applied knowledge and hands-on experience with machine learning algorithms and deep learning frameworks. Familiarity with AI and machine learning platforms such as TensorFlow, PyTorch, or similar. Familiarity with training and deploying models on large datasets, including strategies for parallelizing and optimizing training/deployment workflows. Experience productionizing GenAI products is a plus.
- Leadership and Strategic Planning: Proven experience in leading engineering teams, defining vision and strategy for data-driven initiatives, and driving projects from conception to implementation. Ability to mentor and develop talent within the team.
- Problem-Solving Skills: Strong analytical and critical thinking abilities to solve complex problems, along with the creativity to find innovative solutions.
- Communication and Collaboration: Excellent verbal and written communication skills, with the ability to explain complex analytical concepts to non-technical stakeholders. Experience working cross-functionally with departments and flexibility to work as a member of matrix-based, diverse and geographically distributed project teams.
- Project Management Skills: Proficiency in managing multiple projects simultaneously, with a focus on delivering results within tight deadlines.
- Responsible AI Knowledge: Awareness of the principles and practices surrounding responsible AI, including fairness, transparency, accountability, and ethics in AI deployments.
- Innovation and Continuous Learning: A mindset geared towards innovation, staying abreast of industry trends and emerging technologies in big data and analytics, and continuously seeking opportunities for personal and professional growth.

Corporate Security Responsibility: Abide by Mastercard's security policies and practices; ensure the confidentiality and integrity of the information being accessed; report any suspected information security violation or breach; and complete all periodic mandatory security trainings in accordance with Mastercard's guidelines.

Posted 1 month ago

Apply

4.0 - 6.0 years

16 - 18 Lacs

Bengaluru

Work from Office

The person will work on a variety of projects in a highly collaborative, fast-paced environment, and will be responsible for software development activities at KPMG, India. As part of the development team, he/she will work on the full life cycle of the process, developing code and unit tests, and will work closely with Technical Architects, Business Analysts, user interaction designers, and other software engineers to develop new product offerings and improve existing ones. Additionally, the person will ensure that all development practices comply with KPMG's best practice policies and procedures. This role requires quickly ramping up on new technologies whenever required. Bachelor's or master's degree in Computer Science, Information Technology, or a related field.

Data Management: Design, implement, and manage data solutions on the Microsoft Azure cloud platform. Data Integration: Develop and maintain data pipelines, ensuring efficient data extraction, transformation, and loading (ETL) processes using Azure Data Factory. Data Storage: Work with various Azure data storage solutions like Azure SQL Database, Azure Data Lake Storage, and Azure Cosmos DB. Big Data Processing: Utilize big data technologies such as Azure Databricks and Apache Spark to handle and analyze large datasets. Data Architecture: Design and optimize data models and architectures to meet business requirements. Performance Monitoring: Monitor and optimize the performance of data systems and pipelines. Collaboration: Collaborate with data scientists, analysts, and other stakeholders to support data-driven decision-making. Security and Compliance: Ensure data solutions comply with security and regulatory requirements. Technical Skills: Proficiency in Azure Data Factory, Azure Databricks, Azure SQL Database, and other Azure data tools. Analytical Skills: Strong analytical and problem-solving skills. Communication: Excellent communication and teamwork skills. Certifications: Relevant certifications such as Microsoft Certified: Azure Data Engineer Associate are a plus.

Posted 1 month ago

Apply

4.0 - 8.0 years

10 - 14 Lacs

Kolkata

Work from Office

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements: Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark. Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory skill sets: Spark, PySpark, Azure. Education qualification: B.Tech / M.Tech / MBA / MCA. Degrees/Field of Study required: Bachelor of Engineering, Master of Engineering, Master of Business Administration. Required Skills: Data Science. Optional Skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline

Posted 1 month ago

Apply

8.0 - 12.0 years

14 - 18 Lacs

Noida

Work from Office

Design, develop, and optimize data pipelines and ETL processes using PySpark or Scala to extract, transform, and load large volumes of structured and unstructured data from diverse sources. Implement data ingestion, processing, and storage solutions on the Azure cloud platform, leveraging services such as Azure Databricks, Azure Data Lake Storage, and Azure Synapse Analytics. Develop and maintain data models, schemas, and metadata to support efficient data access, query performance, and analytics requirements. Monitor pipeline performance, troubleshoot issues, and optimize data processing workflows for scalability, reliability, and cost-effectiveness. Implement data security and compliance measures to protect sensitive information and ensure regulatory compliance.

Requirements: Proven experience as a Data Engineer, with expertise in building and optimizing data pipelines using PySpark, Scala, and Apache Spark. Hands-on experience with cloud platforms, particularly Azure, and proficiency in Azure services such as Azure Databricks, Azure Data Lake Storage, Azure Synapse Analytics, and Azure SQL Database. Strong programming skills in Python and Scala, with experience in software development, version control, and CI/CD practices. Familiarity with data warehousing concepts, dimensional modeling, and relational databases (e.g., SQL Server, PostgreSQL, MySQL). Experience with big data technologies and frameworks (e.g., Hadoop, Hive, HBase) is a plus.

Mandatory skill sets: Spark, PySpark, Azure. Education qualification: B.Tech / M.Tech / MBA / MCA. Degrees/Field of Study required: Master of Business Administration, Master of Engineering, Bachelor of Engineering. Required Skills: Microsoft Azure. Optional Skills: Accepting Feedback, Active Listening, Analytical Reasoning, Analytical Thinking, Application Software, Business Data Analytics, Business Management, Business Technology, Business Transformation, Coaching and Feedback, Communication, Creativity, Documentation Development, Embracing Change, Emotional Regulation, Empathy, Implementation Research, Implementation Support, Implementing Technology, Inclusion, Intellectual Curiosity, Learning Agility, Optimism, Performance Assessment

Posted 1 month ago

Apply

5.0 - 10.0 years

15 - 20 Lacs

Bengaluru

Work from Office

We are looking for a Senior Software Engineer - Java to join and strengthen the App2App Integration team within SAP Business Data Cloud. This role is designed to accelerate the integration of SAP's application ecosystem with its unified data fabric, enabling low-latency, secure and scalable data exchange. You will take ownership of designing and building core integration frameworks that enable real-time, event-driven data flows between distributed SAP systems. As a senior contributor, you will work closely with architects to drive the evolution of SAP's App2App integration capabilities, with hands-on involvement in Java, ETL and distributed data processing, Apache Kafka, DevOps, SAP BTP and Hyperscaler platforms.

Responsibilities:
- Design and develop App2App integration components and services using Java, RESTful APIs and messaging frameworks such as Apache Kafka.
- Build and maintain scalable data processing and ETL pipelines that support real-time and batch data flows.
- Integrate data engineering workflows with tools such as Databricks, Spark or other cloud-based processing platforms (experience with Databricks is a strong advantage).
- Accelerate the App2App integration roadmap by identifying reusable patterns, driving platform automation and establishing best practices.
- Collaborate with cross-functional teams to enable secure, reliable and performant communication across SAP applications.
- Build and maintain distributed data processing pipelines supporting large-scale data ingestion, transformation and routing.
- Work closely with DevOps to define and improve CI/CD pipelines, monitoring and deployment strategies using modern GitOps practices.
- Guide cloud-native secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP).
- Collaborate with SAP's broader data platform efforts, including Datasphere, SAP Analytics Cloud and the BDC runtime architecture.

What you bring:
- Bachelor's or master's degree in Computer Science, Software Engineering or a related field.
- 5+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns.
- Hands-on experience building ETL pipelines and working with large-scale data processing frameworks.
- Experience or experimentation with tools such as Databricks, Apache Spark or other cloud-native data platforms is highly advantageous.
- Familiarity with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud or HANA is highly desirable.
- Experience designing CI/CD pipelines, containerization (Docker), Kubernetes and DevOps best practices.
- Working knowledge of Hyperscaler environments such as AWS, Azure or GCP.
- Passion for clean code, automated testing, performance tuning and continuous improvement.
- Strong communication skills and the ability to collaborate with global teams across time zones.
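
Although this role is Java-centric, the event-driven App2App pattern it describes can be sketched briefly in Python with the kafka-python client; the broker address, topic name, and payload below are hypothetical.

```python
# Event-driven messaging sketch with kafka-python; all names are hypothetical.
import json
from kafka import KafkaProducer, KafkaConsumer

# Produce a JSON event to a topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("app2app.orders", {"order_id": 42, "status": "created"})
producer.flush()

# Consume the same topic from the beginning.
consumer = KafkaConsumer(
    "app2app.orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)
for message in consumer:
    print(message.value)  # e.g. {'order_id': 42, 'status': 'created'}
    break
```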

Posted 1 month ago

Apply

8.0 - 13.0 years

15 - 20 Lacs

Bengaluru

Work from Office

We are seeking an Expert Software Engineer - Java to join and lead strategic initiatives within the App2App Integration team in SAP Business Data Cloud. This role is critical in accelerating the development and adoption of seamless, low-latency integration patterns across SAP applications and the BDC data fabric. As an expert-level engineer, you will drive architectural direction, oversee key integration frameworks and provide hands-on leadership in building real-time, event-driven and secure communication across a distributed enterprise landscape. You'll bring deep technical expertise in Java, ETL and distributed data processing, Kafka, cloud-native development and DevOps, while also mentoring teams and collaborating closely with stakeholders across SAP's data platform initiatives.

Responsibilities:
- Lead and design App2App integration components and services using Java, RESTful APIs and messaging frameworks such as Apache Kafka.
- Architect and build scalable ETL and data transformation pipelines for both real-time and batch processing needs.
- Integrate data workflows with platforms such as Databricks, Apache Spark or other modern data engineering tools (Databricks experience is a strong advantage).
- Drive the evolution of reusable integration patterns, automation practices and platform consistency across services.
- Architect and build distributed data processing pipelines supporting large-scale data ingestion, transformation and routing.
- Guide the DevOps strategy to define and improve CI/CD pipelines, monitoring and deployment strategies using modern GitOps practices.
- Guide cloud-native secure deployment of services on SAP BTP and major Hyperscalers (AWS, Azure, GCP).
- Collaborate with SAP's broader data platform efforts, including Datasphere, SAP Analytics Cloud and the BDC runtime architecture.
- Mentor junior developers, conduct code reviews and contribute to team-level architectural and technical decisions.

What you bring:
- Bachelor's or master's degree in Computer Science, Software Engineering or a related field.
- 8+ years of hands-on experience in backend development using Java, with strong object-oriented design and integration patterns.
- Proven experience in designing and building ETL pipelines and large-scale data processing frameworks.
- Hands-on experience or experimentation with Databricks, Spark or other data engineering platforms is highly desirable.
- Proficiency with SAP Business Technology Platform (BTP), SAP Datasphere, SAP Analytics Cloud or HANA is highly desirable.
- Experience designing CI/CD pipelines, containerization (Docker), Kubernetes and DevOps best practices.
- Strong working knowledge of Hyperscaler environments such as AWS, Azure or GCP.
- Strong track record of driving engineering excellence, innovation and scalable architecture within complex enterprise systems.

Posted 1 month ago

Apply

6.0 - 11.0 years

18 - 20 Lacs

Hyderabad

Work from Office

As part of the CE Data Platform team, your role will involve establishing a clear vision for data engineering practices, harmoniously aligned with the data architecture. Collaboration with product managers is essential to understand business requirements and identify opportunities to leverage data. The position also entails creating, designing, and developing complex data processing pipelines. A cooperative relationship with the data intelligence team is necessary for designing scalable implementations and productionizing data models. The role involves writing clean, iterative code and using various continuous delivery practices to deploy, support, and operate data pipelines. Selecting suitable data modeling techniques, optimizing and designing physical data models, and understanding the trade-offs between different data modeling techniques form an integral part of this role.

Job Qualifications: You are passionate about data, with the ability to build and operate data pipelines and maintain data storage within distributed systems. This role requires a deep understanding of data modeling and experience with modern data engineering tools and platforms, along with cloud warehousing tools. It is perfect for individuals who can go deep into coding and lead junior members to implement a solution. Experience in defining and implementing data governance and security policies is crucial. Knowledge of DevOps and the ability to navigate all phases of the data and release life cycle are also essential.

Professional Skills:
- You are familiar with AWS and Azure Cloud.
- You have extensive knowledge of Snowflake; SnowPro Core certification is a must-have.
- You have used DBT in at least one project to deploy models in production.
- You have configured and deployed Airflow and integrated various operators in Airflow (especially DBT and Snowflake).
- You can design and build release pipelines and understand the Azure DevOps ecosystem.
- You have an excellent understanding of Python (especially PySpark) and can write metadata-driven programs.
- You are familiar with Data Vault (Raw, Business) and concepts like Point-in-Time and the Semantic Layer.
- You are resilient in ambiguous situations and can clearly articulate the problem in a business-friendly way.
- You believe in documenting processes, managing the artifacts, and evolving them over time.

Good to have skills:
- Experience with data visualization techniques and the ability to communicate insights appropriately for the audience.
- Experience with Terraform and HashiCorp Vault is highly desirable.
- Knowledge of Docker and Streamlit is a big plus.
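
As a hedged illustration of the Snowflake-centric work above, the sketch below queries Snowflake from Python using the official connector; the account, credentials, and table are hypothetical, and real code would pull secrets from a manager such as HashiCorp Vault, which the listing mentions.

```python
# Snowflake query sketch; assumes `pip install snowflake-connector-python`.
# Account, credentials, and object names are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",          # placeholder; never hardcode real secrets
    warehouse="ANALYTICS_WH",
    database="RAW",
)
try:
    cur = conn.cursor()
    cur.execute("SELECT order_date, SUM(amount) FROM orders GROUP BY order_date")
    for row in cur.fetchmany(5):
        print(row)
finally:
    conn.close()
```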

Posted 1 month ago

Apply

Exploring Data Processing Jobs in India

The data processing job market in India is thriving with opportunities for job seekers in the field. With the growing demand for data-driven insights in various industries, the need for professionals skilled in data processing is on the rise. Whether you are a fresh graduate looking to start your career or an experienced professional looking to advance, there are ample opportunities in India for data processing roles.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Delhi
  4. Pune
  5. Hyderabad

These major cities in India are actively hiring for data processing roles, with a multitude of job opportunities available for job seekers.

Average Salary Range

The average salary range for data processing professionals in India varies based on experience and skill level. Entry-level positions can expect to earn between INR 3-6 lakh per annum, while experienced professionals can earn upwards of INR 10 lakh per annum.

Career Path

A typical career path in data processing may include roles such as Data Analyst, Data Engineer, Data Scientist, and Data Architect. As professionals gain experience and expertise in the field, they may progress from Junior Data Analyst to Senior Data Analyst, and eventually to roles such as Data Scientist or Data Architect.

Related Skills

In addition to data processing skills, professionals in this field are often expected to have knowledge of programming languages such as Python, SQL, and R. Strong analytical and problem-solving skills are also essential for success in data processing roles.

Interview Questions

  • What is data processing? (basic)
  • Explain the difference between data cleaning and data transformation. (medium)
  • How do you handle missing data in a dataset? (medium) (see the pandas sketch after this list)
  • What is the importance of data normalization in data processing? (medium)
  • Can you explain the process of feature selection in machine learning? (advanced)
  • How do you evaluate the performance of a machine learning model? (advanced)
  • What is the difference between supervised and unsupervised learning? (basic)
  • How do you deal with outliers in a dataset? (medium)
  • Explain the concept of dimensionality reduction. (medium)
  • What is the bias-variance tradeoff in machine learning? (advanced)
  • How would you handle a dataset with high dimensionality? (medium)
  • Can you explain the process of clustering in data processing? (medium)
  • What is the role of regularization in machine learning? (advanced)
  • How do you assess the quality of a machine learning model? (medium)
  • Can you explain the concept of overfitting in machine learning? (basic)
  • What is the difference between classification and regression in machine learning? (basic)
  • How do you select the right algorithm for a machine learning task? (medium)
  • Explain the process of data preprocessing in machine learning. (medium)
  • How do you handle imbalanced datasets in machine learning? (medium)
  • What is the purpose of cross-validation in machine learning? (medium)
  • Can you explain the difference between batch processing and real-time processing? (medium)
  • How do you handle categorical data in a dataset? (basic)
  • What is the role of data visualization in data processing? (basic)
  • How do you ensure data security and privacy in data processing? (medium)
  • What are the advantages of using cloud computing for data processing? (medium)
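
To illustrate one of the questions above, here is a minimal pandas sketch of handling missing data, showing the two most common approaches, dropping and imputing; the toy DataFrame is invented for illustration.

```python
# Two common ways to handle missing data, using a toy DataFrame.
import pandas as pd
import numpy as np

df = pd.DataFrame({"age": [25, np.nan, 40], "city": ["Pune", "Mumbai", None]})

# Option 1: drop rows containing any missing value.
dropped = df.dropna()

# Option 2: impute; numeric columns with the median, categorical with the mode.
imputed = df.assign(
    age=df["age"].fillna(df["age"].median()),
    city=df["city"].fillna(df["city"].mode()[0]),
)
print(imputed)
```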

Closing Remark

As you explore opportunities in the data processing job market in India, remember to prepare thoroughly for interviews and showcase your skills and expertise confidently. With the right combination of skills and experience, you can embark on a successful career in data processing in India. Good luck!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
