
4894 Data Processing Jobs - Page 34

Set up a Job Alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

0.0 - 3.0 years

0 Lacs

kanchipuram, tamil nadu

On-site

Are you a recent graduate looking to kickstart your career? We are currently seeking Data Processors to join our expanding team in Kanchipuram! This is an excellent opportunity for freshers who are keen on working with data, honing their typing skills, and gaining fundamental data analysis expertise. If you are enthusiastic about learning and progressing professionally, we warmly welcome you to be part of our team.

Your responsibilities will include:
- Reading and comprehending handwritten documents to extract precise data.
- Carefully analyzing written data to ensure accurate interpretation before input.
- Inputting data accurately into spreadsheets (Excel or similar software) with a keen focus on precision and attention to detail.
- Adhering to guidelines to maintain consistent data entry and analysis.
- Collaborating with the team to verify data accuracy and completeness.
- Meeting deadlines and ensuring timely data entry and analysis.

What We Offer:
- Comprehensive training on reading and analyzing handwritten documents.
- Opportunities for career advancement within the organization.
- Supportive colleagues and training to facilitate your success right from the outset.

Requirements:
- Proficient and precise typing skills (minimum 20 words per minute).
- Basic familiarity with Microsoft Office, particularly Excel (training will be provided if needed).
- Exceptional attention to detail and strong organizational abilities.
- Capacity to work independently and a quick learning aptitude.
- Residency in or near Kanchipuram.

Work Schedule: Morning shift, Monday to Saturday, 6:00 AM - 2:00 PM.
Education: Bachelor's degree in any field (fresh graduates are encouraged to apply).
Job Types: Full-time, Permanent, Fresher.
Application Questions: Do you possess basic computer knowledge? Are you a fresher?
Location: Kanchipuram, Tamil Nadu (mandatory). Work Location: On-site.

Posted 2 weeks ago

Apply

0.0 - 4.0 years

0 Lacs

vadodara, gujarat

On-site

You are invited to join our team as a Data Entry Operator, where your attention to detail and motivation will be valued. As an ideal candidate, you should have a good understanding of computers and Microsoft applications, the ability to learn quickly, and a graduate degree.

Your responsibilities will include accurately entering data into our company databases, using Microsoft tools such as Excel and Word for data processing, and ensuring the correctness of entered information by checking for errors and duplicates. It will be essential to maintain precise records while upholding confidentiality and managing your workload efficiently to meet deadlines. Collaboration with team members and other departments may be necessary, along with conducting thorough data searches to ensure the reliability and comprehensiveness of information. Research tasks such as finding and organizing relevant data will also be part of your duties. Additionally, you will be expected to handle messaging for data collection and requests promptly.

To qualify for this role, you should hold a graduate degree in any relevant field, be adept at PC operations and software usage, and be proficient in Microsoft Office applications, particularly Excel and Word. A detail-oriented approach, a commitment to accuracy, and quick adaptability to evolving technologies are crucial attributes. An understanding of the significance of data security and confidentiality, along with strong typing skills and prior data entry experience, will be advantageous. You will thrive in our fast-paced office environment by efficiently meeting deadlines, working collaboratively, and demonstrating a proactive mindset. We are excited to welcome enthusiastic and motivated individuals to our team.

For further inquiries or to express interest, please reach out to us at:
Email: hr.unicrop@gmail.com
Phone: +91 6351057338

Job Types: Full-time, Permanent, Fresher
Benefits: Flexible schedule, leave encashment
Work Location: In person

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

alwar, rajasthan

On-site

As a Credit Officer, your primary responsibility will be to verify that all loan applications are assessed in accordance with the credit policy, and to ensure that any deviations are appropriately mitigated and documented. You will engage with customers through personal discussions and interviews, ensuring timely processing of all files while fostering strong relationships. Collaboration with the sales and operations teams will be essential for accurate data collection. Additionally, you will oversee the security creation process for secured loans, guarantee compliance with the KYC guidelines mandated by the RBI, and evaluate credit applications to ensure adherence to credit parameters.

Your role will also involve relationship management, working closely with sales and relationship managers to facilitate proper documentation and address any audit queries. Proficiency in data processing using computer spreadsheets, the ability to assess clients without audited financials, and a comprehensive understanding of various risk dimensions such as operational, credit, and market risk are crucial for success in this role. A willingness and capability to travel within the city will also be required to fulfill the responsibilities effectively.

Posted 2 weeks ago

Apply

0.0 - 3.0 years

0 Lacs

ahmedabad, gujarat

On-site

Apexon is a digital-first technology services firm with over 27 years of experience, dedicated to accelerating business transformation and providing human-centric digital experiences. Specializing in User Experience, Engineering, and Data services, the company helps clients in the BFSI, healthcare, and life sciences sectors outperform their competitors through speed and innovation.

As a Trainee at Apexon, you will be responsible for crucial tasks such as data annotation, auditing datasets, and enhancing data quality in our AI/ML and engineering projects. Your role will involve performing data annotation and labeling, auditing datasets for quality and accuracy, providing insights for performance enhancement, generating basic reports, and collaborating with cross-functional teams. Additionally, you will support AI/ML model training by ensuring accurate data processing.

This position is based in Ahmedabad, with on-site work required 5 days a week. The employment type for this role is Contract-to-Hire (C2H), and we are looking for UG/PG graduates from the batches of 2019 to 2024. The ideal candidate should have at least 6 months of experience; immediate joiners are preferred. If you are passionate about technology and eager to kickstart your career in a dynamic environment, this full-time, day-shift position at Apexon could be the perfect opportunity for you. Join us in person and be a part of our innovative and growth-oriented team.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

hyderabad, telangana

On-site

The Big Data Developer role is integral to our organization, involving the design, construction, and management of extensive data processing systems. Your primary responsibility will be to convert raw data into valuable insights that drive strategic decision-making across departments. Working closely with data analysts, data scientists, and stakeholders, you will develop scalable data pipelines, enhance database architecture, and maintain data quality and accessibility. As businesses increasingly rely on data for decision-making, your expertise in handling large volumes of structured and unstructured data will be pivotal in helping the organization gain a competitive advantage through analytics. This position demands a blend of technical proficiency and a deep understanding of business operations to create data-driven solutions that cater to our diverse client base.

Key Responsibilities:
- Develop and maintain scalable data processing pipelines.
- Integrate data from multiple sources into a unified database structure.
- Optimize current data systems for improved performance and scalability.
- Analyze extensive datasets to identify trends and offer insights.
- Design and implement ETL processes for data transformation and loading.
- Collaborate with data scientists and analysts to refine data requirements.
- Write and optimize SQL queries for data retrieval and manipulation.
- Utilize Hadoop frameworks for managing and processing big data.
- Monitor system performance and troubleshoot issues in real time.
- Implement data security and governance measures.
- Participate in code reviews and maintain programming standards.
- Develop documentation for data architecture and processes.
- Stay abreast of emerging technologies in the big data field.
- Contribute to cross-functional team projects and initiatives.
- Provide training and support to team members on best practices.

Required Qualifications:
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Demonstrated experience as a Big Data Developer or in a similar capacity.
- Proficiency in programming languages such as Java and Python.
- Hands-on experience with big data tools such as Hadoop, Spark, and Kafka.
- Strong SQL skills and familiarity with NoSQL databases such as MongoDB.
- Knowledge of cloud platforms such as AWS, Azure, or Google Cloud.
- Understanding of data warehousing concepts and design.
- Strong problem-solving abilities and analytical thinking.
- Experience with data integration and ETL tools.
- Familiarity with data visualization tools is advantageous.
- Ability to work under pressure and meet tight deadlines.
- Excellent communication skills for effective team collaboration.
- Capacity to adapt to new technologies and learn continuously.
- Exposure to Agile methodologies and development processes.
- Understanding of data governance and compliance standards.
- Attention to detail and strong organizational skills.

Skills: data security, agile methodologies, data visualization, team collaboration, AWS, data processing, big data, Azure, data governance, Hadoop, problem-solving, NoSQL, SQL, Spark, Google Cloud, ETL, MongoDB, data warehousing, Java, Kafka, data integration, data modeling, Python.
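For illustration only (not part of the posting), a minimal PySpark sketch of the kind of ETL step this role describes: read raw data, clean and type it, and write partitioned output. The bucket paths and column names are hypothetical.

```python
# Minimal PySpark ETL sketch: ingest raw CSV, clean it, and write
# partitioned Parquet. Paths and columns are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw data with an explicit header row.
raw = spark.read.option("header", True).csv("s3://raw-bucket/orders.csv")

# Transform: drop duplicates, cast types, derive a partition column.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .withColumn("year", F.year("order_date"))
)

# Load: write as Parquet, partitioned by year for faster scans.
clean.write.mode("overwrite").partitionBy("year").parquet("s3://lake/orders/")
```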

Posted 2 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

bengaluru, karnataka, india

On-site

NVIDIA has been transforming computer graphics, PC gaming, and accelerated computing for more than 25 years. It's a unique legacy of innovation that's fueled by great technology and amazing people. Today, we're tapping into the unlimited potential of AI to define the next era of computing: an era in which our GPU acts as the brains of computers, robots, and self-driving cars that can understand the world. Doing what's never been done before takes vision, innovation, and the world's best talent. As an NVIDIAN, you'll be immersed in a diverse, supportive environment where everyone is inspired to do their best work. Come join the team and see how you can make a lasting impact on the world.

We're looking for a versatile and highly motivated Software Engineer to join our team. In this role, you'll play a crucial part in enhancing our engineering capabilities across the entire software development lifecycle. You'll be instrumental in developing and optimizing our build, test, and release processes, ensuring the quality and stability of our products through comprehensive testing, and deriving actionable insights from system telemetry. This position requires a strong problem-solver who thrives in a collaborative environment and is passionate about driving efficiency and quality through automation and data.

What you'll be doing:
- Build, Release & Infrastructure Automation: Design, implement, and maintain robust build flows for embedded software; automate complex release processes and manage the underlying infrastructure for continuous integration and continuous delivery (CI/CD) pipelines. Troubleshoot build failures and infrastructure issues, and optimize CI/CD workflows for efficiency and reliability.
- Comprehensive Testing & OS Vetting: Develop and enhance automated frameworks for System-on-Chip (SOC) validation, including daily sanity and regression testing. Integrate testing scripts into CI/CD pipelines to ensure continuous quality. Perform thorough sanity testing of various Windows and Linux hardware and software components, ensuring high-quality releases. Develop comprehensive regression reports and scale stress/smoke testing on device farms.
- Data Analytics & Telemetry: Implement metrics collection and analytics systems to monitor software, build quality, and performance. Analyze telemetry and log data from distributed systems to identify patterns and anomalies, and derive actionable insights that guide development priorities and improve product quality.

What we need to see:
- Bachelor's or Master's degree in Computer Science, Computer Engineering, Data Science, or a related technical field (or equivalent experience).
- 5+ years of professional experience as a Software Engineer, with significant contributions in at least one, and preferably more, of the following areas:
- Build / DevOps: Experience designing and maintaining CI/CD pipelines, build systems, and infrastructure automation.
- Software Quality Assurance / Test Automation: Proficiency in developing automated test frameworks, writing comprehensive test suites, and performing system/OS validation.
- Data Engineering / Systems Analysis: Experience with data ingestion, processing, analysis, and visualization from telemetry and logs to derive insights.
- Automation: Python, Bash/shell scripting, C/C++, pytest, Robot Framework, or similar.
- CI/CD & Build Tools: Jenkins, GitLab CI, Make, CMake, or similar.
- Operating Systems: Windows (ETW, WMI), Linux (systemd journal, syslog, dmesg).

Ways to stand out from the crowd:
- Strong programming skills in Python, Bash, or other automation scripting languages.
- Experience with CI/CD tools (e.g., Jenkins, GitLab CI) and version control systems (Git, P4).
- Understanding of software testing methodologies and experience with test automation frameworks (e.g., pytest, Robot Framework, unittest).
- Experience with cloud platforms (AWS, Azure) and/or observability stacks (e.g., OpenSearch/Elasticsearch, Kibana) for data processing, log aggregation, and visualization.
- Working knowledge of Windows, Linux, and their respective diagnostic and monitoring tools (e.g., ETW, systemd journal, dmesg, eBPF).
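For illustration only (not from the listing), a small pytest sketch of the kind of CI sanity check this role describes; the artifact path and diagnostic tool name are hypothetical placeholders.

```python
# Sketch of CI sanity tests: verify that a build artifact exists and
# that a device-query tool exits cleanly. Paths and the tool name are
# hypothetical placeholders, not NVIDIA's actual tooling.
import pathlib
import subprocess

ARTIFACT = pathlib.Path("build/output/firmware.bin")

def test_build_artifact_present():
    # A release pipeline should fail fast if the build produced nothing.
    assert ARTIFACT.exists(), f"missing build artifact: {ARTIFACT}"

def test_device_tool_runs():
    # Smoke test: the diagnostic tool should exit 0 within a timeout.
    result = subprocess.run(
        ["./tools/device_query", "--list"],
        capture_output=True, text=True, timeout=30,
    )
    assert result.returncode == 0, result.stderr
```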

Posted 2 weeks ago

Apply

4.0 - 7.0 years

6 - 12 Lacs

gurugram

Hybrid

- Lead and support the data entry team
- Ensure accuracy, audits, and compliance
- Coordinate with Payroll/IT/Product teams
- 4+ years of experience, including 1+ year of team management
- Strong payroll/data knowledge is a plus

Posted 2 weeks ago

Apply

2.0 - 3.0 years

3 - 6 Lacs

noida

Work from Office

1. 3 years of experience in verification processes (employment, hospitals, or related industries).
2. Strong B2B experience with a proven ability to manage client expectations.
3. Exposure to software-based products or SaaS platforms.
4. Familiarity with data verification standards.

Posted 2 weeks ago

Apply

7.0 - 12.0 years

16 - 20 Lacs

bengaluru

Work from Office

We are seeking an AI engineer with 4-7 years of experience, a strong background in machine learning, solid programming skills, and a deep understanding of generative models. The position is responsible for turning research into practical solutions that address real-world problems while ensuring the reliability and ethical use of generative AI.

Technical Requirements:
- Strong proficiency in Python for data processing and automation.
- Hands-on experience with generative AI models and their integration into data workflows.
- Hands-on experience with prompt engineering and LLMs (open-source and closed-source).
- Hands-on experience with application development frameworks such as LangChain and LangGraph.
- Familiarity with REST frameworks such as FastAPI, Flask, and Django, and with Angular.
- Experience with cloud platforms (AWS, GCP, Azure) and related services is a plus.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).

As a Data Analysis & Simulation Professional, you will be responsible for:
- Data Pipeline Development: Design and implement scalable data pipelines using Python to ingest, process, and transform log data from various sources.
- Generative AI Integration: Collaborate with data scientists to integrate generative AI models into the log analysis workflow. Develop APIs and services to deploy AI models for real-time log analysis and insights generation.
- Data Monitoring and Maintenance: Set up monitoring and alerting systems to ensure the reliability and performance of data pipelines. Troubleshoot and resolve issues related to data ingestion, processing, and storage.
- Collaboration and Documentation: Work closely with cross-functional teams to understand requirements and deliver solutions that meet business needs. Document data pipeline architecture, processes, and best practices for future reference and knowledge sharing.
- Evaluation and Testing: Conduct thorough testing and validation of generative models.
- Research and Innovation: Stay updated with the latest advancements in generative AI and explore innovative techniques to enhance model capabilities. Experiment with different architectures and approaches.
- Snowflake Utilization (good to have): Design and optimize data storage and retrieval strategies using Snowflake. Implement data modeling, partitioning, and indexing strategies to enhance query performance.
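For illustration only (not part of the posting), a framework-agnostic sketch of the retrieval-augmented generation (RAG) pattern such roles typically build with frameworks like LangChain; `embed` and `llm_complete` are hypothetical stand-ins for a real embedding model and LLM client.

```python
# Framework-agnostic RAG sketch for log analysis: rank stored documents
# by similarity to a question, then ground the LLM's answer in the top
# matches. `embed` and `llm_complete` are hypothetical callables.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def answer(question, docs, embed, llm_complete, k=3):
    # Retrieve: score every document against the question embedding.
    q_vec = embed(question)
    ranked = sorted(docs, key=lambda d: cosine(embed(d), q_vec), reverse=True)
    context = "\n".join(ranked[:k])
    # Generate: constrain the model to the retrieved context.
    prompt = f"Answer using only this context:\n{context}\n\nQ: {question}"
    return llm_complete(prompt)
```

In a production pipeline the two callables would be replaced by a vector store and a hosted or local model behind an API.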

Posted 2 weeks ago

Apply

7.0 - 10.0 years

16 - 20 Lacs

bengaluru

Work from Office

We are seeking an AI engineer with 7-10 years of experience, a strong background in machine learning, solid programming skills, and a deep understanding of generative models. The position is responsible for turning research into practical solutions that address real-world problems while ensuring the reliability and ethical use of generative AI.

Technical Requirements:
- Strong proficiency in Python for data processing and automation.
- Hands-on experience with generative AI models and their integration into data workflows.
- Hands-on experience with prompt engineering and LLMs (open-source and closed-source).
- Hands-on experience with application development frameworks such as LangChain and LangGraph.
- Familiarity with REST frameworks such as FastAPI, Flask, and Django, and with Angular.
- Experience with cloud platforms (AWS, GCP, Azure) and related services is a plus.
- Familiarity with containerization and orchestration tools (Docker, Kubernetes).

As a Data Analysis & Simulation Professional, you will be responsible for:
- Data Pipeline Development: Design and implement scalable data pipelines using Python to ingest, process, and transform log data from various sources.
- Generative AI Integration: Collaborate with data scientists to integrate generative AI models into the log analysis workflow. Develop APIs and services to deploy AI models for real-time log analysis and insights generation.
- Data Monitoring and Maintenance: Set up monitoring and alerting systems to ensure the reliability and performance of data pipelines. Troubleshoot and resolve issues related to data ingestion, processing, and storage.
- Collaboration and Documentation: Work closely with cross-functional teams to understand requirements and deliver solutions that meet business needs. Document data pipeline architecture, processes, and best practices for future reference and knowledge sharing.
- Evaluation and Testing: Conduct thorough testing and validation of generative models.
- Research and Innovation: Stay updated with the latest advancements in generative AI and explore innovative techniques to enhance model capabilities. Experiment with different architectures and approaches.
- Snowflake Utilization (good to have): Design and optimize data storage and retrieval strategies using Snowflake. Implement data modeling, partitioning, and indexing strategies to enhance query performance.

Posted 2 weeks ago

Apply

8.0 - 11.0 years

20 - 25 Lacs

hyderabad

Work from Office

Leverage deep expertise in database management and optimization to ensure high performance and reliability of our data systems. Identify bottlenecks and performance issues within data pipelines; optimize query performance, data access, and overall data processing. Design, deploy, and manage complex data systems in cloud environments (e.g., AWS, Azure, GCP) using tools such as Terraform, adhering to the AWS Well-Architected Framework. Develop and implement complex automation solutions using tools such as Jenkins and scripts to streamline data operations and enhance efficiency. Architect and manage enterprise HA and DR solutions to ensure business continuity and data availability. Expertly analyze and optimize database performance, identifying and resolving bottlenecks. Ensure adherence to cloud security best practices and compliance standards, protecting sensitive data and systems. Manage complex incidents, troubleshoot issues, and implement effective solutions to maintain data integrity and system reliability. Demonstrate leadership skills, mentor junior team members, and foster a collaborative and communicative team environment.

Skills:
- Leadership: Proven leadership skills with the ability to inspire and motivate a team.
- Project Management: Proven experience in managing complex database projects.
- Database Management Systems: Expertise in one or more relational database management systems.
- Security and Compliance: In-depth knowledge of database security principles and compliance requirements.
- Performance Tuning: Advanced skills in monitoring and optimizing database performance.
- Team Collaboration: Effective collaboration with cross-functional teams and departments.
- Vendor Management: Experience in engaging with database technology vendors.
- Problem-Solving: Strong problem-solving skills, especially in resolving complex database issues.
- Communication: Excellent communication skills for conveying technical information to both technical and non-technical stakeholders.
- Strategic Planning: Ability to contribute to the development of the organization's overall database strategy.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

7 - 12 Lacs

bengaluru

Work from Office

5+ years of experience in Data Science, or a relevant PhD plus 1+ year of industry experience. Knowledge of machine learning algorithms focusing on regression, classification, clustering, probability networks, association rules, deep neural networks, and reinforcement learning. Extensive experience with analytical and quantitative problem-solving. Experience with common analysis tools such as Jupyter and open-source or commercially available libraries. Programming knowledge in at least one language such as Python, along with excellent SQL skills. Hands-on experience with Python data science libraries such as Pandas, NumPy, scikit-learn, TensorFlow, PyTorch, and Matplotlib/Seaborn. Familiarity with ML lifecycle management tools such as Kubeflow and MLflow is a strong plus. Experience in data modeling on BigQuery and the Hive/Spark/Hadoop/Kafka ecosystem. Knowledge of database modeling and data warehousing principles.

The impact you will create:
- Applying cutting-edge machine learning algorithms and statistical models to uncover meaningful insights from large and complex datasets.
- Constructing predictive models that utilize historical data to forecast trends and anticipate future outcomes with accuracy and precision.
- Collecting, organizing, analyzing, and interpreting data, and drawing insightful conclusions that enable us to work in a smarter way.
- Creating visual interpretations of data and explaining graphs and charts with insightful notes and summaries.
- Analyzing user data and assisting product development in finding new, innovative ways of presenting and making use of data.
- Collaborating with other BUs to identify business problems and develop data-driven solutions.
- Supporting our management team in identifying, measuring, and following up on key metrics.
- Providing regular, accurate, and comprehensive statistical reports.
- Providing objective insight and analysis to influence decision making.
- Ensuring the quality of data and actively cleaning data to guarantee top-notch relevance and accuracy.
- Actively keeping up to date on external market and data research, and working to add data points from external sources to our own data to create value-added analysis.
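For illustration only (not from the listing), a minimal scikit-learn sketch of the modeling loop such a role involves: split data, fit a boosted classifier, and evaluate. The synthetic dataset is a placeholder.

```python
# Minimal scikit-learn sketch: train/test split, a gradient-boosted
# classifier, and a basic evaluation report on synthetic data.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Placeholder data; a real project would load features from a warehouse.
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Precision/recall/F1 per class on held-out data.
print(classification_report(y_test, model.predict(X_test)))
```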

Posted 2 weeks ago

Apply

15.0 - 20.0 years

20 - 25 Lacs

bengaluru

Work from Office

Shape the future of our Developer Experience and Process strategy. Increase developer productivity by analyzing, designing, and implementing innovative development processes. Take part in hands-on development of critical infrastructure components. Decompose complex problems into simple, straightforward solutions, providing mechanisms for the teams to prioritize ruthlessly and move with urgency. Demonstrate excellence resulting in scalable systems and services with the highest-quality architecture and design. Dive deep into critical system issues, proactively addressing similar root causes, and raise the bar on operational excellence. Collaborate with other Coupang tech leaders to make the service extensible and unlock opportunities for innovation.

Essential Qualifications:
- At least 15 years in software development, with a strong record of delivering impactful, large-scale systems.
- Deep expertise in distributed systems, storage, data processing, and server technologies.
- Fluency in one or more of C++, Java, Go, JavaScript, and TypeScript.

Preferred Qualifications:
- Experience with CI/CD pipelines, JFrog Artifactory, YAML, and the AWS cloud.
- Experience with Kubernetes, gRPC, and Spring Boot.
- Experience with concurrency, multi-threading, synchronization, and non-blocking IO.
- Deep understanding of operating system kernels and distributed systems such as Kafka, Cassandra, and Spark.
- Ability to handle multiple competing priorities in a fast-paced environment while leading the delivery of large-scale services for complex business offerings.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

13 - 18 Lacs

bengaluru

Work from Office

Lead architecture design for scalable, extensible, and reliable systems in Post Purchase & LFS. Align technology roadmaps with leadership and stakeholders. Remove technical blockers and simplify complex problems into actionable solutions. Decompose complex problems into simple, straightforward solutions, providing mechanisms for the teams to prioritize ruthlessly and move with urgency. Write code when needed to guide teams and support high-priority initiatives. Identify business and tech opportunities; evaluate and adopt new technologies. Recruit, mentor, and grow top-tier engineering talent.

Essential Qualifications:
- At least 15 years in software development, with a strong record of delivering impactful, large-scale systems.
- 7+ years building and operating backend services and web applications.
- Deep expertise in distributed systems, storage, data processing, and server technologies.
- Strong coding skills in Java and the Spring Framework; object-oriented design.
- Experience driving improvements in code quality.

Additionally, experience with:
- AWS-based service development and operations
- Frontend frameworks (React, Angular)
- ORM tools (JPA, Hibernate) and domain modeling
- NoSQL and relational/columnar databases
- Git, CI/CD automation, and Agile methodologies
- Logistics-related service development

Posted 2 weeks ago

Apply

2.0 - 5.0 years

16 - 18 Lacs

mumbai

Work from Office

Key Responsibilities:
- Understand business requirements by engaging with business teams.
- Extract data from valuable data sources and automate the data collection process.
- Process and clean data, and validate the integrity of data to be used for analysis.
- Perform exploratory data analysis to identify trends and patterns in large amounts of data.
- Build machine-learning-based models using algorithms and statistical techniques such as regression, decision trees, and boosting.
- Present insights using data visualization techniques.
- Propose solutions and strategies for various complex business challenges.
- Build GenAI models using RAG frameworks for chatbots, summarisation, etc.
- Develop model deployment pipelines using Lambda, ECS, etc.

Skills & Attributes:
- Knowledge of statistical programming languages such as R and Python, database query languages such as SQL, and statistical concepts such as distributions, regression, and hypothesis tests.
- Experience with data visualization tools such as Tableau and Qlik Sense.
- Ability to write comprehensive reports, with an analytical mind and an inclination for problem-solving.
- Exposure to advanced techniques such as GenAI, neural networks, NLP, and image and speech processing.
- Ability to engage with stakeholders to understand business requirements and convert them into technical problems for solution development and deployment.

Posted 2 weeks ago

Apply

1.0 - 6.0 years

3 - 8 Lacs

pune

Work from Office

1 year of experience developing and training machine learning models using structured and unstructured data. Experience with LLMs, transformers, and generative AI (e.g., GPT, Claude). Knowledge of AI/LLM stacks such as LangChain, LangGraph, and vector databases. Exposure to conversational interfaces, AI copilots, and agent-based automation. Experience deploying models in production environments (e.g., MLflow). Strong programming skills in Python (with libraries such as TensorFlow, PyTorch, scikit-learn, and NumPy). Solid understanding of machine learning fundamentals, deep learning architectures, natural language processing, and computer vision. Knowledge of data processing frameworks (e.g., Pandas, Spark) and databases (SQL, NoSQL). Job Category: Artificial Intelligence. Job Type: Full Time. Job Location: Pune.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

noida

Work from Office

At Times Internet, we create premium digital products that simplify and enhance the lives of millions. As India's largest digital products company, we have a significant presence across a wide range of categories, including News, Sports, Fintech, and Enterprise solutions. Our portfolio features market-leading and iconic brands such as TOI, ET, NBT, Cricbuzz, Times Prime, Times Card, Indiatimes, Whatshot, Abound, Willow TV, Techgig, and Times Mobile, among many more. Each of these products is crafted to enrich your experiences and bring you closer to your interests and aspirations.

As an equal opportunity employer, Times Internet strongly promotes inclusivity and diversity. We are proud to have achieved overall gender pay parity in 2018, verified by an independent audit conducted by Aon Hewitt. We are driven by the excitement of new possibilities and are committed to bringing innovative products, ideas, and technologies to help people make the most of every day. Join us and take us to the next level!

About the Business Unit (ET B2B): The Economic Times B2B Verticals (ETB2B) is a leading business media platform under Times Internet Limited, serving 23+ industry and functional domains including Auto, Energy, Pharma, Retail, and HR. With a monthly reach of over 8 million professionals, ETB2B delivers curated content, newsletters, and premium conferences to drive decision-making, learning, and networking. It also operates global editions and platforms like ET Masterclass and vConfex to meet evolving digital needs. The ET B2B Intent Signal Platform is a new initiative designed to turn high-value traffic into buying signals for enterprise vendors by combining digital behavior, enrichment tools, and survey-based data from ET events.

Key Responsibilities:
- Build robust ETL/ELT pipelines to ingest, clean, transform, and aggregate massive volumes of data from multiple sources, including web behavior data, third-party APIs, CRM data, and more.
- Create and train predictive models to identify buying-intent signals from unstructured and structured data by leveraging NLP techniques, pattern recognition, and behavioral analytics.
- Utilize state-of-the-art NLP libraries (spaCy, Transformers), ML frameworks (scikit-learn, TensorFlow/PyTorch), and APIs (OpenAI) to enhance data processing and feature engineering.
- Work with backend engineers and end-to-end software teams to deploy ML models into production environments; partner closely with product managers and data scientists to iterate on model tuning and feature prioritization.
- Ensure data is accurate, fresh, and compliant with security policies such as GDPR, SOC 2, and other industry standards. Implement monitoring and alerting for data pipeline health.
- Continuously improve data pipeline efficiency and ML model inference speed to support real-time or near-real-time buyer-intent scoring.
- Maintain clear technical documentation and best practices for data workflows, model deployment, and AI experiments.

Required Skills and Qualifications:
- Expertise in writing clean, modular, and optimized Python code for data manipulation, ML pipeline development, and model integration.
- Hands-on experience with ETL tools (Airflow, Prefect), data processing libraries (Pandas, Dask), and workflow orchestration frameworks.
- Practical knowledge of supervised/unsupervised learning algorithms and NLP techniques such as entity extraction, sentiment analysis, and text classification, with familiarity with libraries like spaCy, NLTK, and transformers.
- Experience deploying ML models using containerization technologies (Docker, Kubernetes) and cloud infrastructure (AWS SageMaker, GCP AI Platform, or Azure ML).
- Experience using relational databases (PostgreSQL, MySQL) and large-scale data stores (Redshift, BigQuery, Snowflake) for batch and streaming analytics.
- Ability to create dashboards and reports, or to work alongside data analysts to communicate model insights effectively.
- Knowledge of version control (Git), CI/CD pipelines, unit testing, and collaborative agile workflows.

Education and Experience:
- Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related quantitative discipline.
- 2-5 years of professional experience in data engineering, applied ML, or software engineering roles, preferably within SaaS, Martech, or B2B data analytics domains.
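For illustration only (not part of the posting), a minimal Airflow DAG sketch of the ETL/ELT pipelines described above, assuming Airflow 2.4+ for the `schedule` argument; the DAG id, schedule, and task bodies are hypothetical placeholders.

```python
# Hedged Airflow sketch: a three-step extract/transform/load DAG with
# stubbed task bodies. Names and schedule are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull web-behavior and CRM data from source APIs")

def transform():
    print("clean, deduplicate, and enrich records")

def load():
    print("write aggregates to the warehouse")

with DAG(
    dag_id="intent_signals_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+; older versions use schedule_interval
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3   # linear dependency: extract, then transform, then load
```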

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

pune

Work from Office

About the Role: We are looking for a skilled Data Engineer to join our team and play a key role in building and maintaining robust data pipelines that power AI-driven solutions. The ideal candidate will have strong expertise in data processing, ETL workflows, and real-time streaming, ensuring high-quality, reliable data for AI agents.

Key Responsibilities:
- Clean, organize, and prepare data for AI agents to consume.
- Design, develop, and maintain ETL processes to ensure smooth data flow.
- Generate reports on data quality and implement continuous improvements.
- Identify and integrate structured, unstructured, and streaming data sources relevant to AI tasks.
- Apply data cleaning, labeling, and normalization techniques.
- Implement validation, deduplication, and error-handling mechanisms for data integrity.
- Collaborate with cross-functional teams to ensure data availability and reliability.

Required Skills & Qualifications:
- Programming: Proficiency in Python and SQL.
- Big Data Frameworks: Strong knowledge of Apache Spark and Apache Flink.
- Streaming Platforms: Experience with Apache Kafka for real-time data processing.
- Workflow Orchestration: Hands-on experience with Apache Airflow.
- Data Handling: Familiarity with both structured and unstructured data.
- Data Quality: Experience implementing data quality checks for AI-driven systems.
- Cloud Platforms: Working experience with AWS, GCP, or Azure.

Preferred Qualifications:
- Experience with containerization tools like Docker and Kubernetes.
- Knowledge of data lake and data warehouse architectures.
- Exposure to AI/ML data pipelines.

Work Experience: 2-5 years. Job Type: Full time. Location: Pune, Maharashtra, India.
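For illustration only (not from the listing), a minimal sketch of the Kafka streaming requirement using Spark Structured Streaming; it assumes the Spark-Kafka connector package is available at runtime, and the broker, topic, and paths are hypothetical.

```python
# Hedged sketch: consume a Kafka topic with Spark Structured Streaming
# and land the raw payloads as Parquet. Requires the spark-sql-kafka
# connector package; broker, topic, and paths are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events_stream").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .load()
    # Kafka values arrive as bytes; cast to string for downstream parsing.
    .selectExpr("CAST(value AS STRING) AS payload")
    .withColumn("ingested_at", F.current_timestamp())
)

query = (
    events.writeStream.format("parquet")
    .option("path", "/data/lake/events")
    .option("checkpointLocation", "/data/checkpoints/events")
    .start()
)
query.awaitTermination()
```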

Posted 2 weeks ago

Apply

2.0 - 5.0 years

4 - 7 Lacs

pune

Work from Office

KPI Partners is seeking a skilled Databricks Specialist to join our team. The ideal candidate will possess a strong background in data engineering, analytics, and machine learning, with substantial experience on the Databricks platform.

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines and ETL processes on Databricks.
- Collaborate with data scientists and analysts to support analytics initiatives using Databricks and Apache Spark.
- Optimize data engineering workflows for performance and cost efficiency.
- Monitor and troubleshoot data processing jobs and workflows to ensure high availability and reliability.
- Implement and maintain data governance and security measures on Databricks.
- Provide technical guidance and support to team members on Databricks best practices and performance tuning.
- Stay updated with the latest trends in data engineering, big data, and cloud technologies.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- Proven experience working with Databricks, Apache Spark, and big data technologies.
- Strong programming skills in languages such as Python, Scala, or SQL.
- Familiarity with cloud platforms such as AWS, Azure, or Google Cloud.
- Experience with data visualization tools and frameworks.
- Excellent problem-solving skills and the ability to work independently as well as part of a team.

Preferred Qualifications:
- Databricks certification or relevant big data certifications.
- Experience with machine learning libraries and frameworks.
- Knowledge of data warehousing solutions and methodologies.

If you are passionate about data and possess a deep understanding of Databricks and its capabilities, we encourage you to apply for this exciting opportunity with KPI Partners. Join us in our mission to leverage data for impactful decision-making.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

bengaluru

Work from Office

Specialism: Data, Analytics & AI. Management Level: Senior Associate.

Summary: In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Design and implement ETL/ELT pipelines using AWS services such as Lambda, Glue, EMR, and Step Functions.
- Develop scalable data processing workflows using Python and/or PySpark.
- Work with AWS data lake and data warehouse services (e.g., S3, Redshift, Athena, Lake Formation) to ingest, transform, and store structured and semi-structured data.
- Optimize data pipelines for performance, reliability, and cost-efficiency.
- Collaborate with cross-functional teams in a Scrum/Agile setup to deliver sprint goals.
- Ensure data quality, lineage, and governance through automated validation and monitoring.
- Maintain and enhance CI/CD pipelines for data workflows.
- Document technical designs, data flows, and operational procedures.
- Develop and deliver large-scale data ingestion, data processing, and data transformation projects on the Azure cloud.
- Mentor and share knowledge with the team through design reviews, discussions, and prototypes.
- Work with customers to deploy, manage, and audit standard processes for cloud products.

Mandatory skill sets: AWS Lambda, Glue, S3; hands-on experience with PySpark/Python on AWS.
Preferred skill sets: Other AWS services such as Redshift and Athena; a strong understanding of data lake and data warehouse architectures on AWS; experience with Airflow, Terraform, or CloudFormation; AWS certification.
Years of experience required: 3 to 12 years.
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA (60% and above). Degrees/fields of study required: Bachelor of Engineering, Bachelor of Technology, MBA (Master of Business Administration).
Required Skills: Data Engineering. Additional skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}.
Travel Requirements; Available for Work Visa Sponsorship.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

gurugram

Work from Office

Join Teleperformance, Where Excellence Meets Opportunity! Teleperformance is a leading provider of customer experience management, offering premier omnichannel support to top global companies. Our diverse service locations, including on-site and work-at-home programs, ensure flexibility and broad reach.

Why Choose Teleperformance? We emphasize the importance of our employees, fostering enduring relationships within our teams and communities. Our dedication to employee satisfaction distinguishes us. Utilize advanced support technologies and processes engineered to achieve outstanding results. We cultivate lasting client relationships and make positive contributions to our local communities.

Become Part of an Exceptional Team! Join Teleperformance, where our world-class workforce and innovative solutions drive success. Experience a workplace that values your development, supports your goals, and celebrates your accomplishments.

Job Description: General Information Technology work involves managing or performing work across multiple areas of an organization's overall IT platform/infrastructure, including analysis, development, and administration of:
- IT systems software, hardware, and databases
- Data and voice networks
- Data processing operations
- End-user technology and software support

Conducts cost/benefit analyses for proposed IT projects as input to the organization's IT roadmap. An experienced specialist in one specialized discipline with a thorough understanding of related disciplines. Will most often be a driving force behind the development of new solutions for programs, complex projects, processes, or activities. Serves as the final decision/opinion maker in the area; coaches, mentors, and trains others on the area of expertise. Ensures the implementation of short- to medium-term activities within the business area or support sub-function in the context of the strategy for the department. Ensures appropriate policies, processes, and standards are developed and implemented to support the short- to medium-term tactical direction. Leads a team of specialists, sometimes with several hierarchical levels, with full employee lifecycle responsibility.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

bengaluru

Work from Office

Specialism: Data, Analytics & AI. Management Level: Senior Associate.

Summary: In data engineering at PwC, you will focus on designing and building data infrastructure and systems to enable efficient data processing and analysis. You will be responsible for developing and implementing data pipelines, data integration, and data transformation solutions. A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

Responsibilities:
- Build interactive dashboards using Power BI.
- Develop custom apps with Power Apps or automate workflows using Power Automate.
- Collaborate with cross-functional teams in an Agile/Scrum setup.
- Engage with clients to gather requirements, present solutions, and deliver value.
- Ensure data accuracy, performance optimization, and visual storytelling best practices.

Mandatory skill sets: Power BI, Power Automate, Power Apps, DAX, Power Query; strong SQL skills; experience working in a cloud environment (AWS/Azure).
Preferred skill sets: Microsoft certifications (PL-300, PL-100, PL-400).
Years of experience required: 3 to 12 years.
Education qualification: BE, B.Tech, ME, M.Tech, MBA, MCA. Degrees/fields of study required: Bachelor's degree, Master's degree.
Required Skills: Data Visualization. Additional skills: Accepting Feedback, Active Listening, Agile Scalability, Amazon Web Services (AWS), Analytical Thinking, Apache Airflow, Apache Hadoop, Azure Data Factory, Communication, Creativity, Data Anonymization, Data Architecture, Database Administration, Database Management System (DBMS), Database Optimization, Database Security Best Practices, Databricks Unified Data Analytics Platform, Data Engineering, Data Engineering Platforms, Data Infrastructure, Data Integration, Data Lake, Data Modeling, Data Pipeline {+ 27 more}.
Travel Requirements; Available for Work Visa Sponsorship.

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

ahmedabad

Work from Office

Key Responsibilities:
- Design, deliver, and maintain the appropriate data solution to provide the correct data for analytical development to address key issues within the organization.
- Gather detailed data requirements and collaborate with a cross-functional team to deliver high-quality results (such as Tableau developers, analysts, and business users).
- Develop and operate modern data architecture approaches to meet key business objectives and provide end-to-end data solutions.
- Contribute to generic frameworks for data ingestion, data processing, and data integrations per business requirements.
- Extend DevOps capabilities for deploying data solutions.
- Support the creation and adoption of data assets for the organization, putting data quality at the focus of all data solutions.
- Act as a data owner and functional data SME for the assigned area.
- Lead the coaching of junior data engineering resources.
- Ensure effective and timely delivery of project work, raising issues and risks to the project manager in a timely manner so that appropriate actions can be taken to mitigate them.
- Enforce best practices for data design, architecture, and implementation.

Qualifications & Experience:
- Strong experience with cloud services within Azure, AWS, or GCP platforms (preferably Azure).
- Strong experience with analytical tools (preferably SQL, dbt, Snowflake, BigQuery, Tableau).
- Experience with design and software programming development (preferably Python, JavaScript, and/or R).
- Experience building APIs (experience with GraphQL is a plus).
- Hands-on experience with DevOps and CI/CD processes.
- Experience in Agile methodologies (certifications preferred).
- Project management experience and strong communication skills.
- A self-starter, driven, with high business process acumen.
- A team player with a positive attitude and the ability to work across different business stakeholders and technical teams to accomplish complex tasks.

Professional Attributes:
- Communication skills: At Kraft Heinz you'll easily be exposed to senior management, no matter your level. Therefore, it's important you have excellent communication skills to deal with all kinds of different stakeholders.
- Analytical: We're a very data-driven company. You know how to translate complex data into a simple solution with your analytical mindset.
- Curiosity, positivity & enthusiasm: You're curious, positive, and enthusiastic. People know you as the driver of the team.
- Project management skills: Time management has no secrets for you. You're organized, structured, and always have an overview of all the deliverables. You know how to bring multiple projects to a successful ending within the given timeframe.
- Team player: Achieving results is nice, but achieving results with the team is simply the best. You're a team player, which means you're sometimes a leader, sometimes a follower, but always working towards the same common goal together with your teammates.

Location(s): Ahmedabad - Venus Stratum GCC

Posted 2 weeks ago

Apply

3.0 - 6.0 years

5 - 8 Lacs

bengaluru

Work from Office

MOR4JP00022914 - Python DevOps

Role: Python DevOps. You will develop the tooling required to execute big data processing and advanced data analytics pipelines.

Profile: Junior / Intermediate Developer; Automation; Automated Testing.

Required:
- Python
- Linux shell scripting
- Unit testing, integration testing
- Strong SQL or equivalent NoSQL skills
- Gradle
- Build and release automation; CI/CD tools such as Jenkins, AWS, OpenStack, and Azure DevOps

Nice to have:
- AI/ML experience
- Python virtual environments
- Scala / Java
- Spark / Hadoop
- Jupyter Notebooks, Azure, Snowflake, MLOps, Databricks
- Experience working with Data Scientists

Job description:
- Encouraging and building automated processes wherever possible
- Python and shell programming in the context of DevOps automation
- Testing and debugging applications
- Developing back-end components
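For illustration only (not part of the posting), a small Python sketch of the build-and-release automation this role describes: drive Gradle from Python and propagate failures. The project layout is a hypothetical placeholder.

```python
# Hedged sketch: invoke Gradle build and test tasks from Python, echoing
# each command and failing loudly on a non-zero exit code.
import subprocess
import sys

def run(cmd, cwd="."):
    """Run a shell command in cwd, streaming output; return its exit code."""
    print(f"$ {' '.join(cmd)}")
    return subprocess.run(cmd, cwd=cwd).returncode

def build_and_test(project_dir):
    # Build first; skip tests if compilation already failed.
    if run(["./gradlew", "assemble"], cwd=project_dir) != 0:
        return 1
    return run(["./gradlew", "test"], cwd=project_dir)

if __name__ == "__main__":
    # Usage: python build.py [project_dir]
    sys.exit(build_and_test(sys.argv[1] if len(sys.argv) > 1 else "."))
```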

Posted 2 weeks ago

Apply

3.0 - 8.0 years

5 - 10 Lacs

bengaluru

Work from Office

We're seeking top talent for our AI engineering team to develop high-quality machine learning models, services, and scalable data processing pipelines. The candidate should have a strong machine learning background, experience managing a team of machine learning engineers, and a focus on taking ML models to production.

As an Applied AI ML Lead within the Digital Intelligence team at JPMorgan, you will lead a team of ML engineers with a focus on all aspects of ML architecture, infrastructure selection, model training, fine-tuning, ablation studies, and productization. This role requires strong technical expertise, leadership skills, and the ability to collaborate across teams. The ideal candidate has a proven track record in managing teams, delivering production-grade ML models across cloud and on-device environments, and making informed decisions that align with business objectives and deliver tangible value.

Job Responsibilities:
- Research and develop machine learning algorithms to solve complex problems related to personalized financial services in the retail and digital banking domains.
- Work closely with cross-functional teams to translate business requirements into technical solutions and drive innovation in banking products and services.
- Collaborate with product managers, key business stakeholders, engineering, and platform partners to lead challenging projects that deliver cutting-edge machine-learning-driven digital solutions.
- Stay up to date with the latest publications in relevant machine learning domains and find applications for them in your problem spaces for improved outcomes.
- Communicate findings and insights to stakeholders through presentations, reports, and visualizations.
- Work with regulatory and compliance teams to ensure that machine learning models adhere to standards and regulations.
- Mentor juniors in delivering successful projects and building successful careers in the firm.
- Participate in and contribute back to firm-wide machine learning communities through patenting, publications, and speaking engagements.

Required qualifications, capabilities, and skills:
- Minimum of 3+ years of experience as an Engineering Manager leading ML teams.
- MS or PhD degree in Computer Science, Statistics, Mathematics, or a machine-learning-related field.
- Expertise in at least one of the following areas: Natural Language Processing, Graph Learning, Reinforcement Learning, Ranking and Recommendation.
- Deep knowledge of data structures, algorithms, machine learning, data mining, information retrieval, and statistics.
- Demonstrated expertise in machine learning frameworks: TensorFlow, PyTorch, PyG, Keras, MXNet, scikit-learn.
- Strong programming knowledge of Python and Spark; strong grasp of vector operations using NumPy and SciPy; strong grasp of distributed computation using multithreading, multiple GPUs, Dask, Ray, Polars, etc.
- Strong analytical and critical thinking skills for problem solving.
- Excellent written and oral communication along with demonstrated teamwork skills; demonstrated ability to clearly communicate complex technical concepts to both technical and non-technical audiences.
- Experience working in interdisciplinary teams and collaborating with other researchers, engineers, and stakeholders.
- A strong desire to stay updated with the latest advancements in the field and continuously improve one's skills.

Preferred qualifications, capabilities, and skills:
- 8+ (MS) or 5+ (PhD) years of relevant experience, with at least 3 years in leadership roles.
- Deep hands-on experience with real-world ML projects, through academic research, internships, or industry roles.
- Experience with distributed data/feature engineering using popular cloud services like AWS EMR.
- Experience with large-scale training, validation, and testing experiments.
- Experience with cloud machine learning services in AWS, such as SageMaker.
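For illustration only (not from the listing), a short NumPy sketch of the vectorized operations mentioned above: scoring a batch of embeddings against a query with a single matrix-vector product. Shapes and data are arbitrary examples.

```python
# Vectorized cosine similarity in NumPy: no Python-level loops.
import numpy as np

rng = np.random.default_rng(0)
batch = rng.normal(size=(10_000, 128))   # 10k candidate embeddings
query = rng.normal(size=128)             # one query embedding

# Normalize rows once; then one matrix-vector product scores everything.
batch_norm = batch / np.linalg.norm(batch, axis=1, keepdims=True)
query_norm = query / np.linalg.norm(query)
scores = batch_norm @ query_norm

# Indices of the five most similar candidates, best first.
top5 = np.argsort(scores)[-5:][::-1]
print("top-5 candidate indices:", top5)
```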

Posted 2 weeks ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies