
4894 Data Processing Jobs - Page 21

Set up a Job Alert
JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

8.0 - 12.0 years

25 - 30 Lacs

Pune

Work from Office

Cohesity is the leader in AI-powered data security. Over 13,600 enterprise customers, including over 85 of the Fortune 100 and nearly 70% of the Global 500, rely on Cohesity to strengthen their resilience while providing Gen AI insights into their vast amounts of data. Formed from the combination of Cohesity with Veritas' enterprise data protection business, the company's solutions secure and protect data on-premises, in the cloud, and at the edge. Backed by NVIDIA, IBM, HPE, Cisco, AWS, Google Cloud, and others, Cohesity is headquartered in Santa Clara, CA, with offices around the globe. We've been named a Leader by multiple analyst firms and have been globally recognized for innovation, product strength, simplicity in design, and our culture. Want to join the leader in AI-powered data security?

We are looking for a proactive and experienced Senior Benefits Analyst to join our Total Rewards team in India, supporting employee benefits programs across the Asia Pacific (APAC) region. This role will play a critical part in the design, administration, and optimization of benefit offerings, ensuring they are competitive, compliant, and aligned with both local regulations and global strategy. The successful candidate will bring a strong understanding of regional benefits landscapes, analytical expertise, and stakeholder management experience.

HOW YOU'LL SPEND YOUR TIME HERE
1. Regional Benefits Operations & Support: Administer and manage employee benefit programs across multiple APAC countries (e.g., India, Singapore, Japan, Korea, Australia, Thailand, Hong Kong). Act as a subject matter expert for local and regional benefits policies, including statutory and supplementary plans. Coordinate with payroll, HR business partners, and finance to ensure accurate benefits-related transactions.
2. Compliance & Governance: Monitor and ensure compliance with local labor laws, tax regulations, and statutory requirements across all supported countries. Prepare and maintain audit-ready documentation and reports for internal and external audits.
3. Vendor & Broker Management: Manage relationships with regional and local vendors, insurance carriers, and brokers. Support RFP processes, renewals, SLAs, and performance reviews for benefit providers across APAC.
4. Data Analysis & Reporting: Analyze benefits utilization, cost trends, and employee participation rates to identify areas for improvement or cost containment. Create dashboards, benchmarking analyses, and executive summaries for regional HR and global Total Rewards teams.
5. Employee Experience & Communication: Develop and deliver benefits communication plans tailored to each APAC market. Support employee queries and escalations in a timely and professional manner. Collaborate with internal communications teams to promote benefit plan awareness and engagement.
6. Process Improvement & Strategy: Identify opportunities for process automation, efficiency improvements, and program enhancements. Contribute to regional and global benefits strategy projects and initiatives, including wellbeing, equity, and inclusion in benefits design.

WE'D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING
Bachelor's degree in Human Resources, Business Administration, Finance, or a related field. 8-12 years of progressive experience in employee benefits or total rewards, with a minimum of 4 years supporting multiple countries in the APAC region. Strong understanding of statutory and market-competitive benefits in key APAC markets. Proficiency in Microsoft Excel and data analytics; experience with HRIS platforms (e.g., Workday) is preferred. Excellent communication and interpersonal skills; ability to work with diverse, cross-cultural teams. Ability to manage multiple priorities and deliver high-quality outcomes under tight deadlines. Demonstrated ability to leverage AI tools to enhance productivity, streamline workflows, and support decision making.

Preferred Qualifications: Experience working in a multinational or regional shared services environment. Exposure to global benefits harmonization projects or M&A-related benefits transitions.

Why Join Us: Lead benefits initiatives that impact employees across a dynamic and diverse region. Collaborate with global experts and be part of a forward-thinking HR organization. Competitive salary, performance-based incentives, and regional exposure. A strong culture of learning, inclusion, and career growth.

Posted 1 week ago

Apply

10.0 - 15.0 years

40 - 45 Lacs

Noida

Work from Office

As a Senior Data Architect on our Global Data & Advanced Analytics team, you will be the visionary and designer of our data ecosystem. You'll leverage your expertise in AWS cloud-native technologies and big data platforms to build scalable data solutions that empower advanced analytics and AI-driven products. This is a senior role for a hands-on leader who can both strategize at the enterprise level and dive into technical design. You will collaborate with cross-functional teams to ensure our data architecture is robust, secure, and aligned with business needs, enabling Alight's mission to provide insightful, real-time solutions in health, wealth, and human capital.

Responsibilities:
Design and Strategy: Work with the data architecture team to define data architecture blueprints for our products, including data flow diagrams, system integrations, and storage solutions. Continuously refine the architecture to meet evolving business requirements and to incorporate new AWS capabilities and industry best practices. Drive adoption of and oversee adherence to architecture and standards.
Cloud Data Platform Development: Lead the development of our cloud-based data platform on AWS. Implement data pipelines and warehouses using AWS services (e.g., AWS Glue for ETL, AWS Lambda for serverless processing, Amazon Redshift for data warehousing, and S3 for data storage). Ensure that data is efficiently extracted, transformed, and loaded to support AI, automation, and analytics & reporting needs.
Big Data & Legacy Integration: Oversee the ingestion of large-scale datasets from various sources (transactional systems, APIs, external sources). Optimize processing of big data using Spark and integrate legacy Hadoop-based data into our AWS environment.
Data Modeling: Develop and maintain data models (conceptual, logical, physical) for our databases and data lakes. Design relational schemas and dimensional models that cater to both operational applications and analytical workloads. Ensure data is organized for easy access and high performance (for example, optimizing Redshift schema design and using partitioning or sort keys appropriately).
Advanced Analytics Enablement: Work closely with Data Science and Analytics teams to enable AI and advanced analytics. Provide well-structured data sets and create pipelines that feed machine learning models (e.g., customer personalization models, predictive analytics). Implement mechanisms to handle real-time streaming data (using tools like Kinesis or Kafka if needed) and ensure data quality and freshness for AI use cases.
Efficiency and Scalability: Design efficient, scalable processes for data handling. This includes optimizing ETL jobs (monitoring and tuning Glue/Spark jobs), implementing incremental data loading strategies instead of full loads where possible, and ensuring our data infrastructure can scale to growing data volumes. You will continually seek opportunities to automate manual data management tasks and improve pipeline reliability (CI/CD for data pipelines).
Data Governance & Security: Embed data governance into the architecture: implement data cataloging, lineage tracking, and governance policies. Ensure compliance with data privacy and security standards: implement access controls, encryption (at rest and in transit), and data retention policies aligned with Alight and client requirements. Work with the InfoSec team to perform regular audits of data access and to support features like data masking or tokenization for sensitive information.
Collaboration and Leadership: Collaborate with other technology leadership and architects, product managers, business analysts, and engineering leads to understand data needs and translate them into technical solutions. Provide technical leadership to data engineers: set development standards, guide them in choosing the right tools/approaches, and conduct design/code reviews. Lead architecture review sessions and be the go-to expert for any questions on data strategy and implementation.
Innovation and Thought Leadership: Stay abreast of emerging trends in data architecture, big data, and AI. Evaluate and recommend new technologies or approaches (for example, evaluate the use of data lakehouses, graph databases, or new AWS analytics services). Provide thought leadership on how Alight can leverage data for competitive advantage, and pilot proof-of-concepts for new ideas.

Required Qualifications:
Experience: 10+ years (preferred 15+ years) of experience in data architecture, data engineering, or related fields, with a track record of designing and implementing large-scale data solutions. Demonstrated experience leading data-centric projects from concept to production.
Cloud & Big Data Expertise: Hands-on expertise with AWS data services, especially AWS Glue, Lambda, Redshift, and S3. Proficiency in designing data pipelines and warehousing solutions on AWS is a must. Strong experience with big data technologies including Hadoop and Spark; able to optimize heavy data processing jobs and troubleshoot performance issues in distributed data systems.
Data Modeling & Warehousing: Exceptional skills in data modeling and database design. Able to design dimensional or normalized schemas. Deep understanding of SQL and proficiency in writing and tuning complex queries. Experience building and maintaining an enterprise data warehouse or data lake, including partitioning strategies, indexing, and query optimization.
Programming & Scripting: Proficiency in programming for data engineering: Python (or Scala/Java) for ETL/ELT scripting, and solid SQL skills for data manipulation and analysis. Experience with infrastructure-as-code (Terraform/CloudFormation) and CI/CD pipelines for deploying data infrastructure is a plus.
Analytics and AI Orientation: Knowledge of machine learning concepts and experience supporting data science teams. You should understand how to prepare data for modeling and have experience with one or more tools or frameworks for data analysis. Experience with real-time data streaming and processing (Kinesis, Kafka, or similar) is a plus, as is exposure to AI/ML services (like Amazon SageMaker or Bedrock).
Leadership & Soft Skills: Excellent communication skills with an ability to explain complex architectures in simple terms. Experience collaborating in cross-functional teams and leading technical initiatives. Proven ability to mentor junior engineers and to establish best practices in code quality, documentation, and data pipeline design. A problem-solving mindset and the flexibility to work in a fast-paced, evolving environment.
Education: Bachelor's degree in Computer Science, Information Systems, or a related field required. (Master's degree in a relevant field is a plus.)
Certifications: (Preferred) AWS Certified Solutions Architect or AWS Certified Data Analytics certification. Any big data or database certifications (Cloudera Data Platform, Oracle/SQL Server certs, etc.) will be a plus and reinforce your expertise in the field.
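For illustration only (not part of the posting): a minimal sketch of the incremental-loading strategy the responsibilities mention, assuming PySpark on AWS. The bucket paths, watermark value, and column names are hypothetical placeholders.

```python
# A minimal sketch of incremental loading: read only records newer than the
# last processed watermark instead of reloading the full table.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("incremental-load-sketch").getOrCreate()

last_watermark = "2024-01-01 00:00:00"  # in practice, fetched from a job-state store

incoming = (
    spark.read.parquet("s3://example-raw-bucket/transactions/")  # hypothetical path
    .where(F.col("updated_at") > F.lit(last_watermark))
)

# Light transformation, then append to the curated zone, partitioned by date
# so downstream engines can prune partitions at query time.
(
    incoming
    .withColumn("load_date", F.to_date("updated_at"))
    .write.mode("append")
    .partitionBy("load_date")
    .parquet("s3://example-curated-bucket/transactions/")
)
```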

Posted 1 week ago

Apply

20.0 - 25.0 years

50 - 100 Lacs

Hyderabad

Work from Office

Summary: 20 years of untapped data waiting to be explored. The digital revolution is changing everything, especially in pharmaceuticals, and Novartis has embraced a bold strategy to drive a company-wide digital transformation. With this mission, the Data Operations group within BSI is working with the Novartis business units and industry-leading technology partners to drive the execution of the Novartis data strategy, turning its rich data resources into real strategic assets that drive actionable insights across the organization. This ambition is one of the key pillars in the broader digital transformation happening at Novartis to become a medicines and data science company.

We are looking for a skilled and enthusiastic Data Engineer with expertise in any of the ETL tools, Databricks, Snowflake SQL, and PySpark to join our innovative team. As a Data Engineer, you will be responsible for designing, implementing, and optimizing scalable data pipelines, ensuring data quality, and building robust data infrastructure. You will collaborate closely with data scientists, domain experts, and other stakeholders to ensure the efficient and reliable flow of data across the organization.

About the Role
Key Responsibilities:
ETL Development: Design, implement, and maintain scalable ETL pipelines to extract, transform, and load data from various sources into data warehouses or data lakes.
Databricks Utilization: Leverage Databricks to develop, optimize, and scale data engineering workflows.
Data Modelling: Design and implement robust data models to support analytics and reporting needs.
SQL Proficiency: Write, optimize, and maintain complex SQL queries for data extraction and transformation tasks.
PySpark Development: Utilize PySpark for big data processing tasks, developing scalable and efficient solutions.
Data Quality and Validation: Implement data quality checks and validation processes to ensure data accuracy and consistency.
Data Integration: Integrate data from multiple sources, ensuring data consistency and reliability.
Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver solutions that meet business needs.
Documentation: Create and maintain comprehensive documentation for data pipelines, data models, and related processes.
Performance Optimization: Optimize data processing performance by fine-tuning ETL pipelines and data storage solutions.
Security and Compliance: Ensure compliance with data governance policies and security protocols to protect sensitive and confidential information.
Continuous Improvement: Stay current with industry trends and emerging technologies, continuously improving data engineering practices.

Education (minimum/desirable): Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field.

Experience / Skills:
8+ years of experience in a global company as a data steward, engineer, modeler, or data scientist, with a strong focus on ETL tools, SQL, and PySpark.
Business understanding of the pharmaceutical industry and data standards.
Domain experience in at least one of the following areas: a) Pharma R&D, b) Manufacturing, Procurement and Supply Chain, or c) Marketing and Sales. Experience working in the Pharma / Life Science industry is strongly preferred.
Strong proficiency in SQL and experience with database management systems (e.g., PostgreSQL, MySQL, Oracle).
Hands-on experience with Databricks for developing and optimizing data workloads.
Proficiency in PySpark for big data processing tasks.
Knowledge of data warehousing solutions (e.g., Snowflake).
Experience working with large codebases/repos using Git/Bitbucket.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills with senior stakeholders.

Preferred Qualifications:
Experience with cloud platforms (e.g., AWS, Google Cloud, Azure) and their data services (Databricks).
Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).
Knowledge of real-time data processing and streaming technologies.
Proficiency in data visualization tools (e.g., Tableau, Power BI).
Experience with DevOps practices and tools for CI/CD pipelines.
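For illustration only: a minimal sketch of the kind of data-quality checks described above, assuming PySpark (e.g., on Databricks). The table and column names are hypothetical.

```python
# A minimal sketch of pipeline data-quality validation in PySpark:
# count violations per rule, then fail loudly if any rule is breached.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks-sketch").getOrCreate()
df = spark.read.table("raw.clinical_shipments")  # hypothetical table

checks = {
    "null_primary_keys": df.filter(F.col("shipment_id").isNull()).count(),
    "duplicate_primary_keys": df.groupBy("shipment_id").count().filter("count > 1").count(),
    "negative_quantities": df.filter(F.col("quantity") < 0).count(),
}

failed = {name: n for name, n in checks.items() if n > 0}
if failed:
    # A real pipeline might quarantine the offending rows instead of failing.
    raise ValueError(f"Data-quality checks failed: {failed}")
```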

Posted 1 week ago

Apply

2.0 - 5.0 years

2 - 2 Lacs

Bengaluru

Work from Office

We are looking for a detail-oriented and organized Data Entry Coordinator to join our team. The ideal candidate will be responsible for accurate data entry, record maintenance, and coordinating with internal teams to ensure smooth workflow.

Posted 1 week ago

Apply

2.0 - 5.0 years

2 - 2 Lacs

Bengaluru

Work from Office

We are looking for a detail-oriented and organized Data Entry Coordinator to join our team. The ideal candidate will be responsible for accurate data entry, record maintenance, and coordinating with internal teams to ensure smooth workflow.

Posted 1 week ago

Apply

5.0 - 7.0 years

7 - 9 Lacs

Chennai

Work from Office

Job Title: Data Scientist

Position Summary: As a Data Scientist at FSL, you will leverage your expertise in Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, and Generative AI to develop innovative data-driven solutions and applications. You will play a key role in designing and deploying dynamic models and applications using modern web frameworks like Flask and FastAPI, ensuring efficient deployment and ongoing monitoring of these systems.

Key Responsibilities:
- Model Development and Application: Design and implement advanced ML and DL models. Develop web applications for model deployment using Flask and FastAPI to enable real-time data processing and user interaction.
- Data Analysis: Perform exploratory data analysis to understand underlying patterns, correlations, and trends. Develop comprehensive data processing pipelines to prepare large datasets for analysis and modeling.
- Generative AI: Employ Generative AI techniques to create new data points, enhance content generation, and innovate within the field of synthetic data production.
- Collaborative Development: Work with cross-functional teams to integrate AI capabilities into products and systems. Ensure that all AI solutions are aligned with business goals and user needs.
- Research and Innovation: Stay updated with the latest developments in AI, ML, DL, CV, and NLP. Explore new technologies and methodologies that can positively impact our products and services.
- Communication: Effectively communicate complex quantitative analysis in a clear, precise, and actionable manner to senior management and other departments.

Required Skills and Qualifications:
- Education: BE, Master's, or PhD in Computer Science, Data Science, Statistics, or a related field.
- Experience: 5 to 7 years of relevant experience in a data science role with a strong focus on ML, DL, and statistical modeling.
- Technical Skills: Strong coding skills in Python, including experience with Flask or FastAPI. Proficiency in ML/DL frameworks (e.g., PyTorch, TensorFlow), CV (e.g., OpenCV), and NLP libraries (e.g., NLTK, spaCy).
- Generative AI: Experience with generative models such as GANs, VAEs, or Transformers.
- Deployment Skills: Experience with Docker, Kubernetes, and continuous integration/continuous deployment (CI/CD) pipelines.
- Strong Analytical Skills: Ability to translate complex data into actionable insights.
- Communication: Excellent written and verbal communication skills.
- Certifications: Certifications in Data Science, ML, or AI from recognized institutions are an added advantage.
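For illustration only: a minimal sketch of serving a trained model behind FastAPI, the deployment pattern this posting describes. The model file and feature schema are hypothetical.

```python
# A minimal FastAPI model-serving sketch: load a saved estimator once at
# startup, then expose a JSON prediction endpoint.
import joblib
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-serving-sketch")
model = joblib.load("model.joblib")  # e.g., a scikit-learn estimator saved earlier

class Features(BaseModel):
    values: list[float]  # ordered feature vector the model expects

@app.post("/predict")
def predict(features: Features) -> dict:
    prediction = model.predict([features.values])[0]
    return {"prediction": float(prediction)}

# Run locally with: uvicorn app:app --reload
```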

Posted 1 week ago

Apply

5.0 - 7.0 years

20 - 27 Lacs

Mumbai

Work from Office

Job Title: Data Scientist

About Firstsource: Firstsource Solutions Limited, an RP-Sanjiv Goenka Group company (NSE: FSL, BSE: 532809, Reuters: FISO.BO, Bloomberg: FSOL:IN), is a specialized global business process services partner, providing transformational solutions and services spanning the customer lifecycle across Healthcare, Banking and Financial Services, Communications, Media and Technology, Retail, and other diverse industries. With an established presence in the US, the UK, India, Mexico, Australia, South Africa, and the Philippines, we make it happen for our clients, solving their biggest challenges with hyper-focused, domain-centered teams and cutting-edge tech, data, and analytics. Our real-world practitioners work collaboratively to deliver future-focused outcomes.

Position Summary: As a Data Scientist at FSL, you will leverage your expertise in Machine Learning, Deep Learning, Computer Vision, Natural Language Processing, and Generative AI to develop innovative data-driven solutions and applications. You will play a key role in designing and deploying dynamic models and applications using modern web frameworks like Flask and FastAPI, ensuring efficient deployment and ongoing monitoring of these systems.

Key Responsibilities:
- Model Development and Application: Design and implement advanced ML and DL models. Develop web applications for model deployment using Flask and FastAPI to enable real-time data processing and user interaction. Hands-on experience in LLM fine-tuning.
- Data Analysis: Perform exploratory data analysis to understand underlying patterns, correlations, and trends. Develop comprehensive data processing pipelines to prepare large datasets for analysis and modeling.
- Generative AI: Employ Generative AI techniques to create new data points, enhance content generation, and innovate within the field of synthetic data production.
- Collaborative Development: Work with cross-functional teams to integrate AI capabilities into products and systems. Ensure that all AI solutions are aligned with business goals and user needs.
- Research and Innovation: Stay updated with the latest developments in AI, ML, DL, CV, and NLP. Explore new technologies and methodologies that can positively impact our products and services.
- Communication: Effectively communicate complex quantitative analysis in a clear, precise, and actionable manner to senior management and other departments.

Required Skills and Qualifications:
- Education: BE, Master's, or PhD in Computer Science, Data Science, Statistics, or a related field.
- Experience: 5 to 7 years of relevant experience in a data science role with a strong focus on ML, DL, and statistical modeling.
- Technical Skills: Strong coding skills in Python, including experience with Flask or FastAPI. Proficiency in ML/DL frameworks (e.g., PyTorch, TensorFlow), CV (e.g., OpenCV), and NLP libraries (e.g., NLTK, spaCy).
- Generative AI: Experience with generative models such as GANs, VAEs, or Transformers.
- Deployment Skills: Experience with Docker, Kubernetes, and continuous integration/continuous deployment (CI/CD) pipelines.
- Strong Analytical Skills: Ability to translate complex data into actionable insights.
- Communication: Excellent written and verbal communication skills.
- Certifications: Certifications in Data Science, ML, or AI from recognized institutions are an added advantage.

Location: Hyderabad, Mumbai, Bangalore, and Chennai.

Disclaimer: Firstsource follows a fair, transparent, and merit-based hiring process. We never ask for money at any stage. Beware of fraudulent offers and always verify through our official channels or @firstsource.com email addresses.

Posted 1 week ago

Apply

8.0 - 10.0 years

16 - 18 Lacs

Hyderabad

Work from Office

This role involves developing robust data pipelines, implementing scalable data architectures, and establishing data governance frameworks to support data-driven decision making across the organization.

Essential functions: Design and implement efficient, scalable data pipelines. Optimize data storage and retrieval processes. Ensure data quality and consistency across systems. Collaborate with cross-functional teams to understand data requirements. Implement data security and compliance measures.

Qualifications: 8-10 years of experience in data engineering or a related field. Expert knowledge of SQL. Mid-level knowledge of Python (expert preferred). Experience with cloud platforms, particularly Google Cloud. Strong understanding of data modeling and warehouse concepts. Experience with ETL pipeline design and implementation. Experience with big data technologies (Hadoop, Spark).

Would be a plus: Knowledge of data governance and security practices. Experience with real-time data processing. Familiarity with BI tools and reporting platforms. Strong background in performance optimization and tuning. Advanced debugging and troubleshooting skills.

We offer: Opportunity to work on bleeding-edge projects. Work with a highly motivated and dedicated team. Competitive salary. Flexible schedule. Benefits package (medical insurance, sports). Corporate social events. Professional development opportunities. Well-equipped office.
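For illustration only: a minimal sketch of the SQL-on-Google-Cloud work this role centers on, assuming the google-cloud-bigquery client library. The project, dataset, and columns are hypothetical.

```python
# A minimal BigQuery query sketch: run an aggregation and iterate the rows.
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

query = """
    SELECT order_date, COUNT(*) AS orders, SUM(amount) AS revenue
    FROM `example_project.sales.orders`
    GROUP BY order_date
    ORDER BY order_date DESC
    LIMIT 30
"""

for row in client.query(query).result():
    print(row["order_date"], row["orders"], row["revenue"])
```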

Posted 1 week ago

Apply

1.0 - 2.0 years

3 - 7 Lacs

Hyderabad

Work from Office

Are you an organized business analyst? Do you have experience with market data/contract/vendor management? We're looking for a Market Data Procurement Analyst to: support the Global Research and Evidence Lab team's day-to-day market data procurement processes, along with contract management, invoicing, cost management, and other ad hoc projects; provide exceptional service to supported business units throughout the procurement process; work with business units to track, manage, and maintain data contracts in an organized manner; and raise, manage, and monitor sourcing requests on behalf of Global Research and Evidence Lab business units.

You'll be working in the Global Research and Evidence Lab team in Hyderabad. Global Research and Evidence Lab is a very dynamic, fast-paced, and creative team taking advantage of the latest technology and data processing techniques. Global Research and Evidence Lab is charged with helping clients make better decisions through research and data. As a Market Data Procurement Analyst, you'll play an important role in a small team focused on day-to-day management of vendor contracts, master service agreements, NDAs, purchase orders, and other related business management items.

You have: a university degree in legal studies or a related field, e.g., economics or general business management; a minimum of 1 to 2 years of experience in a business analyst role, ideally in a market data related function, with demonstrated abilities in general project management; experience managing projects with an eye toward improving service while satisfying business users' needs; ideally, experience drafting, negotiating, and reviewing contracts (MSAs, SOWs, NDAs) and other procurement-related items; advanced MS Office skills.

You are: highly organized and pro-active; an excellent verbal and written communicator, fluent in English; innovative, with great attention to detail; ready to join a fast-growing team in a dynamic and challenging environment; able to multi-task, with a sense of urgency and focus on deadlines; inquisitive, organized, and able to work independently; collaborative, able to work in tandem with different business stakeholders in different regions.

Posted 1 week ago

Apply

4.0 - 8.0 years

15 - 16 Lacs

Pune

Work from Office

We are looking for data engineers who have the right attitude, aptitude, skills, empathy, compassion, and hunger for learning to build products in the data analytics space. We value a passion for shipping high-quality data products, interest in the data products space, and curiosity about the bigger picture of building a company, product development, and its people.

Roles and Responsibilities: Develop and manage robust ETL pipelines using Apache Spark (Scala). Understand Spark concepts, performance optimization techniques, and governance tools. Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems to the Enterprise Data Warehouse/Data Lake/Data Mesh hosted on AWS or Azure. Collaborate cross-functionally to design effective data solutions. Implement data workflows utilizing AWS Step Functions or Azure Logic Apps for efficient orchestration. Leverage AWS Glue and Crawler or Azure Data Factory and Data Catalog for seamless data cataloging and automation. Monitor, troubleshoot, and optimize pipeline performance and data quality. Maintain high coding standards and produce thorough documentation. Contribute to high-level (HLD) and low-level (LLD) design discussions.

Technical Skills: Minimum 4 years of progressive experience building solutions in Big Data environments. A strong ability to build robust and resilient data pipelines that are scalable, fault-tolerant, and reliable in terms of data movement. 3+ years of hands-on expertise in Python, Spark, and Kafka. Strong command of AWS or Azure services. Strong hands-on capabilities with SQL and NoSQL technologies. Sound understanding of data warehousing, modeling, and ETL concepts. Familiarity with High-Level Design (HLD) and Low-Level Design (LLD) principles. Excellent written and verbal communication skills.
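For illustration only: a minimal sketch of the Spark-plus-Kafka ingestion this posting describes, assuming PySpark Structured Streaming with the spark-sql-kafka connector on the classpath. Broker, topic, and paths are hypothetical.

```python
# A minimal streaming-ingest sketch: read events from Kafka and land them
# as Parquet with checkpointing for fault tolerance.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-ingest-sketch").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # hypothetical broker
    .option("subscribe", "events")                      # hypothetical topic
    .load()
    .selectExpr("CAST(value AS STRING) AS payload", "timestamp")
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3://example-lake/events/")                      # hypothetical sink
    .option("checkpointLocation", "s3://example-lake/_chk/events/")   # for recovery
    .start()
)
query.awaitTermination()
```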

Posted 1 week ago

Apply

3.0 - 6.0 years

9 - 10 Lacs

Pune

Work from Office

Coditas is a new-age, offshore product development organization offering services pertaining to the entire software development life cycle. Headquartered in Pune, Coditas works with clients across the globe. We attribute our organic growth to an engineering-driven culture and steadfast philosophies around writing clean code, designing intuitive user experiences, and letting the work speak for itself.

Roles and Responsibilities: Develop and manage robust ETL pipelines using Apache Spark (Scala). Understand Spark concepts, performance optimization techniques, and governance tools. Develop a highly scalable, reliable, and high-performance data processing pipeline to extract, transform, and load data from various systems to the Enterprise Data Warehouse/Data Lake/Data Mesh hosted on AWS or Azure. Collaborate cross-functionally to design effective data solutions. Implement data workflows utilizing AWS Step Functions or Azure Logic Apps for efficient orchestration. Leverage AWS Glue and Crawler or Azure Data Factory and Data Catalog for seamless data cataloging and automation. Monitor, troubleshoot, and optimize pipeline performance and data quality. Maintain high coding standards and produce thorough documentation. Contribute to high-level (HLD) and low-level (LLD) design discussions.

Technical Skills: Minimum 3 years of progressive experience building solutions in Big Data environments. A strong ability to build robust and resilient data pipelines that are scalable, fault-tolerant, and reliable in terms of data movement. 3+ years of hands-on expertise in Python, Spark, and Kafka. Strong command of AWS or Azure services. Strong hands-on capabilities with SQL and NoSQL technologies. Sound understanding of data warehousing, modeling, and ETL concepts. Familiarity with High-Level Design (HLD) and Low-Level Design (LLD) principles. Excellent written and verbal communication skills.

Posted 1 week ago

Apply

1.0 - 3.0 years

2 - 6 Lacs

Chennai

Work from Office

Develop and maintain reports, dashboards, and visualizations for business stakeholders. Utilize SQL to extract and manipulate data from various sources. Apply advanced Excel functions and techniques for data analysis and reporting. Use Python for data automation, scripting, and advanced analytics. Work with analytical tools to support business decision-making. Collaborate with different teams to understand data requirements and provide meaningful solutions. Ensure data accuracy, integrity, and efficiency in reporting. Assist in backend data management and optimization if required.

Required Skills & Qualifications: Proven experience in data analysis and reporting. Expert proficiency in Microsoft Excel (including Power Query, PivotTables, and advanced formulas). Strong SQL skills for data extraction and manipulation. Proficiency in Python for data processing and automation. Experience with analytical and reporting tools such as Power BI, Tableau, or similar. Strong problem-solving skills with a keen understanding of business data needs. Ability to work independently and in a team-oriented environment. Excellent communication and presentation skills. Experience in backend data management (preferred but not mandatory).
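For illustration only: a minimal sketch of the SQL-plus-Python reporting workflow this posting describes, using sqlite3 and pandas for portability. The database file, table, and columns are hypothetical.

```python
# A minimal reporting-automation sketch: query with SQL, enrich with pandas,
# export to Excel for stakeholders.
import sqlite3

import pandas as pd

conn = sqlite3.connect("sales.db")  # hypothetical database file

df = pd.read_sql_query(
    "SELECT region, SUM(amount) AS revenue FROM orders GROUP BY region",
    conn,
)

# Derive a share-of-total column, then export for a dashboard or Excel report.
df["revenue_share"] = df["revenue"] / df["revenue"].sum()
df.to_excel("regional_revenue.xlsx", index=False)
```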

Posted 1 week ago

Apply

3.0 - 12.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Education: Bachelor's degree in Computer Science, Information Technology, or a related field. Experience: 8-12 years in Java development, with 3+ years of experience mentoring or coaching individual contributors. Technical Expertise: Java (Spring Boot, Hibernate), microservices architecture, RESTful APIs, SQL/NoSQL databases. Cloud & DevOps: Experience with Azure, CI/CD pipelines, and version control (Git, Jenkins, Kubernetes, Docker). Agile & Collaboration: Strong understanding of Agile methodologies, team leadership, and cross-functional collaboration. Problem-Solving & Security: Proven ability to design and deploy secure applications in a regulated environment.

Preferred Qualifications: Payment Tokenization and MDES: Proven experience with payments tokenization systems and a deep understanding of the Mastercard Digital Enablement Service (MDES), including its APIs and integration patterns. Secure Credential Provisioning: Expertise in secure credential provisioning, token management, and compliance with industry standards like EMVCo and PCI DSS. Azure Cloud Expertise: Proficiency in using Azure services like Azure Functions, Azure Key Vault, and Azure App Service to build scalable and resilient payment processing applications; a strong grasp of cloud security principles and best practices for handling sensitive data is essential. DevOps & Automation: Hands-on experience with Jenkins, Kubernetes, and cloud-based CI/CD pipelines. Full-Stack Exposure: Familiarity with front-end frameworks like React or Angular. Big Data & Messaging Systems: Knowledge of Kafka, Hadoop, and distributed data processing tools.

Posted 1 week ago

Apply

7.0 - 10.0 years

12 - 16 Lacs

Hyderabad

Work from Office

Snowflake Data Engineering (Snowflake, DBT & ADF) Technical Lead (Experience: 7 to 10 years). We are looking for a highly self-motivated Snowflake Data Engineering (Snowflake, DBT & ADF) Technical Lead with: At least 7+ years of experience in designing and developing data pipelines and assets. At least 5 years of experience with at least one columnar MPP cloud data warehouse (Snowflake/Azure Synapse/Redshift). 4 years of experience in ETL tools like Azure Data Factory and Fivetran/DBT. Experience with Git and Azure DevOps. Experience in Agile, Jira, and Confluence. A solid understanding of programming SQL objects (procedures, triggers, views, functions) in SQL Server; experience optimizing SQL queries is a plus. Working knowledge of Azure architecture and Data Lake. Willingness to contribute to documentation (e.g., mapping, defect logs), and the ability to generate functional specs for code migration or ask the right questions thereof. A hands-on programmer with a thorough understanding of performance tuning techniques, experienced in handling large data volume transformations (on the order of 100 GBs monthly). Able to create solutions/data flows to suit requirements and produce timely documentation (e.g., mapping, UTR, defect/KEDB logs). A self-starter and learner, able to understand and probe for requirements. Tech experience expected. Primary: Snowflake, DBT (development & testing). Secondary: Python, ETL or any data processing tool. Nice to have: domain experience in Healthcare. Should have good oral and written communication, be a good team player, and be proactive and adaptive.
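For illustration only: a minimal sketch of querying Snowflake from Python, assuming the snowflake-connector-python package. All connection details, the table, and the query are placeholders.

```python
# A minimal Snowflake query sketch using the official Python connector.
import snowflake.connector

conn = snowflake.connector.connect(
    account="example_account",      # placeholder credentials
    user="example_user",
    password="example_password",
    warehouse="ANALYTICS_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)

try:
    cur = conn.cursor()
    cur.execute("SELECT claim_type, COUNT(*) FROM claims GROUP BY claim_type")
    for claim_type, n in cur.fetchall():
        print(claim_type, n)
finally:
    conn.close()
```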

Posted 1 week ago

Apply

0.0 years

1 - 2 Lacs

Anakapalle, Visakhapatnam, Vizianagaram

Work from Office

International BPO back office work at Vizag for freshers. Non-voice, English speaking. 12K in hand, CTC 17K. Night shifts with cab facility. Back office work (chat). WhatsApp CV to 7696517846. Register for a call back at www.callcenterjobs.anejabusinessgroup.com. Required candidate profile: Night shifts with cab facility; back office, non-voice process at Vizag; 12K salary in hand, CTC 18K; freshers and graduates. WhatsApp CV to 7696517846; register at www.callcenterjobs.anejabusinessgroup.com. Perks and benefits: non-voice process, graduates and freshers, 12K salary, Vizag BPO.

Posted 1 week ago

Apply

0.0 - 2.0 years

3 - 4 Lacs

Mohali, Chandigarh, Panchkula

Work from Office

Data entry work, voice and chat, domestic & international BPO. Hiring for Chandigarh customer care operations. 100% selection in BPO. Walk-in interviews: SCF 19, Top Floor, Phase 11, Mohali. WhatsApp CV to 7696517846. www.callcenterjobs.anejabusinessgroup.com. Perks and benefits: www.callcenterjobs.anejabusinessgroup.com.

Posted 1 week ago

Apply

4.0 - 7.0 years

0 - 2 Lacs

Gurugram

Work from Office

Consultant - GCP Snowflake: Elevate Your Impact Through Innovation and Learning

Evalueserve is a global leader in delivering innovative and sustainable solutions to a diverse range of clients, including over 30% of Fortune 500 companies. With a presence in more than 45 countries across five continents, we excel in leveraging state-of-the-art technology, artificial intelligence, and unparalleled subject matter expertise to elevate our clients' business impact and strategic decision-making. Our team of over 4,500 talented professionals operates in countries such as India, China, Chile, Romania, the US, and Canada. Our global network also extends to emerging markets like Colombia, the Middle East, and the rest of Asia-Pacific. Recognized by Great Place to Work in India, Chile, Romania, the US, and the UK in 2022, we offer a dynamic, growth-oriented, and meritocracy-based culture that prioritizes continuous learning, skill development, and work-life balance.

About Data Analytics: Data Analytics is one of the highest-growth practices within Evalueserve, providing you rewarding career opportunities. Established in 2014, the global DA team extends beyond 1000+ (and growing) data science professionals across data engineering, business intelligence, digital marketing, advanced analytics, technology, and product engineering. Our more tenured teammates, some of whom have been with Evalueserve since it started more than 20 years ago, have enjoyed leadership opportunities in different regions of the world across our seven business lines.

What you will be doing at Evalueserve:
Data Pipeline Development: Design and implement scalable ETL (Extract, Transform, Load) pipelines using tools like Cloud Dataflow, Apache Beam or Spark, and BigQuery.
Data Integration: Integrate various data sources into unified data warehouses or lakes, ensuring seamless data flow.
Data Transformation: Transform raw data into analyzable formats using tools like dbt (data build tool) and Dataflow.
Performance Optimization: Continuously monitor and optimize data pipelines for speed, scalability, and cost-efficiency.
Data Governance: Implement data quality standards, validation checks, and anomaly detection mechanisms.
Collaboration: Work closely with data scientists, analysts, and business stakeholders to align data solutions with organizational goals.
Documentation: Maintain detailed documentation of workflows and adhere to coding standards.

What we're looking for:
Proficiency in Python/PySpark and SQL for data processing and querying.
Expertise in GCP services like BigQuery, Cloud Storage, Pub/Sub, Cloud Composer, and Dataflow.
Good knowledge of Snowflake, having completed at least one working project in Snowflake (not just a Snowflake migration).
Familiarity with data warehouse and lakehouse principles and distributed data architectures.
Strong problem-solving skills and the ability to handle complex projects under tight deadlines.
Knowledge of data security and compliance best practices.
Certification: GCP Professional Data Engineer.

Follow us on https://www.linkedin.com/company/evalueserve/ to learn more about what our leaders are saying on achievements such as our AI-powered supply chain optimization solution built on Google Cloud, how Evalueserve is leveraging NVIDIA NIM to enhance our AI and digital transformation solutions and accelerate AI capabilities, and how Evalueserve has climbed 16 places on the 50 Best Firms for Data Scientists in 2024! Want to learn more about our culture and what it's like to work with us? Write to us at: careers@evalueserve.com

Disclaimer: This job description serves as an informative reference for the tasks you may be required to perform. However, it does not constitute an integral component of your employment agreement and is subject to periodic modifications to align with evolving circumstances.

Please Note: We appreciate the accuracy and authenticity of the information you provide, as it plays a key role in your candidacy. As part of the Background Verification Process, we verify your employment, education, and personal details. Please ensure all information is factual and submitted on time. For any assistance, your TA SPOC is available to support you.
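For illustration only: a minimal sketch of the Dataflow/Apache Beam pipeline style named in this posting, runnable locally with the DirectRunner; switching the runner targets Dataflow on GCP. Bucket paths and the record layout are hypothetical.

```python
# A minimal Beam ETL sketch: read text, parse, filter invalid rows, write out.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions

options = PipelineOptions(runner="DirectRunner")  # "DataflowRunner" on GCP

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://example-bucket/raw/events.csv")
        | "Parse" >> beam.Map(lambda line: line.split(","))
        | "KeepValid" >> beam.Filter(lambda fields: len(fields) == 3)
        | "Format" >> beam.Map(lambda fields: ",".join(fields))
        | "Write" >> beam.io.WriteToText("gs://example-bucket/clean/events")
    )
```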

Posted 1 week ago

Apply

0.0 years

1 - 2 Lacs

Chennai

Work from Office

Greetings! Your responsibilities include collecting and entering data in databases and maintaining accurate records of medical documents. - Document splitting process - Moving the cover sheet - Typing speed: 30 WPM - Communication - Email drafting

Posted 1 week ago

Apply

3.0 - 5.0 years

12 - 16 Lacs

Hyderabad

Work from Office

Position: Business Analyst. Shift: 10 AM to 7 PM IST. Location: Hyderabad office. Experience: 3 to 5 years. Notice period: join immediately/15 days. Tenure: 6 months, with extension based on performance.

Education (minimum/desirable): Graduation degree. Experience in systems compliance, preferably in the legal/contract management domain. 3-5 years of overall work experience; pharma company experience preferred. Excellent communication skills. Familiarity with quality KPIs and operational issues/management. Languages: Fluent in English (spoken and written).

Major Activities:
Incident Management: Understand requirements of global, regional, and country stakeholders for the HCP Experience/Engage program and manage day-to-day deliverables as assigned. Provide operations service, especially in the area of issue resolution and consultation to clients on their needs as per the defined parameters of the project. Work closely with various stakeholders and ensure timely, efficient, high-quality delivery for projects and activities. Ensure a consistently high level of client satisfaction through effective delivery on client demand and meeting all service/business KPIs. Provide updates on delivery progress to the functional/operational managers and other stakeholders as needed.
Workflow Configuration: Develop specialist knowledge in the underlying operational processes and solution of the HCP Experience/Engage program, particularly in the areas of compliance, legal, and HCP engagement management. Understand the end-to-end contract management data flow between systems to investigate and resolve issues in a consistent manner, in both upfront and downstream processes (HIP and ENGAGE). Develop expertise in the various process workflows of the countries. Use process expertise to configure changes (new/updates) to the processes according to the requirements of the country while adhering to the guidelines of the product.
Data and Analytics: Expertise in the management and processing of data, with operational proficiency in handling master data update requests. Drive execution of systems compliance requirements, with the ability to translate findings back to the business via reporting, data visualization, and other means of communication.
Solution Design: Participate in requirements gathering discussions for new projects or reports. Update the requirements log. Put together the solution blueprint and discuss it with the stakeholder for approval. Participate in the build of the report if done via Excel and out-of-box reporting. Perform data assessment by comparing the stakeholder's needs with the fields available in the system and communicate gaps to the stakeholder and functional/operational managers. In the case of a Qlik Sense build, participate in testing the tool before deployment to the stakeholder for review. Build user guide documents where necessary to guide stakeholder and user actions on the tool being built. Responsible for operational execution of assigned project tasks and facilitating continuous improvement. Ensure quality control (QC) checks for the assigned projects/deliverables to meet client expectations. Follow and track key deliverables and milestones for assigned projects. Ensure accurate and timely reporting of KPIs for transparency between the client and the Engage team.
Training and Compliance: Comply with and support the group's operational tools, standards, policies, and initiatives.

Technical Skills:
ServiceNow: Navigating ServiceNow.
MS Excel: Using formulas in Excel; creating/editing and refreshing pivots; building Excel reports with tables and graphs; knowledge of Excel macros is good to have.
MS PowerPoint: Knowledge of creating a slide; adding graphs and refreshing the data behind them.

Interested candidates, share your CV at busiraju.sindhu@manpower.co.in or via WhatsApp: 7013970562.

Posted 1 week ago

Apply

1.0 - 5.0 years

3 - 12 Lacs

Pune, Maharashtra, India

On-site

Job Summary: The Endorsement and Enrolment Executive will be responsible for timely and accurate processing of enrolments, endorsements, and related policy servicing tasks for health insurance clients. The role requires coordination with insurers, internal teams, and clients to ensure policy records are up-to-date and compliant with regulatory norms.

Key Responsibilities:
Enrolment: Process member enrolments (additions, deletions, changes) in GMC/GPA/GTL policies as per client and insurer requirements. Review and validate enrolment data received from clients or internal stakeholders. Upload and maintain accurate records in internal systems and insurer portals. Maintain the active roster and CD statements for the customers.
Endorsement: Handle policy endorsements such as member updates, changes in coverage, corrections in personal details, etc. Coordinate with insurers for endorsement issuance and follow up for endorsement letters/certificates. Ensure timely communication of endorsements to clients with updated documentation.
Documentation & Reporting: Maintain and update accurate records and trackers for all enrolment and endorsement activities. Prepare periodic MIS reports and dashboards for internal use and client reporting.
Stakeholder Management: Coordinate with clients, TPAs (Third Party Administrators), insurers, and internal sales/service teams to resolve queries or discrepancies. Provide support during policy renewal, including reconciliation of member lists and premium calculations.

Key Requirements:
Education: Graduate in any discipline (preferably in Commerce, Business Administration, or related fields).
Experience: 1-5 years of relevant experience in health insurance enrolments/endorsements or policy servicing at a broker.
Skills: Strong attention to detail and accuracy. Proficiency in MS Excel and data handling. Good communication and coordination skills. Knowledge of health insurance processes and terminology. Ability to work under deadlines and handle multiple tasks.

Posted 1 week ago

Apply

1.0 - 5.0 years

3 - 12 Lacs

Mumbai, Maharashtra, India

On-site

Job Summary: The Endorsement and Enrolment Executive will be responsible for timely and accurate processing of enrolments, endorsements, and related policy servicing tasks for health insurance clients. The role requires coordination with insurers, internal teams, and clients to ensure policy records are up-to-date and compliant with regulatory norms.

Key Responsibilities:
Enrolment: Process member enrolments (additions, deletions, changes) in GMC/GPA/GTL policies as per client and insurer requirements. Review and validate enrolment data received from clients or internal stakeholders. Upload and maintain accurate records in internal systems and insurer portals. Maintain the active roster and CD statements for the customers.
Endorsement: Handle policy endorsements such as member updates, changes in coverage, corrections in personal details, etc. Coordinate with insurers for endorsement issuance and follow up for endorsement letters/certificates. Ensure timely communication of endorsements to clients with updated documentation.
Documentation & Reporting: Maintain and update accurate records and trackers for all enrolment and endorsement activities. Prepare periodic MIS reports and dashboards for internal use and client reporting.
Stakeholder Management: Coordinate with clients, TPAs (Third Party Administrators), insurers, and internal sales/service teams to resolve queries or discrepancies. Provide support during policy renewal, including reconciliation of member lists and premium calculations.

Key Requirements:
Education: Graduate in any discipline (preferably in Commerce, Business Administration, or related fields).
Experience: 1-5 years of relevant experience in health insurance enrolments/endorsements or policy servicing at a broker.
Skills: Strong attention to detail and accuracy. Proficiency in MS Excel and data handling. Good communication and coordination skills. Knowledge of health insurance processes and terminology. Ability to work under deadlines and handle multiple tasks.

Posted 1 week ago

Apply

1.0 - 5.0 years

3 - 12 Lacs

Bengaluru, Karnataka, India

On-site

Job Summary: The Endorsement and Enrolment Executive will be responsible for timely and accurate processing of enrolments, endorsements, and related policy servicing tasks for health insurance clients. The role requires coordination with insurers, internal teams, and clients to ensure policy records are up-to-date and compliant with regulatory norms.

Key Responsibilities:
Enrolment: Process member enrolments (additions, deletions, changes) in GMC/GPA/GTL policies as per client and insurer requirements. Review and validate enrolment data received from clients or internal stakeholders. Upload and maintain accurate records in internal systems and insurer portals. Maintain the active roster and CD statements for the customers.
Endorsement: Handle policy endorsements such as member updates, changes in coverage, corrections in personal details, etc. Coordinate with insurers for endorsement issuance and follow up for endorsement letters/certificates. Ensure timely communication of endorsements to clients with updated documentation.
Documentation & Reporting: Maintain and update accurate records and trackers for all enrolment and endorsement activities. Prepare periodic MIS reports and dashboards for internal use and client reporting.
Stakeholder Management: Coordinate with clients, TPAs (Third Party Administrators), insurers, and internal sales/service teams to resolve queries or discrepancies. Provide support during policy renewal, including reconciliation of member lists and premium calculations.

Key Requirements:
Education: Graduate in any discipline (preferably in Commerce, Business Administration, or related fields).
Experience: 1-5 years of relevant experience in health insurance enrolments/endorsements or policy servicing at a broker.
Skills: Strong attention to detail and accuracy. Proficiency in MS Excel and data handling. Good communication and coordination skills. Knowledge of health insurance processes and terminology. Ability to work under deadlines and handle multiple tasks.

Posted 1 week ago

Apply

5.0 - 10.0 years

25 - 40 Lacs

Hyderabad, Gurugram, Bengaluru

Hybrid

Salary: 25 to 40 LPA. Experience: 5 to 10 years. Location: Bangalore/Hyderabad. Notice: Immediate joiners only. Key Skills: SQL, Advanced SQL, BI tools, ETL, etc.

Roles and Responsibilities: Extract, manipulate, and analyze large datasets from various sources such as Hive, SQL databases, and BI tools. Develop and maintain dashboards using Tableau to provide insights on banking performance, market trends, and customer behavior. Collaborate with cross-functional teams to identify key performance indicators (KPIs) and develop data visualizations to drive business decisions.

Desired Candidate Profile: 6-10 years of experience in Data Analytics or a related field with expertise in Banking Analytics, Business Intelligence, Campaign Analytics, Marketing Analytics, etc. Strong proficiency in tools like Tableau for data visualization; Advanced SQL knowledge preferred. Experience working with big data technologies like the Hadoop ecosystem (Hive) and Spark; familiarity with the Python programming language required.

Posted 1 week ago

Apply

3.0 - 6.0 years

15 - 25 Lacs

Pune, Gurugram, Bengaluru

Hybrid

Salary: 20 to 35 LPA. Experience: 3 to 8 years. Location: Pune/Bangalore/Gurgaon (Hybrid). Notice: Immediate joiners only. Key Skills: SQL, Advanced SQL, BI tools, etc.

Roles and Responsibilities: Extract, manipulate, and analyze large datasets from various sources such as Hive, SQL databases, and BI tools. Develop and maintain dashboards using Tableau to provide insights on banking performance, market trends, and customer behavior. Collaborate with cross-functional teams to identify key performance indicators (KPIs) and develop data visualizations to drive business decisions.

Desired Candidate Profile: 3-8 years of experience in Data Analytics or a related field with expertise in Banking Analytics, Business Intelligence, Campaign Analytics, Marketing Analytics, etc. Strong proficiency in tools like Tableau for data visualization; Advanced SQL knowledge preferred. Experience working with big data technologies like the Hadoop ecosystem (Hive) and Spark; familiarity with the Python programming language required.

Posted 1 week ago

Apply

6.0 - 8.0 years

6 - 8 Lacs

Noida, Uttar Pradesh, India

On-site

Implementation projects for clients from various industries that process data at various levels of complexity. Translate complex functional and technical requirements into data models. Set up process data extractions, including table and field mappings. Estimate and model memory requirements for data processing. Prepare and connect to on-premises/cloud source systems, extract and transform customer data, and develop process- and customer-specific studies. Solicit requirements for business process mining models, including what data they will utilise and how the organisation will use them once they are built. Building accurate, reliable, and informative business process mining models will enable our company to expand even more quickly. Build the infrastructure required for optimal extraction, transformation, and loading of data from disparate data sources. Applying analytics and modelling will enable us to own and actively drive process improvement projects and initiatives within the relevant function. Maintaining our familiarity with the Celonis platform will require us to write documentation on its technical procedures and processes. Serving as a liaison between the data engineers, the analytics platform, and business users will help with ongoing transition and reorganization. Data engineering, analytics, and visualization expertise. Proven analytical abilities, including mining, assessment, analysis, and visualization. Experience with SQL, PQL, ETL, and/or programming (Python, R, SAS, etc.). Strong communication skills to enable contact with all levels and departments within the organization.

Skills and Qualification: Education: BS/MS/BE/ME/MCA. Sound knowledge and experience of process mining using Celonis. Strong experience in programming, preferably Vertica SQL/PQL and Python. Experience handling large datasets. Working knowledge of data models and data structures. Technical expertise with data mining. Experience with time series data. Ability to codify a process into step-by-step linear commands. Experience with data visualization tools such as Power BI and Tableau. Professional experience writing performant SQL queries and improving existing code. Experience working with relational and non-relational databases. Experience performing Extract, Transform and Load (ETL) activities in Celonis. Hands-on experience in designing and implementing dashboards that can provide actionable insights. Proven analytical and logical thinking. Data engineering expertise or Celonis training/certification will be preferred.
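For illustration only: a tool-agnostic sketch of a process-mining-style calculation like those described above (case throughput time from an event log), using pandas. Column names are illustrative; in Celonis the equivalent would typically be expressed in PQL.

```python
# A minimal event-log analysis sketch: throughput time per case is the
# timestamp of the last event minus the timestamp of the first event.
import pandas as pd

log = pd.DataFrame(
    {
        "case_id": ["A", "A", "A", "B", "B"],
        "activity": ["Create", "Approve", "Pay", "Create", "Pay"],
        "timestamp": pd.to_datetime(
            ["2024-01-01", "2024-01-03", "2024-01-07", "2024-01-02", "2024-01-04"]
        ),
    }
)

durations = (
    log.groupby("case_id")["timestamp"].agg(["min", "max"])
    .assign(throughput=lambda d: d["max"] - d["min"])
)
print(durations["throughput"])
```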

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies