4.0 - 7.0 years
14 - 17 Lacs
Bengaluru
Work from Office
We are seeking a Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments. The role requires expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks, along with proficiency in Apache Spark, PySpark, Kafka, and Iceberg tables, and the ability to design and implement scalable, high-performance data processing solutions.

What you'll do
As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability.
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access control (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with data scientists, analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
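Several of the duties above (Kafka ingestion, PySpark processing, Iceberg storage) compose into one streaming pattern. A minimal, hedged sketch of that pattern follows; the broker address, topic, schema, and catalog/table names are illustrative assumptions, not details from this posting:

```python
# Hedged sketch: Kafka -> Iceberg ingestion with PySpark Structured Streaming.
# Broker, topic, schema, and catalog/table names are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = (
    SparkSession.builder.appName("kafka-to-iceberg")
    # Assumes the Iceberg Spark runtime jar and a catalog named "lake" are configured.
    .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.lake.type", "hive")
    .getOrCreate()
)

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")   # placeholder broker
    .option("subscribe", "payments.events")             # placeholder topic
    .load()
    .select(from_json(col("value").cast("string"), schema).alias("e"))
    .select("e.*")
)

# Append micro-batches into an Iceberg table; checkpointing makes restarts
# safe without re-appending already-committed micro-batches.
query = (
    events.writeStream.format("iceberg")
    .outputMode("append")
    .option("checkpointLocation", "/tmp/checkpoints/payments")
    .toTable("lake.raw.payments")
)
query.awaitTermination()
```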
Posted 2 weeks ago
4.0 - 12.0 years
12 - 13 Lacs
Bengaluru
Work from Office
Job Description: Senior DBT Engineer
Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

Responsibilities:
- Experience level: 4-12 years.
- Design, develop, and maintain DBT models, transformations, and SQL code to build efficient data pipelines for analytics and reporting.
- Design, develop, and maintain ETL/ELT pipelines using DBT, pulling data from Snowflake.
- Define and implement data modelling best practices, including data warehousing, ETL processes, and data transformations using DBT.
- Build complex SQL queries within DBT to create incremental models, enhancing data processing efficiency.
- Establish data governance practices and ensure data accuracy, quality, and consistency within the data transformation process.
- Collaborate with data engineers, data analysts, and other stakeholders to understand and meet data requirements for various business units.
- Identify and address performance bottlenecks in data transformation processes and optimize DBT models for faster query performance.
- Maintain thorough documentation of DBT models, transformations, and data dictionaries to ensure transparency and accessibility for team members.
- Implement data security measures to protect sensitive information and comply with data privacy regulations.
- Stay updated on industry best practices and new features in DBT, and continuously improve data transformation processes.
- Provide training and support to other team members in using DBT effectively.
- Implement data quality checks and validation processes to ensure data accuracy and consistency.
- Hands-on experience implementing data governance, data quality rules, and validation mechanisms within Collibra is a plus.
- Knowledge of workflow orchestration tools such as Tidal.
- Experience with Python or other scripting languages is a plus.
- Familiarity with Azure cloud platforms.
- Exposure to DevOps practices and CI/CD pipelines for data engineering.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites or unsolicited emails claiming to be from the company. These emails may ask recipients to provide personal information or to make payments as part of the illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for money or payments from applicants at any point in the recruitment process, nor does it ask job seekers to purchase IT or other equipment on its behalf. More information on employment scams is available here.
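Since the posting lists Python scripting as a plus alongside DBT, one relevant pattern is driving DBT runs from Python via the programmatic runner that dbt-core exposes from version 1.5 onward. A hedged sketch, assuming a dbt project on disk and a model named stg_orders (both illustrative):

```python
# Hedged sketch: invoking dbt programmatically via dbt-core's Python entry
# point (available in dbt-core >= 1.5). Project path and model name are
# illustrative assumptions, not details from this posting.
from dbt.cli.main import dbtRunner, dbtRunnerResult

runner = dbtRunner()

# Equivalent to `dbt run --select stg_orders+` on the command line:
result: dbtRunnerResult = runner.invoke(
    ["run", "--select", "stg_orders+", "--project-dir", "/path/to/dbt_project"]
)

if not result.success:
    raise RuntimeError(f"dbt run failed: {result.exception}")

# Inspect per-model outcomes of the run.
for node_result in result.result:
    print(node_result.node.name, node_result.status)
```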
Posted 2 weeks ago
8.0 - 13.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Who we are
About Stripe
About the team
The Reporting Platform Data Foundations group maintains and evolves the core systems that power reporting data for Stripe's users. We're responsible for Aqueduct, the data ingestion and processing platform that powers core reporting data for millions of businesses on Stripe. We integrate with the latest Data Platform tooling, such as Falcon for real-time data. Our goal is to provide a robust, scalable, and efficient data infrastructure that enables clear and timely insights for Stripe's users.

What you'll do
As a Software Engineer on the Reporting Platform Data Foundations group, you will lead efforts to improve and redesign the core data ingestion and processing systems that power reporting for millions of Stripe users. You'll tackle complex challenges in data management, scalability, and system architecture.

Responsibilities
- Design and implement a new backfill model for reporting data that can handle hundreds of millions of row additions and updates efficiently (see the sketch after this posting).
- Revamp the end-to-end experience for product teams adding or changing API-backed datasets, improving ergonomics and clarity.
- Enhance the Aqueduct Dependency Resolver system, which determines what critical data to update for Stripe's users based on events; areas include error management, observability, and delegation of issue resolution to product teams.
- Lead integration with the latest Data Platform tooling, such as Falcon for real-time data, while managing deprecation of older systems.
- Implement and improve data warehouse management practices, ensuring data freshness and reliability.
- Collaborate with product teams to understand their reporting needs and data requirements.
- Design and implement scalable solutions for data ingestion, processing, and storage.
- Onboard, spin up, and mentor engineers, and set the group's technical direction and strategy.

Who you are
We're looking for someone who meets the minimum requirements to be considered for the role. If you meet these requirements, you are encouraged to apply. The preferred qualifications are a bonus, not a requirement.

Minimum requirements
- 8+ years of professional experience writing high-quality, production-level code or software programs.
- Extensive experience designing and implementing large-scale data processing systems.
- Strong background in distributed systems and data pipeline architectures.
- Proficiency in at least one modern programming language (e.g., Go, Java, Python, Scala).
- Experience with big data technologies (e.g., Hadoop, Flink, Spark, Kafka, Pinot, Trino, Iceberg).
- Solid understanding of data modeling and database systems.
- Excellent problem-solving skills and the ability to tackle complex technical challenges.
- Strong communication skills and the ability to work effectively with cross-functional teams.
- Experience mentoring other engineers and driving technical initiatives.

Preferred qualifications
- Experience with real-time data processing and streaming systems.
- Knowledge of data warehouse technologies and best practices.
- Experience migrating legacy systems to modern architectures.
- Contributions to open-source projects or technical communities.

Office-assigned Stripes in most of our locations are currently expected to spend at least 50% of the time in a given month in their local office or with users. This expectation may vary depending on role, team, and location. For example, Stripes in Stripe Delivery Center roles in Mexico City, Mexico and Bengaluru, India work 100% from the office.
Also, some teams have greater in-office attendance requirements to appropriately support our users and workflows, which the hiring manager will discuss. This approach strikes a balance between bringing people together for in-person collaboration and learning from each other, while supporting flexibility when possible.
Team: Data Platform. Job type: Full time.
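The backfill responsibility above implies chunked, resumable rewrites over very large tables. Below is a minimal, hedged sketch of one such pattern in Python; the table names, cursor column, and file-based watermark are illustrative assumptions, not Stripe's actual design:

```python
# Hedged sketch: a resumable, chunked backfill driven by a persisted watermark.
# Table/column names and the checkpoint file are illustrative assumptions.
import json
import sqlite3
from pathlib import Path

CHECKPOINT = Path("backfill_checkpoint.json")
BATCH_SIZE = 50_000

def load_watermark() -> int:
    if CHECKPOINT.exists():
        return json.loads(CHECKPOINT.read_text())["last_id"]
    return 0

def save_watermark(last_id: int) -> None:
    CHECKPOINT.write_text(json.dumps({"last_id": last_id}))

def backfill(conn: sqlite3.Connection) -> None:
    last_id = load_watermark()
    while True:
        rows = conn.execute(
            "SELECT id, amount FROM payments WHERE id > ? ORDER BY id LIMIT ?",
            (last_id, BATCH_SIZE),
        ).fetchall()
        if not rows:
            break
        # Idempotent upsert: re-running a batch after a crash is safe.
        conn.executemany(
            "INSERT INTO payments_reporting (id, amount_usd) VALUES (?, ?) "
            "ON CONFLICT(id) DO UPDATE SET amount_usd = excluded.amount_usd",
            [(r[0], r[1]) for r in rows],
        )
        conn.commit()
        last_id = rows[-1][0]
        save_watermark(last_id)  # resume from here on restart

if __name__ == "__main__":
    backfill(sqlite3.connect("reporting.db"))
```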
Posted 2 weeks ago
6.0 - 7.0 years
4 - 8 Lacs
Gurugram
Work from Office
Tasks & Responsibilities
Description of tasks:
- Commercial responsibility for his/her relation, reporting to the team leader
- Daily handling of business cases in the area of sea cargo import shipments, and delivery of shipments in accordance with the procedures for dispatching and delivering
- Contacts clients, agents, shipping companies, other freight forwarding companies, and customs bodies in connection with dispatching and delivering shipments
- Issues and monitors transportation documents; collects documents for dispatching and delivering shipments
- Electronic data processing
- Coverage of insurance (temporary or permanent)
- Composes records of damages to and deficits of shipments, and complaint orders
- Issues invoices for accumulated expenses, or transfers them to the person in charge of invoicing
- Filing of business cases
- Makes offers to customers and partners
- Enters data for new customers, partners, and service providers, and updates existing records
- Generates sales leads
- Customer service; keeps contact with agents
- Compiles the monthly bordero report for his/her relation
- Knowledge of standard operating procedures/guidelines and systems such as AS 400, S.P.O.T., and LogSpace

Qualification and skills
Level of education: commercial education or special education in freight forwarding
Working experience: at least 6-7 years in sea cargo
Special knowledge: basic computer knowledge, MS Office, English language
Personal qualification: team player, dynamic, commercial thinking, initiative, responsible

Company introduction: For over 40 years, cargo-partner has flourished in the logistics industry, delivering unparalleled service to our clients worldwide. We have now embarked on another journey, and to continue our commitment to excellence we have joined the Nippon Express Group, which underpins the values we constantly aspire to achieve as we become a top-5 global player. As an end-to-end info-logistics provider, we pride ourselves on offering a comprehensive portfolio of air, sea, and land transport, and warehousing services. With a unique focus on information technology and supply chain optimization, we empower businesses to thrive in today's fast-paced world. Join our dynamic team, where innovation meets passion and every voice is valued. Embark on a journey where your skills are nurtured, creativity is celebrated, and together we take pride in making a difference. Discover more about our Mission & Vision. Dive into a world of endless opportunities and embark on the cargo-partner journey with us.

cargo-partner is an equal opportunity employer. We celebrate diversity and are committed to creating an environment where all employees feel valued and respected. We do not discriminate on the basis of race, color, religion, gender, sexual orientation, gender identity, national origin, age, disability, or any other legally protected characteristic. We welcome and encourage applications from all individuals, regardless of background. Explore endless opportunities and leave your mark with us. #JoinUs #Logistics #workingdigital #Teamwork #cargopartner #wow Ready to get things moving? Join our team! Learn about Life at cargo-partner here. View our Privacy Policy.
Posted 2 weeks ago
4.0 - 7.0 years
14 - 17 Lacs
Gurugram
Work from Office
We are seeking a Data Engineer specializing in enterprise data platforms, experienced in building, managing, and optimizing data pipelines for large-scale environments. The role requires expertise in big data technologies, distributed computing, data ingestion, and transformation frameworks, along with proficiency in Apache Spark, PySpark, Kafka, and Iceberg tables, and the ability to design and implement scalable, high-performance data processing solutions.

What you'll do
As a Data Engineer – Data Platform Services, responsibilities include:

Data Ingestion & Processing
- Designing and developing data pipelines to migrate workloads from IIAS to Cloudera Data Lake.
- Implementing streaming and batch data ingestion frameworks using Kafka and Apache Spark (PySpark).
- Working with IBM CDC and Universal Data Mover to manage data replication and movement.

Big Data & Data Lakehouse Management
- Implementing Apache Iceberg tables for efficient data storage and retrieval.
- Managing distributed data processing with Cloudera Data Platform (CDP).
- Ensuring data lineage, cataloging, and governance for compliance with bank and regulatory policies.

Optimization & Performance Tuning
- Optimizing Spark and PySpark jobs for performance and scalability (see the tuning sketch below).
- Implementing data partitioning, indexing, and caching to enhance query performance.
- Monitoring and troubleshooting pipeline failures and performance bottlenecks.

Security & Compliance
- Ensuring secure data access, encryption, and masking using Thales CipherTrust.
- Implementing role-based access control (RBAC) and data governance policies.
- Supporting metadata management and data quality initiatives.

Collaboration & Automation
- Working closely with data scientists, analysts, and DevOps teams to integrate data solutions.
- Automating data workflows using Airflow and implementing CI/CD pipelines with GitLab and Sonatype Nexus.
- Supporting Denodo-based data virtualization for seamless data access.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 4-7 years of experience in big data engineering, data integration, and distributed computing.
- Strong skills in Apache Spark, PySpark, Kafka, SQL, and Cloudera Data Platform (CDP).
- Proficiency in Python or Scala for data processing.
- Experience with data pipeline orchestration tools (Apache Airflow, Stonebranch UDM).
- Understanding of data security, encryption, and compliance frameworks.

Preferred technical and professional experience
- Experience in banking or financial services data platforms.
- Exposure to Denodo for data virtualization and DGraph for graph-based insights.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
- Certifications in Cloudera Data Engineering, IBM Data Engineering, or AWS Data Analytics.
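The optimization duties above (partitioning, caching, tuning) are routinely expressed in PySpark. A hedged sketch, with dataset paths and column names as illustrative assumptions:

```python
# Hedged sketch: common PySpark tuning moves named in the posting:
# repartitioning, caching, and partitioned writes. Paths/columns are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("spark-tuning-demo")
    # Adaptive query execution coalesces shuffle partitions at runtime (Spark 3+).
    .config("spark.sql.adaptive.enabled", "true")
    .config("spark.sql.shuffle.partitions", "400")
    .getOrCreate()
)

txns = spark.read.parquet("s3a://bucket/raw/transactions/")  # placeholder path

# Repartition by the join/aggregation key to reduce shuffle skew.
txns = txns.repartition(400, "account_id")

# Cache only when the DataFrame is reused by several downstream actions.
txns.cache()

daily = (
    txns.groupBy("account_id", F.to_date("event_time").alias("day"))
    .agg(F.sum("amount").alias("total_amount"))
)

# Partitioned write so downstream queries can prune by day.
daily.write.mode("overwrite").partitionBy("day").parquet(
    "s3a://bucket/curated/daily_totals/"
)
```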
Posted 2 weeks ago
2.0 - 4.0 years
4 - 8 Lacs
Bengaluru
Work from Office
Ingest new data from relational and non-relational source database systems into our warehouse. Connect data from various sources. Integrate data from external sources into the warehouse by building facts and dimensions based on the EPM data model requirements. Automate data exchange and processing through serverless data pipelines.

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- Experience in data analysis and integration.
- Experience building and consuming fact and dimension tables.
- Experience automating data integration through data pipelines.
- Experience with object-oriented programming languages such as Python.
- Experience with structured data processing languages such as SQL and Spark SQL.
- Experience with REST APIs and JSON.
- Experience with IBM Cloud data processing services such as IBM Code Engine and IBM Event Streams (Apache Kafka).
- Strong understanding of data warehouse concepts and various data warehouse architectures.

Preferred technical and professional experience
- Experience with IBM Cloud architecture.
- Experience with DevOps.
- Knowledge of Agile development methodologies.
- Experience building containerized applications and running them in serverless environments on the cloud, such as IBM Code Engine, Kubernetes, or Satellite.
- Experience with IBM Cognitive Enterprise Data Platform and CodeHub.
- Experience with data integration tools such as IBM DataStage or Informatica.
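The posting combines REST/JSON ingestion with building facts and dimensions. A minimal, hedged sketch of that combination; the endpoint, payload shape, and star-schema tables are illustrative assumptions:

```python
# Hedged sketch: pulling JSON from a REST source and loading a star-schema
# fact/dimension pair. The endpoint and schema are illustrative assumptions.
import sqlite3
import requests

conn = sqlite3.connect("warehouse.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS dim_customer (
    customer_id TEXT PRIMARY KEY,
    name        TEXT
);
CREATE TABLE IF NOT EXISTS fact_order (
    order_id    TEXT PRIMARY KEY,
    customer_id TEXT REFERENCES dim_customer(customer_id),
    amount      REAL
);
""")

resp = requests.get("https://api.example.com/v1/orders", timeout=30)  # placeholder
resp.raise_for_status()

for order in resp.json()["orders"]:
    # Upsert the dimension first so the fact's foreign key resolves.
    conn.execute(
        "INSERT INTO dim_customer (customer_id, name) VALUES (?, ?) "
        "ON CONFLICT(customer_id) DO UPDATE SET name = excluded.name",
        (order["customer"]["id"], order["customer"]["name"]),
    )
    conn.execute(
        "INSERT OR REPLACE INTO fact_order (order_id, customer_id, amount) "
        "VALUES (?, ?, ?)",
        (order["id"], order["customer"]["id"], order["amount"]),
    )
conn.commit()
```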
Posted 2 weeks ago
5.0 - 10.0 years
14 - 16 Lacs
Bengaluru
Work from Office
We are looking for a highly skilled Lead Data Engineer with expertise in Azure or AWS and Databricks to join our team. The ideal candidate will lead the design, development, and implementation of data engineering solutions, ensuring scalability, security, and efficiency in our data infrastructure. This role requires strong technical skills and experience managing large-scale data processing pipelines.

Key Responsibilities:
- Lead the design and development of scalable and reliable data pipelines using Azure Data Services or AWS Data Services and Databricks.
- Architect, implement, and optimize ETL/ELT processes to process large volumes of structured and unstructured data.
- Develop and maintain data models, data lakes, and data warehouses to support analytics and business intelligence needs.
- Collaborate with data scientists, analysts, and business stakeholders to ensure data availability and integrity.
- Implement and enforce data governance, security, and compliance best practices.
- Optimize and monitor the performance of data processing frameworks (Spark, Databricks, etc.).
- Automate and orchestrate data workflows using tools such as Apache Airflow, Azure Data Factory, AWS Step Functions, or Glue (see the Airflow sketch below).
- Guide and mentor junior data engineers in best practices and modern data engineering techniques.

Mandatory Qualifications:
- 5+ years of experience in data engineering.
- Strong expertise in Azure Data Services (Azure Data Lake, Azure Synapse, Azure Data Factory) or AWS Data Services (S3, Redshift, Glue, Lambda, Step Functions, EMR).
- Proficiency in Databricks and experience with Apache Spark for large-scale data processing.
- Strong programming skills in Python.
- Experience working with SQL and NoSQL databases such as PostgreSQL, MySQL, DynamoDB, or CosmosDB.
- Solid understanding of data governance, security, and compliance (GDPR, HIPAA, etc.) is a plus.
- Experience with real-time streaming technologies such as Kafka, Kinesis, or Event Hubs is a plus.
- Excellent problem-solving skills and the ability to work in a fast-paced, agile environment.

Good-to-have Qualifications:
- Experience with machine learning pipelines and MLOps.
- Familiarity with data visualization and BI tools like Power BI, Tableau, or Looker.
- Strong communication and leadership skills to drive best practices across the team.
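For the orchestration requirement, a minimal, hedged Airflow sketch follows; the DAG id, schedule, and task bodies are illustrative assumptions:

```python
# Hedged sketch: a small Airflow DAG orchestrating an extract -> transform step.
# DAG id, schedule, and task logic are illustrative assumptions.
# Note: `schedule=` is the Airflow 2.4+ name; older versions use `schedule_interval=`.
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract(**context):
    # Placeholder: pull a batch from the source system for this logical date.
    print("extracting batch for", context["ds"])

def transform(**context):
    # Placeholder: run the Spark/Databricks transformation for this batch.
    print("transforming batch for", context["ds"])

with DAG(
    dag_id="daily_sales_pipeline",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
    default_args={"retries": 2, "retry_delay": timedelta(minutes=5)},
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    extract_task >> transform_task  # run transform only after extract succeeds
```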
Posted 2 weeks ago
12.0 - 17.0 years
45 - 55 Lacs
Hyderabad
Work from Office
Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.

The Analytics Foundations Enabler IT team provides the IT platform that model developers use to develop and train models and eventually deploy them into production in an automated way. The team ensures these models are packaged and exposed as Model as a Service, to be consumed by various business functions in their data-driven decisioning use cases. We are seeking a talented and experienced POD Lead to join our dynamic team, with experience in software development and a strong background in Python, GCP, Angular, and Kubernetes. The ideal candidate will have a proven track record of technical leadership, stakeholder management, and excellent communication skills. This role involves working closely with cross-functional teams to deliver high-quality software solutions while driving innovation and continuous improvement.

In this role, you will:
- Lead and manage a team of software engineers, providing technical guidance, mentorship, and support to ensure the successful delivery of software projects.
- Collaborate with product managers, architects, and other stakeholders to define and prioritize software requirements, ensuring alignment with business objectives.
- Conceptualise, design, develop, and reuse effective engineering designs, patterns, and frameworks using Python, GCP, Angular, and Kubernetes, adhering to best practices and industry standards.
- Foster a culture of continuous improvement, encouraging the team to identify and implement process improvements and innovative solutions.
- Act as an IT Service Owner and ensure compliance across Incident, Problem, and Change/Release management and other associated IT controls.
- Ensure service resilience, service sustainability, and recovery time objectives are met for all software solutions delivered.
- Drive operational, delivery, and engineering excellence across the pod teams.
- Be accountable for production and for delivery.

Requirements
To be successful in this role, you should meet the following requirements:
- 12+ years of experience in software development, with a strong background in Python, Java Spring Boot, GCP, Angular, and Kubernetes; awareness of model lifecycle management and MLOps is a plus.
- Proven experience in technical leadership, managing software development teams, and delivering complex software projects.
- Excellent stakeholder management and communication skills, with the ability to effectively convey complex technical concepts to both technical and non-technical audiences.
- Software engineering skills: microservice architecture patterns, frameworks like FastAPI, REST APIs, and experience with API security standards, API gateways, and service mesh (see the FastAPI sketch below).
- DevOps skills: proficiency in tools such as Docker, Kubernetes, Helm, and Terraform.
- Orchestrating data pipelines: setting up and automating data pipelines using tools such as Airflow, and familiarity with data processing technologies including NumPy, Pandas, Amazon S3, Kubeflow, and Dataflow.
- Expertise in monitoring and observability technologies such as Prometheus, AppDynamics, Splunk, Jaeger, Kiali, and OpenTelemetry.
- GCP: experience managing GKE clusters.

Good-to-have skills: programming and working knowledge of machine learning algorithms and frameworks such as scikit-learn and PyTorch; familiarity with industry solutions like Google Vertex AI.

Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
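A minimal, hedged FastAPI sketch of the Model-as-a-Service shape described above; the route, payload, and scoring logic are illustrative assumptions rather than HSBC's actual service:

```python
# Hedged sketch: a minimal FastAPI microservice exposing a model-as-a-service
# endpoint. Route, payload shape, and scoring logic are illustrative.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="model-as-a-service")

class ScoreRequest(BaseModel):
    customer_id: str
    features: list[float]

class ScoreResponse(BaseModel):
    customer_id: str
    score: float

@app.post("/v1/score", response_model=ScoreResponse)
def score(req: ScoreRequest) -> ScoreResponse:
    # Placeholder scoring logic; a real service would call the deployed model.
    value = sum(req.features) / max(len(req.features), 1)
    return ScoreResponse(customer_id=req.customer_id, score=value)

# Run locally with: uvicorn main:app --reload  (assumes this file is main.py)
```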
Posted 2 weeks ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Amazon's Spectrum Analytics team is looking for a Business Intelligence Engineer to help build the next generation of analytics solutions for Selling Partner Developer Services. This is an opportunity to get in on the ground floor as we transform from a reactive, request-directed team to a proactive, roadmap-driven organization that accelerates the business. We need someone who is passionate about data and the insights that large amounts of data can provide. In addition to broad experience with data technologies from ingestion to visualization and consumption (e.g., data pipelines, ETL, reporting, and dashboarding), the ideal candidate will have strong analysis skills and an insatiable curiosity to answer the question "why?". You will also be able to articulate the story the data is telling with compelling verbal and written communication.

Responsibilities:
- Deliver minimally to moderately complex data analysis, collaborating as needed with Data Science as complexity increases.
- Develop dashboards and reports.
- Develop minimally to moderately complex data processing jobs using appropriate technologies (e.g., SQL, Python, Spark, AWS Lambda), collaborating with Data Engineers as needed.
- Collaborate with stakeholders to understand business domains, requirements, and expectations; additionally, work with owners of data source systems to understand capabilities and limitations.
- Manage project deliverables, anticipate risks, and resolve issues.
- Adopt best practices in reporting and analysis: data integrity, test design, analysis, validation, and documentation.

About the team
Spectrum offers a world-class suite of data products and experiences to empower the creation of innovative solutions on behalf of Partners. Our foundational systems and tools solve cross-cutting Builder needs in externalizing data, and are easily extensible using federated policy and reusable technology.

Qualifications:
- Experience with data visualization using Tableau, QuickSight, or similar tools.
- Experience with data modeling, warehousing, and building ETL pipelines.
- Experience using SQL to pull data from a database or data warehouse, and scripting experience (Python) to process data for modeling.
- 5+ years of relevant professional experience in business intelligence, analytics, statistics, data engineering, data science, or a related field.
- Demonstrated data analysis and visualization skills.
- Highly proficient with SQL.
- Knowledge of AWS products such as Redshift, QuickSight, and Lambda.
- Excellent verbal/written communication and data presentation skills; ability to succinctly summarize key findings and effectively communicate with both business and technical teams.
- Experience in data mining, ETL, etc., and using databases in a business environment with large-scale, complex datasets.
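The posting pairs SQL, Python, Lambda, and Redshift. A hedged sketch of one way those fit together, using the Redshift Data API from a Lambda handler; the cluster, database, user, and query are illustrative assumptions:

```python
# Hedged sketch: an AWS Lambda handler submitting a Redshift query via the
# Redshift Data API (boto3 "redshift-data"). Identifiers are illustrative.
import boto3

client = boto3.client("redshift-data")

def lambda_handler(event, context):
    # Kick off an asynchronous query; the Data API avoids managing connections.
    resp = client.execute_statement(
        ClusterIdentifier="analytics-cluster",   # placeholder
        Database="reporting",                    # placeholder
        DbUser="bi_reader",                      # placeholder
        Sql="SELECT marketplace, COUNT(*) FROM orders GROUP BY marketplace",
    )
    # A follow-up invocation (or get_statement_result) would fetch rows.
    return {"statementId": resp["Id"], "status": "submitted"}
```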
Posted 2 weeks ago
1.0 - 3.0 years
2 - 5 Lacs
Chennai
Work from Office
Mandatory Skills: AWS, Python, SQL, Spark, Airflow, Snowflake

Responsibilities:
- Create and manage cloud resources in AWS.
- Ingest data from different data sources that expose data through different technologies, such as RDBMS, REST HTTP APIs, flat files, streams, and time-series data from various proprietary systems.
- Implement data ingestion and processing with the help of big data technologies.
- Process and transform data using technologies such as Spark and cloud services; you will need to understand your part of the business logic and implement it in the language supported by the base data platform.
- Develop automated data quality checks to make sure the right data enters the platform, and verify the results of calculations (see the sketch below).
- Develop an infrastructure to collect, transform, combine, and publish/distribute customer data.
- Define process improvement opportunities to optimize data collection, insights, and displays.
- Ensure data and results are accessible, scalable, efficient, accurate, complete, and flexible.
- Identify and interpret trends and patterns in complex data sets.
- Construct a framework utilizing data visualization tools and techniques to present consolidated analytical and actionable results to relevant stakeholders.
- Be a key participant in regular Scrum ceremonies with the agile teams.
- Be proficient at developing queries, writing reports, and presenting findings.
- Mentor junior members and bring best industry practices.
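A minimal, hedged sketch of the automated data-quality checks mentioned above, written in PySpark since the posting lists Spark; the path, columns, and rules are illustrative assumptions:

```python
# Hedged sketch: automated data-quality checks in PySpark before data is
# admitted to the platform. Path and column names are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
df = spark.read.parquet("s3a://bucket/landing/orders/")  # placeholder path

failures = []

# Rule 1: key columns must never be null.
null_keys = df.filter(F.col("order_id").isNull()).count()
if null_keys > 0:
    failures.append(f"{null_keys} rows with null order_id")

# Rule 2: the primary key must be unique.
dupes = df.groupBy("order_id").count().filter(F.col("count") > 1).count()
if dupes > 0:
    failures.append(f"{dupes} duplicate order_id values")

# Rule 3: amounts must be non-negative.
bad_amounts = df.filter(F.col("amount") < 0).count()
if bad_amounts > 0:
    failures.append(f"{bad_amounts} rows with negative amount")

if failures:
    # Fail the pipeline run so bad data never reaches curated tables.
    raise ValueError("Data quality checks failed: " + "; ".join(failures))

df.write.mode("append").parquet("s3a://bucket/curated/orders/")
```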
Posted 2 weeks ago
0.0 - 1.0 years
1 - 1 Lacs
Hyderabad
Work from Office
Candidate should be proficient in MS Office, i.e., Excel and Google Sheets. Should be familiar with browsing, social media sites, and news. Ability to meet deadlines. Self-motivated with a positive attitude. Should be able to perform without supervision.
Posted 2 weeks ago
0.0 years
1 - 3 Lacs
Ahmedabad
Work from Office
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Mega Virtual Drive for Customer Service roles (English + Hindi) on 11th June 2025 (Wednesday) || Ahmedabad Location
Date: 11-June-2025 (Wednesday)
MS Teams meeting ID: 467 351 668 256 9
MS Teams Passcode: Mh2qs6y3
Time: 12:00 PM - 1:00 PM
Job Location: Ahmedabad (Work from office)
Languages Known: Hindi + English
Shifts: Flexible with any shift

Responsibilities
• Respond to customer queries and concerns
• Provide support for data collection to enable recovery of the account for the end user
• Maintain a deep understanding of client processes and policies
• Reproduce customer issues and escalate product bugs
• Provide excellent customer service to our customers
• Exhibit capacity for critical thinking and analysis
• Showcase a proven work ethic, with the ability to work well both independently and within the context of a larger collaborative environment

Qualifications we seek in you
Minimum qualifications
• Graduate (any discipline except law)
• Only freshers are eligible
• Fluency in English and Hindi is mandatory

Preferred qualifications
• Effective probing, analyzing, and understanding skills
• Analytical skills with a customer-centric approach
• Excellent proficiency in written English, with a neutral English accent
• Ability to work on a flexible schedule (including weekend shifts)

Why join Genpact?
• Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
• Make an impact: drive change for global enterprises and solve business challenges that matter
• Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
• Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
• Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. **Note: Please keep your E-Aadhaar card handy while appearing for the interview.
Posted 2 weeks ago
0.0 - 1.0 years
0 Lacs
Bengaluru
Work from Office
About the Role: We are looking for a motivated and detail-oriented Purchase Specialist Intern to join our procurement team. This role is ideal for someone looking to gain hands-on experience in purchasing, vendor coordination, and supply chain processes in a fast-paced technical environment.

Role & responsibilities:
- Assist the purchase team in raising purchase orders and managing procurement records.
- Coordinate with vendors and internal teams for order tracking and delivery follow-ups.
- Maintain accurate data in Excel sheets (price lists, vendor databases, delivery logs).
- Support document management, filing, and basic data entry tasks.
- Perform general administrative and support duties in procurement-related functions.

Requirements:
- Basic computer knowledge (MS Office, email handling, file management).
- Proficiency in Microsoft Excel (data entry, sorting, basic formulas, etc.).
- Good communication skills and willingness to learn.
- Ability to work independently and as part of a team.
- Any graduate.
Posted 2 weeks ago
4.0 - 5.0 years
1 - 3 Lacs
Ahmedabad, Thaltej
Work from Office
Shift timings: the day shift is 9 hours between 7 AM and 9 PM, including a 1-hour lunch break.

We are looking for a motivated and organized Team Lead to oversee our Data Entry & Processing Operations team. In this role, you will be responsible for supervising daily workflows, ensuring data quality and turnaround time, and helping your team grow and perform at their best.

Key Responsibilities
- Lead and manage the data processing team responsible for validating land/property-related records for client banks.
- Plan and allocate daily tasks and monitor the accuracy, speed, and quality of outputs.
- Conduct regular reviews to track performance, identify challenges, and resolve them in collaboration with other teams.
- Oversee the quality check (QC) process, providing constructive feedback and guidance to improve results.
- Prepare daily, weekly, and monthly reports on productivity and accuracy.
- Document operational procedures, recurring issues, and process improvements.
- Act as the point of contact for coordination with management and other internal departments.

Key Skills & Competencies
- Strong team leadership and people management skills.
- Excellent organizational and time management abilities.
- Problem-solving mindset with attention to detail.
- Hands-on experience with QC processes is an advantage.
- Ability to identify process gaps and drive continuous improvement.
- Proficiency in Gujarati; working knowledge of English is preferred.
- Basic digital literacy (Excel, dashboard tools, internal workflow tools).

Qualifications
- Bachelor's degree in any discipline (or equivalent work experience).
- 2 to 4 years of experience in data processing or operational roles.
- Minimum 1 year of experience in a leadership or supervisory position.
- Experience in BFSI, real estate, or document-based workflows is a plus.
- Proficiency in Microsoft Excel or Google Sheets.
- Basic understanding of image editing software (online tools).
- Good attention to detail.
- Ability to manage time effectively and work on multiple tasks.
Posted 2 weeks ago
2.0 - 5.0 years
5 - 12 Lacs
Chennai
Hybrid
4-8 years of relevant experience in marketing claims review, administration, or product data management. Strong analytical skills with attention to detail. Proficiency in managing data within product information systems (PIM, ERP, or other relevant tools). Familiarity with marketing compliance standards and regulatory requirements. Excellent communication and collaboration skills. Ability to multitask and thrive in a fast-paced environment.

Marketing Claims Analyst/Admin responsibilities:
- Review and validate marketing claims for accuracy, compliance, and regulatory alignment.
- Collaborate with legal, compliance, and marketing teams to ensure claims meet industry standards.
- Maintain documentation and audit records related to marketing claims.
- Support administrative tasks, including data entry, reporting, and tracking claim approvals.
- Assist in updating marketing content across various platforms based on validated claims.
Posted 2 weeks ago
0.0 - 5.0 years
2 - 4 Lacs
Vijayawada, Visakhapatnam, Hyderabad
Hybrid
PERMANENT WORK FROM HOME. 2025 graduates can also apply. An urgent requirement for graduates and undergraduates for data entry. Salary: 10k to 35k take-home. Required age: 18 to 35 years. Full time. Easy selection process. Please apply for the job on Naukri.com; we will check and update you. Do not search the number on Google, and do not call us.
Posted 2 weeks ago
0.0 - 5.0 years
2 - 4 Lacs
Chennai, Coimbatore, Bengaluru
Hybrid
PERMANENT WORK FROM HOME. 2025 graduates can also apply. An urgent requirement for graduates and undergraduates for data entry. Salary: 10k to 35k take-home. Required age: 18 to 35 years. Full time. Easy selection process. Please apply for the job on Naukri.com; we will check and update you. Do not search the number on Google, and do not call us.
Posted 2 weeks ago
3.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Data Engineering & Pipeline Development
- Design, implement, and maintain ETL processes using ADF (Azure Data Factory) and ADB (Azure Databricks).
- Create and manage views in ADB and SQL for efficient data access.
- Optimize SQL queries for large datasets and high performance.
- Conduct end-to-end testing and impact analysis on data pipelines.

Optimization & Performance Tuning
- Identify and resolve bottlenecks in data processing.
- Optimize SQL queries and Delta tables for fast data processing.

Data Sharing & Integration
- Implement Delta Share, SQL endpoints, and other data-sharing methods.
- Use Delta tables for efficient data sharing and processing (see the merge sketch below).

API Integration & Development
- Integrate external systems through Databricks Notebooks and build scalable solutions.
- Experience building APIs (good to have).

Collaboration & Documentation
- Collaborate with teams to understand requirements and design solutions.
- Provide documentation for data processes and architectures.
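A minimal, hedged sketch of the Delta-table work described above, using the Databricks MERGE upsert pattern; table names, paths, and columns are illustrative assumptions:

```python
# Hedged sketch: upserting into a Delta table on Databricks with MERGE.
# Table names, paths, and columns are illustrative assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("/mnt/landing/customers/")  # placeholder path
target = DeltaTable.forName(spark, "analytics.dim_customer")

(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdate(set={"name": "s.name", "email": "s.email"})
    .whenNotMatchedInsertAll()
    .execute()
)

# Compact small files and prune old snapshots to keep reads fast.
spark.sql("OPTIMIZE analytics.dim_customer")
spark.sql("VACUUM analytics.dim_customer RETAIN 168 HOURS")
```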
Posted 2 weeks ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering. AWS Cloud Services (Lambda, Glue, S3, IAM): indicates working with cloud-based data pipelines. Airflow, GitHub: essential for orchestration and version control in data workflows.
Posted 2 weeks ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
PySpark, Python, SQL: strong focus on big data processing, which is core to data engineering. AWS Cloud Services (Lambda, Glue, S3, IAM): indicates working with cloud-based data pipelines. Airflow, GitHub: essential for orchestration and version control in data workflows.
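A minimal, hedged sketch of the PySpark-on-S3 stack this posting names; bucket paths and column names are illustrative assumptions:

```python
# Hedged sketch: a PySpark job in the AWS stack the posting names, reading
# from S3, transforming, and writing back. Bucket paths are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl").getOrCreate()

# Read raw CSV landed in S3 (s3a:// requires the hadoop-aws connector on the classpath).
raw = (
    spark.read.option("header", "true")
    .csv("s3a://example-bucket/raw/clickstream/")  # placeholder bucket
)

cleaned = (
    raw.withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("user_id").isNotNull())
    .dropDuplicates(["event_id"])
)

# Write partitioned Parquet for downstream Athena/Glue queries.
cleaned.write.mode("overwrite").partitionBy("event_date").parquet(
    "s3a://example-bucket/curated/clickstream/"
)
```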
Posted 2 weeks ago
0.0 years
1 - 5 Lacs
Hyderabad, Bengaluru, Delhi / NCR
Work from Office
Provide information to job seekers on WhatsApp. Pass qualified leads to the recruitment team in a professional and timely manner. Work from home. Required candidate profile: immediate joiner; work from home; candidate should be from Hyderabad, New Delhi, Mumbai, Pune, or Bangalore.
Posted 2 weeks ago
1.0 - 6.0 years
0 - 3 Lacs
Pimpri-Chinchwad, Pune, Talegaon-Dabhade
Work from Office
Data Entry Operator (Excel Expert)
Company: Adecco's client, an MNC logistics company; payroll: Adecco
Position: Data Entry Operator
Location: Pune, Chakan (no transport, no canteen); 3 shifts
Education: Bachelor's degree and above

Role requirements:
- Minimum graduate data entry operator
- Email communication skills
- Word and Excel: VLOOKUP, HLOOKUP, pivot tables
- Daily billing-line tracking and monitoring
- Excel work and report analysis

If interested, please share the details below:
- Do you have Excel skills?
- Present salary
- Expected salary
- Notice period
- Can you join immediately?
- Are you ready for third-party payroll?
- Is Chakan comfortable for you?

Please share your CV with the above details to nandini.belhekar@adecco.com, or call back on 9890451769.
Posted 2 weeks ago
0.0 years
1 - 3 Lacs
Ahmedabad
Work from Office
Ready to shape the future of work? At Genpact, we don't just adapt to change, we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Mega Virtual Drive for Customer Service roles (English + Hindi) on 9th June 2025 (Monday) || Ahmedabad Location
Date: 9-June-2025 (Monday)
MS Teams meeting ID: 453 803 303 617 5
MS Teams Passcode: bp6dM39x
Time: 12:00 PM - 1:00 PM
Job Location: Ahmedabad (Work from office)
Languages Known: Hindi + English
Shifts: Flexible with any shift

Responsibilities
• Respond to customer queries and concerns
• Provide support for data collection to enable recovery of the account for the end user
• Maintain a deep understanding of client processes and policies
• Reproduce customer issues and escalate product bugs
• Provide excellent customer service to our customers
• Exhibit capacity for critical thinking and analysis
• Showcase a proven work ethic, with the ability to work well both independently and within the context of a larger collaborative environment

Qualifications we seek in you
Minimum qualifications
• Graduate (any discipline except law)
• Only freshers are eligible
• Fluency in English and Hindi is mandatory

Preferred qualifications
• Effective probing, analyzing, and understanding skills
• Analytical skills with a customer-centric approach
• Excellent proficiency in written English, with a neutral English accent
• Ability to work on a flexible schedule (including weekend shifts)

Why join Genpact?
• Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation
• Make an impact: drive change for global enterprises and solve business challenges that matter
• Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities
• Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day
• Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.
Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. **Note: Please keep your E-Aadhaar card handy while appearing for the interview.
Posted 2 weeks ago
3.0 - 5.0 years
45 - 50 Lacs
Bengaluru
Work from Office
Cohesity offers a web-scale, hybrid cloud infrastructure for next-gen data management as a service. We are looking for Backend Software Engineers who are motivated and passionate about working on features, tools, and scripts that will improve the ability to sell, deploy, and maintain Cohesity products. Our software engineers not only design and implement features but also diagnose problems in large bodies of sophisticated code, understand scalability and performance, and work on fixes with a rapid turnaround time and an emphasis on high quality. We need experienced and outstanding engineers who strive to build high-quality distributed systems and solve complex problems. This is an outstanding opportunity to join our Cohesity team in a period of fast growth and expansion. If you are interested in working in an environment where you can make an impact toward the future of cloud-based data management solutions, then Cohesity is the place for you.

HOW YOU'LL SPEND YOUR TIME HERE:
- Develop systems for Kubernetes cluster provisioning, management, and orchestration (on-prem and cloud).
- Handle lifecycle management for containerized workloads, including deployment, scaling, monitoring, and cleanup.
- Manage multi-node etcd clusters, ensuring high availability, stability, and backup/restore processes.
- Implement observability tooling and infrastructure automation (e.g., Helm, Prometheus, OTel).
- Troubleshoot and resolve issues related to Kubernetes architecture, networking, storage, and control plane components.
- Collaborate with DevOps, SRE, and application teams to support platform reliability and performance.

WE'D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING:
- 3-5 years of experience in software development or platform engineering roles.
- Strong hands-on experience with Kubernetes architecture (administration and workload management).
- Proficiency in one or more programming languages (e.g., Go, C++).
- Experience with containerization (Docker) and container orchestration.
- Solid understanding of clustering models and backup/restore techniques.
- Experience with AWS and on-prem Kubernetes cluster setups.
- Familiarity with monitoring/logging tools like Prometheus, Grafana, Fluentd, Loki, etc.
- Experience with Cluster API or custom Kubernetes controllers/operators.
- Exposure to GitOps practices and tools.
- Knowledge of Linux internals and networking fundamentals.
- Certified Kubernetes Administrator (CKA) is a plus.
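To illustrate the cluster-management work above, here is a minimal, hedged health-check sketch using the official Kubernetes Python client (the posting's own languages are Go/C++; Python is used here to keep all examples on this page in one language). The namespace is an illustrative assumption:

```python
# Hedged sketch: a small cluster-health check with the official Kubernetes
# Python client. Namespace is an illustrative assumption.
from kubernetes import client, config

# Loads ~/.kube/config locally; inside a pod use config.load_incluster_config().
config.load_kube_config()

v1 = client.CoreV1Api()

# Report nodes that are not in Ready condition.
for node in v1.list_node().items:
    ready = next(
        (c.status for c in node.status.conditions if c.type == "Ready"), "Unknown"
    )
    if ready != "True":
        print(f"node {node.metadata.name} not ready (Ready={ready})")

# Report pods stuck outside the Running/Succeeded phases in a namespace.
for pod in v1.list_namespaced_pod(namespace="platform").items:  # placeholder ns
    if pod.status.phase not in ("Running", "Succeeded"):
        print(f"pod {pod.metadata.name} in phase {pod.status.phase}")
```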
Posted 2 weeks ago
6.0 - 10.0 years
45 - 50 Lacs
Bengaluru
Work from Office
We are looking for Software Engineers who are motivated and hardworking and who strive to improve Cohesity's products by working on features, tools, and scripts that make them easy to sell, deploy, and maintain. You are not only a software engineer who designs and implements features, but you should also have a curiosity for diagnosing problems in large bodies of complex code, be able to reason about scalability and performance, and work on fixes with rapid turnaround time and high-quality results. Along with being part of our Product and Sustenance Engineering team, you will collaborate with Product Managers and, more importantly, with Customer Support, Systems Engineers, and customers.

HOW YOU'LL SPEND YOUR TIME HERE:
We are looking for engineers who have deep experience in optimizing and scaling enterprise search systems such as Elasticsearch. You would tweak and extend the limits of search systems beyond their default capabilities. You should also have an eye for SOLID design and a knack for resolving defects. In this role, you will design, develop, and test innovative solutions using GenAI technologies. You will work productively in a highly collaborative agile team, actively participate in knowledge sharing, and communicate across teams in a multinational environment.

WE'D LOVE TO TALK TO YOU IF YOU HAVE MANY OF THE FOLLOWING:
- MS/BS in Computer Science, Computer Engineering, or a related field of study, with 6-10 years of relevant experience.
- Strong expertise in Elasticsearch optimization, indexing, and aggregation, and in the emerging standards and engineering best practices in this area.
- Strong coding experience in an object-oriented programming language; mastery of programming fundamentals and debugging skills in Go and Python.
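A minimal, hedged sketch of the indexing-and-aggregation work described above, using the official Python client (elasticsearch-py 8.x); the index name, mapping, and query are illustrative assumptions:

```python
# Hedged sketch: an Elasticsearch tuning-and-aggregation flow with the official
# Python client (elasticsearch-py 8.x). Index, mapping, and query are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")  # placeholder endpoint

# Index-time choices drive search performance: explicit mappings avoid
# dynamic-mapping bloat, and keyword fields make aggregations cheap.
es.indices.create(
    index="logs-demo",
    settings={"number_of_shards": 3, "refresh_interval": "30s"},
    mappings={
        "properties": {
            "service": {"type": "keyword"},
            "latency_ms": {"type": "float"},
            "ts": {"type": "date"},
        }
    },
)

# A filtered aggregation: p95 latency per service over the last hour.
resp = es.search(
    index="logs-demo",
    size=0,  # aggregation-only; skip fetching hits
    query={"range": {"ts": {"gte": "now-1h"}}},
    aggs={
        "by_service": {
            "terms": {"field": "service"},
            "aggs": {"p95": {"percentiles": {"field": "latency_ms", "percents": [95]}}},
        }
    },
)
for bucket in resp["aggregations"]["by_service"]["buckets"]:
    print(bucket["key"], bucket["p95"]["values"]["95.0"])
```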
Posted 2 weeks ago