
1541 Talend Jobs - Page 38

Set up a Job Alert
JobPe aggregates job listings for easy access, but you apply directly on the original job portal.

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Key Responsibilities • Lead and mentor a high-performing data pod composed of data engineers, data analysts, and BI developers. • Design, implement, and maintain ETL pipelines and data workflows to support real-time and batch processing. • Architect and optimize data warehouses for scale, performance, and security. • Perform advanced data analysis and modeling to extract insights and support business decisions. • Lead data science initiatives including predictive modeling, NLP, and statistical analysis. • Manage and tune relational and non-relational databases (SQL, NoSQL) for availability and performance. • Develop Power BI dashboards and reports for stakeholders across departments. • Ensure data quality, integrity, and compliance with data governance and security standards. • Work with cross-functional teams (product, marketing, ops) to turn data into strategy. Qualifications Required: • PhD in Data Science, Computer Science, Engineering, Mathematics, or related field. • 7+ years of hands-on experience across data engineering, data science, analysis, and database administration. • Strong experience with ETL tools (e.g., Airflow, Talend, SSIS) and data warehouses (e.g., Snowflake, Redshift, BigQuery). • Proficient in SQL, Python, and Power BI. • Familiarity with modern cloud data platforms (AWS/GCP/Azure). • Strong understanding of data modeling, data governance, and MLOps practices. • Exceptional ability to translate business needs into scalable data solutions.

Posted 1 month ago

Apply

6.0 years

0 Lacs

India

Remote

Job Title: Data Engineer Location: [Remote] Job Type: Full-Time Experience: [2–6 Years / Mid-Senior Level / As per requirement] Salary: [As per industry standards] Job Summary: We are seeking a skilled and motivated Data Engineer to design, build, and maintain scalable data pipelines and architectures. The ideal candidate will work closely with data scientists, analysts, and business stakeholders to ensure efficient data collection, transformation, and access across the organization. Key Responsibilities: Design, construct, install, and maintain scalable data pipelines and architectures. Build and optimize ETL processes for data integration from a wide variety of sources. Develop and maintain datasets that are ready for business intelligence and machine learning use cases. Work with structured and unstructured data and integrate data from multiple sources and APIs. Ensure data quality, consistency, and availability across internal systems. Monitor and troubleshoot performance issues in data workflows. Collaborate with data scientists, analysts, and engineers to understand data needs and deliver appropriate solutions. Maintain and enhance data warehouse/lake infrastructure (e.g., Redshift, BigQuery, Snowflake, or similar). Implement best practices for data governance, security, and compliance. Required Skills & Qualifications: Bachelor’s or Master’s degree in Computer Science, Engineering, Information Systems, or related field. Proven experience (2–6 years) as a Data Engineer or in a similar role. Proficiency in SQL and experience with relational databases (e.g., PostgreSQL, MySQL, MS SQL Server). Strong programming skills in Python, Java, or Scala. Experience with ETL tools (e.g., Apache Airflow, Talend, dbt, Informatica). Familiarity with big data tools (e.g., Spark, Hadoop) and cloud platforms (AWS, Azure, GCP). Experience with data warehousing solutions (e.g., Snowflake, BigQuery, Redshift). Knowledge of data modeling, schema design, and version control systems (e.g., Git). Understanding of data privacy, security regulations, and best practices. Preferred Qualifications: Experience with containerization and orchestration tools (e.g., Docker, Kubernetes). Familiarity with real-time data processing tools (e.g., Kafka, Flink). Hands-on experience with DevOps for data workflows (CI/CD for data pipelines). Certifications in cloud platforms or data engineering tools. Benefits: Competitive salary package Health insurance and wellness benefits Flexible working hours and remote work opportunities Career development and upskilling support Collaborative and inclusive work culture
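The pipeline work this posting describes (ETL orchestration with tools such as Apache Airflow) can be pictured with a minimal sketch like the one below. It is illustrative only and assumes Airflow 2.4+: the DAG name, task logic, and the 100-unit threshold are assumptions, not part of the posting.

```python
# Minimal Airflow DAG sketch: extract -> transform -> load.
# Illustrative only; the DAG id, sample data, and threshold are assumed names/values.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**_):
    # Pull raw records from a hypothetical source system.
    return [{"order_id": 1, "amount": 120.0}, {"order_id": 2, "amount": 75.5}]


def transform(ti, **_):
    rows = ti.xcom_pull(task_ids="extract")
    # Apply a simple illustrative business rule: keep orders of at least 100.
    return [r for r in rows if r["amount"] >= 100]


def load(ti, **_):
    rows = ti.xcom_pull(task_ids="transform")
    # A real pipeline would write to a warehouse table here.
    print(f"loading {len(rows)} rows")


with DAG(
    dag_id="orders_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```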

Posted 1 month ago

Apply

3.0 years

0 Lacs

India

Remote

Title: Data Engineer Location: Remote Employment type: Full Time with BayOne We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics. What You'll Do Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable Work on modern data lakehouse architectures and contribute to data governance and quality frameworks Tech Stack Azure | Databricks | PySpark | SQL What We’re Looking For 3+ years' experience in data engineering or analytics engineering Hands-on with cloud data platforms and large-scale data processing Strong problem-solving mindset and a passion for clean, efficient data design Job Description: Min 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design, and dimensional data modelling. Solid knowledge of data warehouse best practices, development standards, and methodologies. Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. Be an independent self-learner with a “let’s get this done” approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities. Nice-to-Have Skills: Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge. SAP ECC/S/4 and HANA knowledge. Intermediate knowledge of Power BI. Azure DevOps and CI/CD deployments, cloud migration methodologies and processes. BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, and basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
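A minimal sketch, assuming illustrative storage paths and table names, of the Databricks/PySpark batch pattern this role centres on (ingest raw files, cleanse, write a Delta lakehouse table):

```python
# Minimal PySpark sketch of a batch ingest -> transform -> Delta write.
# The path "/mnt/datalake/raw/orders/" and table "analytics.orders_clean" are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_batch").getOrCreate()

# Read raw CSV files landed in cloud storage (e.g. ADLS mounted to the workspace).
raw = (
    spark.read.option("header", "true")
    .option("inferSchema", "true")
    .csv("/mnt/datalake/raw/orders/")
)

# Basic cleansing and typing before publishing to the lakehouse.
clean = (
    raw.dropDuplicates(["order_id"])
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("double"))
    .filter(F.col("amount").isNotNull())
)

# Write a Delta table for downstream analytics consumers.
clean.write.format("delta").mode("overwrite").saveAsTable("analytics.orders_clean")
```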

Posted 1 month ago

Apply

0 years

0 Lacs

Greater Kolkata Area

On-site

Description We are seeking a highly motivated ETL Data Engineer to join our dynamic data team. In this role, you will play a pivotal part in our data pipeline initiatives, where your expertise in ETL processes will be essential for transforming raw data into actionable insights. You will work closely with data analysts, data scientists, and other stakeholders to understand their data requirements and ensure that data is made accessible in a meaningful way. Your proficiency in designing and implementing robust ETL solutions will enable the organization to maintain high data quality and availability, facilitating key business decisions. About the Role: As an ETL Data Engineer, you will leverage your technical skills to develop data workflows, optimize data transformation processes, and troubleshoot data issues as they arise. You will also be responsible for ensuring compliance with data governance policies while utilizing best practices in data engineering. If you are passionate about data management and enjoy working in a fast-paced, collaborative environment, this opportunity is perfect for you to contribute significantly to our data initiatives and to grow your career within our organization. Responsibilities: Design and develop ETL processes to facilitate data extraction, transformation, and loading from various sources. Collaborate with data analysts and business stakeholders to understand data requirements and translate them into technical specifications. Ensure data quality and integrity through monitoring and validation of ETL processes and workflows. Optimize performance of existing ETL workflows and data pipelines to improve efficiency and reduce processing time. Implement data governance practices to maintain compliance with industry regulations and internal policies. Maintain and support ETL tools and frameworks, ensuring systems are running smoothly and efficiently. Document data processes and standards, providing training and support to team members as needed. Requirements: Bachelor's degree in Computer Science, Information Technology, or a related field. Proven experience as an ETL Data Engineer or similar role in data engineering. Strong proficiency in ETL tools such as Apache NiFi, Talend, Informatica, or similar technologies. Experience with databases such as SQL Server, Oracle, MySQL, or PostgreSQL and knowledge of SQL scripting. Familiarity with cloud platforms like AWS, Azure, or Google Cloud for data warehousing solutions. Understanding of data modeling concepts and experience with data architecture. Ability to work collaboratively in a team environment and communicate effectively with both technical and non-technical stakeholders. (ref:hirist.tech)

Posted 1 month ago

Apply

1.0 - 5.0 years

7 - 8 Lacs

Bengaluru

Work from Office

Diverse Lynx is looking for a Snowflake Developer to join our dynamic team and embark on a rewarding career journey. A Snowflake Developer is responsible for designing and developing data solutions within the Snowflake cloud data platform. They play a critical role in helping organizations to store, process, and analyze their data effectively and efficiently. Responsibilities: Design and develop data solutions within the Snowflake cloud data platform, including data warehousing, data lake, and data modeling solutions. Participate in the design and implementation of data migration strategies. Ensure the quality of custom solutions through the implementation of appropriate testing and debugging procedures. Provide technical support and troubleshoot issues as needed. Stay up-to-date with the latest developments in the Snowflake platform and data warehousing technologies. Contribute to the ongoing improvement of development processes and best practices. Requirements: Experience in data warehousing and data analytics. Strong knowledge of SQL and data warehousing concepts. Experience with Snowflake, or other cloud data platforms, is preferred. Ability to analyze and interpret data. Excellent written and verbal communication skills. Ability to work independently and as part of a team. Strong attention to detail and ability to work in a fast-paced environment.

Posted 1 month ago

Apply

5.0 - 12.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Experience Range: 5-12 Years. Location: Mumbai/Pune/Bangalore/Hyderabad/Chennai. Job Description: We are primarily looking for a data engineer with expertise in processing data pipelines using Databricks Spark SQL on Hadoop distributions like AWS EMR, Databricks, Cloudera, etc. Should be very proficient in doing large-scale data operations using Databricks and overall very comfortable using Python. Familiarity with AWS compute, storage, and IAM concepts. Experience in working with S3 Data Lake as the storage tier. Any ETL background (Talend, AWS Glue, etc.) is a plus but not required. Cloud warehouse experience (Snowflake, etc.) is a huge plus. Other ideal qualifications: Carefully evaluates alternative risks and solutions before taking action. Optimizes the use of all available resources. Develops solutions to meet business needs that reflect a clear understanding of the objectives, practices, and procedures of the corporation, department, and business unit.

Posted 1 month ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

The Data Engineer is accountable for developing high quality data products to support the Bank’s regulatory requirements and data driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence they will contribute to business outcomes on an agile team. Responsibilities Developing and supporting scalable, extensible, and highly available data solutions Deliver on critical business priorities while ensuring alignment with the wider architectural vision Identify and help address potential risks in the data supply chain Follow and contribute to technical standards Design and develop analytical data models Required Qualifications & Work Experience First Class Degree in Engineering/Technology (4-year graduate course) 2-4 years’ experience implementing data-intensive solutions using agile methodologies Experience of relational databases and using SQL for data querying, transformation and manipulation Experience of modelling data for analytical consumers Ability to automate and streamline the build, test and deployment of data pipelines Experience in cloud native technologies and patterns A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training Excellent communication and problem-solving skills Technical Skills (Must Have) ETL: Hands on experience of building data pipelines. Proficiency in at least one of the data integration platforms such as Ab Initio, Apache Spark, Talend and Informatica Big Data: Exposure to ‘big data’ platforms such as Hadoop, Hive or Snowflake for data storage and processing Data Warehousing & Database Management: Understanding of Data Warehousing concepts, Relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design Data Modeling & Design: Good exposure to data modeling techniques; design, optimization and maintenance of data models and data structures Languages: Proficient in one or more programming languages commonly used in data engineering such as Python, Java or Scala DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management Technical Skills (Valuable) Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across full suite of Ab Initio toolsets e.g., GDE, Express>IT, Data Profiler and Conduct>IT, Control>Center, Continuous>Flows Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs Data Quality & Controls: Exposure to data validation, cleansing, enrichment and data controls Containerization: Fair understanding of containerization platforms like Docker, Kubernetes File Formats: Exposure in working on Event/File/Table Formats such as Avro, Parquet, Protobuf, Iceberg, Delta Others: Basics of job schedulers like Autosys. Basics of entitlement management Certification on any of the above topics would be an advantage.
------------------------------------------------------ Job Family Group: Technology ------------------------------------------------------ Job Family: Digital Software Engineering ------------------------------------------------------ Time Type: Full time ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.

Posted 1 month ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

At PwC, our people in software and product innovation focus on developing cutting-edge software solutions and driving product innovation to meet the evolving needs of clients. These individuals combine technical experience with creative thinking to deliver innovative software products and solutions. In testing and quality assurance at PwC, you will focus on the process of evaluating a system or software application to identify any defects, errors, or gaps in its functionality. Working in this area, you will execute various test cases and scenarios to validate that the system meets the specified requirements and performs as expected. Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities. Skills Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to: Apply a learning mindset and take ownership for your own development. Appreciate diverse perspectives, needs, and feelings of others. Adopt habits to sustain high performance and develop your potential. Actively listen, ask questions to check understanding, and clearly express ideas. Seek, reflect, act on, and give feedback. Gather information from a range of sources to analyse facts and discern patterns. Commit to understanding how the business works and building commercial awareness. Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements. Job Summary A career in our Managed Services team will provide you an opportunity to collaborate with a wide array of teams to help our clients implement and operate new capabilities, achieve operational efficiencies, and harness the power of technology. Our Analytics and Insights Managed Services team bring a unique combination of industry expertise, technology, data management and managed services experience to create sustained outcomes for our clients and improve business performance. We empower companies to transform their approach to analytics and insights while building your skills in exciting new directions. Have a voice at our table to help design, build and operate the next generation of software and services that manage interactions across all aspects of the value chain. Job Description To really stand out and make us fit for the future in a constantly changing world, each and every one of us at PwC needs to be a purpose-led and values-driven leader at every level. To help us achieve this we have the PwC Professional; our global leadership development framework. It gives us a single set of expectations across our lines, geographies and career paths, and provides transparency on the skills we need as individuals to be successful and progress in our careers, now and in the future. JD for ETL tester at Associate level As an ETL Tester, you will be responsible for designing, developing, and executing SQL scripts to ensure the quality and functionality of our ETL processes. 
You will work closely with our development and data engineering teams to identify test requirements and drive the implementation of automated testing solutions. Minimum Degree Required: Bachelor's Degree. Degree Preferred: Bachelor's in Computer Engineering. Minimum Years of Experience: 7 year(s) of IT experience. Certifications Required: NA. Certifications Preferred: Automation Specialist for TOSCA, LambdaTest certifications. Required Knowledge/Skills: Collaborate with data engineers to understand ETL workflows and requirements. Perform data validation and testing to ensure data accuracy and integrity. Create and maintain test plans, test cases, and test data. Identify, document, and track defects, and work with development teams to resolve issues. Participate in design and code reviews to provide feedback on testability and quality. Develop and maintain automated test scripts using Python for ETL processes. Ensure compliance with industry standards and best practices in data testing. Qualifications: Solid understanding of SQL and database concepts. Proven experience in ETL testing and automation. Strong proficiency in Python programming. Familiarity with ETL tools such as Apache NiFi, Talend, Informatica, or similar. Knowledge of data warehousing and data modeling concepts. Strong analytical and problem-solving skills. Excellent communication and collaboration abilities. Experience with version control systems like Git. Preferred Knowledge/Skills: Demonstrates extensive knowledge and/or a proven record of success in the following areas: Experience with cloud platforms such as AWS, Azure, or Google Cloud. Familiarity with CI/CD pipelines and tools like Jenkins or GitLab. Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
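For the SQL-plus-Python ETL test automation described above, a minimal sketch might look like the following. The connection fixture, schema names, and business keys are assumptions for illustration, not part of the posting.

```python
# Minimal pytest sketch for ETL validation: row counts, null keys, duplicates.
# The connection fixture and the table/column names are illustrative placeholders.
import pytest


def run_scalar(conn, sql):
    # Execute a query that returns a single value and fetch it.
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]


@pytest.fixture
def conn():
    # Replace with a real DB-API connection (e.g. pyodbc, snowflake.connector).
    pytest.skip("configure a database connection for the environment under test")


def test_row_counts_match(conn):
    src = run_scalar(conn, "SELECT COUNT(*) FROM staging.orders")
    tgt = run_scalar(conn, "SELECT COUNT(*) FROM dw.fact_orders")
    assert src == tgt, f"row count mismatch: source={src}, target={tgt}"


def test_no_null_business_keys(conn):
    nulls = run_scalar(conn, "SELECT COUNT(*) FROM dw.fact_orders WHERE order_id IS NULL")
    assert nulls == 0


def test_no_duplicate_business_keys(conn):
    dupes = run_scalar(
        conn,
        "SELECT COUNT(*) FROM (SELECT order_id FROM dw.fact_orders "
        "GROUP BY order_id HAVING COUNT(*) > 1) d",
    )
    assert dupes == 0
```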

Posted 1 month ago

Apply

4.0 - 8.0 years

0 Lacs

India

On-site

About the Role As a BI/DW Developer (Software Engineer), your responsibilities are to: 1. Design and implement scalable ETL/ELT pipelines using tools like SQL, Python, or Spark. 2. Build and maintain data models and data marts to support analytics and reporting use cases. 3. Develop, maintain, and optimize dashboards and reports using BI tools such as Power BI, Tableau, or Looker. 4. Collaborate with stakeholders to understand data needs and translate them into technical requirements. 5. Perform data validation and ensure data quality and integrity across systems. 6. Monitor and troubleshoot data workflows and reports to ensure accuracy and performance. 7. Contribute to the data platform architecture, including database design and cloud data infrastructure. 8. Mentor junior team members and promote best practices in BI and DW development. Skills we’re looking for : Strong proficiency in SQL and experience working with large-scale relational databases (e.g., Snowflake, Redshift, BigQuery, PostgreSQL). Experience with modern ETL/ELT tools such as dbt, Apache Airflow, Talend, Informatica, or custom pipelines using Python. Proven expertise in one or more BI tools (Power BI, Tableau, Looker, etc.). Solid understanding of data warehousing concepts, dimensional modeling, and star/snowflake schemas. Strong problem-solving skills and attention to detail. Good verbal communication skills in English. Delivery focused, a go-getter! Work experience requirements : 4-8 years in BI/DW or Data Engineering roles Exposure to machine learning workflows or advanced analytics is a plus Whom this role is suited for: Available to join asap, and work from the Kochi office Experience in start-up environments preferred Job Location: Carnival Infopark, Kochi

Posted 1 month ago

Apply

6.0 years

0 Lacs

Bhopal, Madhya Pradesh, India

On-site

Experience: Min 6+ Years Job Title: Data Engineer – Real-Time Streaming & Integration (Apache Kafka) Location: Bhopal, Madhya Pradesh On-site role with opportunities to work on enterprise-scale data platforms Note: Resource working on site will be provided with accommodation, lunch, and dinner by the client for the complete project duration. The working week is 6 days (Monday – Saturday). Role Overview: We are seeking a highly skilled and experienced Data Engineer with 6+ years of experience in designing and implementing real-time data processing pipelines and streaming integrations. This role is ideal for professionals with deep expertise in Apache Kafka, Kafka Connect, and modern ETL/ELT processes. As a Data Engineer, you will play a critical role in building and optimizing data integration frameworks to support large-scale, low-latency, and high-throughput data platforms across enterprise systems. Your contributions will directly impact data accessibility, business intelligence, and operational efficiency. Key Responsibilities: Design, develop, and maintain real-time streaming data pipelines using Apache Kafka and Kafka Connect. Implement and optimize ETL/ELT processes for structured and semi-structured data from various sources. Build and maintain scalable data ingestion, transformation, and enrichment frameworks across multiple environments. Collaborate with data architects, analysts, and application teams to deliver integrated data solutions that meet business requirements. Ensure high availability, fault tolerance, and performance tuning for streaming data infrastructure. Monitor, troubleshoot, and enhance Kafka clusters, connectors, and consumer applications. Enforce data governance, quality, and security standards throughout the pipeline lifecycle. Automate workflows using orchestration tools and CI/CD pipelines for deployment and version control. Required Skills & Qualifications: Strong hands-on experience with Apache Kafka, Kafka Connect, and Kafka Streams. Expertise in designing real-time data pipelines and stream processing architectures. Solid experience with ETL/ELT frameworks using tools like Apache NiFi, Talend, or custom Python/Scala-based solutions. Proficiency in at least one programming language: Python, Java, or Scala. Deep understanding of message serialization formats (e.g., Avro, Protobuf, JSON). Strong SQL skills and experience working with data lakes, warehouses, or relational databases. Familiarity with schema registry, data partitioning, and offset management in Kafka. Experience with Linux environments, containerization, and CI/CD best practices. Preferred Qualifications: Experience with cloud-native data platforms (e.g., AWS MSK, Azure Event Hubs, GCP Pub/Sub). Exposure to stream processing engines like Apache Flink or Spark Structured Streaming. Familiarity with data lake architectures, data mesh concepts, or real-time analytics platforms. Knowledge of DevOps tools like Docker, Kubernetes, Git, and Jenkins. Work Experience: 6+ years of experience in data engineering with a focus on streaming data and real-time integrations. Proven track record of implementing data pipelines in production-grade enterprise environments. Education Requirements: Bachelor’s or Master’s degree in Computer Science, Information Technology, or a related field. Certifications in data engineering, Kafka,
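A minimal sketch, assuming a hypothetical local broker and topic names, of the consume-enrich-republish pattern that the Kafka responsibilities above revolve around:

```python
# Minimal Kafka streaming sketch: consume JSON events, enrich, re-publish downstream.
# Topic names, bootstrap servers, and the conversion rate are illustrative placeholders.
import json

from kafka import KafkaConsumer, KafkaProducer

consumer = KafkaConsumer(
    "orders.raw",
    bootstrap_servers="localhost:9092",
    group_id="orders-enricher",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    auto_offset_reset="earliest",
)
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)

for message in consumer:
    event = message.value
    # Simple illustrative enrichment step before re-publishing.
    event["amount_inr"] = round(event.get("amount_usd", 0.0) * 83.0, 2)
    producer.send("orders.enriched", value=event)
```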

Posted 1 month ago

Apply

0.0 - 5.0 years

27 - 30 Lacs

Pune, Maharashtra

Remote

We’re Hiring: Talend Lead – ETL/Data Integration Location: Bengaluru / Hyderabad / Chennai / Pune (Hybrid – 1–2 days/week in-office) Position: Full-time | Open Roles: 1 Work Hours: 2 PM – 11 PM IST (Work from office till 6 PM, continue remotely after) CTC: ₹27–30 LPA (including 5% variable) Notice Period: 0–30 Days (Serving notice preferred) About the Role: We’re looking for a seasoned Talend Lead with 8+ years of experience, including 5–6 years specifically in Talend ETL development. This role demands hands-on technical expertise along with the ability to mentor teams, build scalable data integration pipelines, and contribute to high-impact enterprise data projects. Key Responsibilities: Lead and mentor a small team of Talend ETL developers (6+ months of lead experience acceptable) Design, build, and optimize data integration solutions using Talend and AWS Collaborate with business stakeholders and project teams to define requirements and architecture Implement robust and scalable ETL pipelines integrating various data sources: RDBMS, NoSQL, APIs, cloud platforms Perform advanced SQL querying, transformation logic, and performance tuning Ensure adherence to development best practices, documentation standards, and job monitoring Handle job scheduling, error handling, and data quality checks Stay updated with Talend platform features and contribute to the evolution of ETL frameworks Required Skills & Experience: 8+ years in data engineering/ETL, including 5+ years in Talend Minimum 6–12 months of team leadership or mentorship experience Proficient in Talend Studio, TAC/TMC, and AWS services (S3, Glue, Athena, EC2, RDS, Redshift) Strong command of SQL for data manipulation and transformation Experience integrating data from various formats and protocols: JSON, XML, REST/SOAP APIs, CSV, flat files Familiar with data warehousing principles, ETL/ELT processes, and data profiling Working knowledge of monitoring tools, job schedulers, and Git for version control Effective communicator with the ability to work in distributed teams Must not have short-term projects or gaps of more than 3 months. No JNTU profiles will be considered. Preferred Qualifications: Experience leading a team of 8+ Talend developers Experience working with US healthcare clients Bachelor’s degree in Computer Science/IT or related field Talend and AWS certifications (e.g., Talend Developer, AWS Cloud Practitioner) Knowledge of Terraform, GitLab, and CI/CD pipelines Familiarity with scripting (Python or Shell) Exposure to Big Data tools (Hadoop, Spark) and Talend Big Data Suite Experience working in Agile environments Job Types: Full-time, Permanent Pay: ₹2,700,000.00 - ₹3,000,000.00 per year Schedule: Day shift Evening shift Monday to Friday Ability to commute/relocate: Pune, Maharashtra: Reliably commute or planning to relocate before starting work (Required) Education: Bachelor's (Required) Experience: Talend: 8 years (Required) ETL: 5 years (Required) Work Location: In person

Posted 1 month ago

Apply

3.0 years

0 Lacs

Calcutta

On-site

Ready to shape the future of work? At Genpact, we don’t just adapt to change—we drive it. AI and digital innovation are redefining industries, and we’re leading the charge. Genpact’s AI Gigafactory, our industry-first accelerator, is an example of how we’re scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies’ most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that’s shaping the future, this is your moment. Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions – we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook. Inviting applications for the role of Lead Consultant - Sr. Data Engineer (DBT+Snowflake)! In this role, the Sr. Data Engineer is responsible for providing technical direction and leading a group of one or more developers to address a goal. Job Description: Develop, implement, and optimize data pipelines using Snowflake, with a focus on Cortex AI capabilities. Extract, transform, and load (ETL) data from various sources into Snowflake, ensuring data integrity and accuracy. Implement Conversational AI solutions using Snowflake Cortex AI to facilitate data interaction through ChatBot agents. Collaborate with data scientists and AI developers to integrate predictive analytics and AI models into data workflows. Monitor and troubleshoot data pipelines to resolve data discrepancies and optimize performance. Utilize Snowflake's advanced features, including Snowpark, Streams, and Tasks, to enable data processing and analysis. Develop and maintain data documentation, best practices, and data governance protocols. Ensure data security, privacy, and compliance with organizational and regulatory guidelines. Roles and Responsibilities: • Bachelor’s degree in Computer Science, Data Engineering, or a related field. • Experience in data engineering, with at least 3 years of experience working with Snowflake. • Proven experience in Snowflake Cortex AI, focusing on data extraction, chatbot development, and Conversational AI. • Strong proficiency in SQL, Python, and data modeling. • Experience with data integration tools (e.g., Matillion, Talend, Informatica). • Knowledge of cloud platforms such as AWS, Azure, or GCP. • Excellent problem-solving skills, with a focus on data quality and performance optimization. • Strong communication skills and the ability to work effectively in a cross-functional team. Proficiency in using DBT's testing and documentation features to ensure the accuracy and reliability of data transformations. Understanding of data lineage and metadata management concepts, and ability to track and document data transformations using DBT's lineage capabilities. Understanding of software engineering best practices and ability to apply these principles to DBT development, including version control, code reviews, and automated testing. Should have experience building data ingestion pipelines.
Should have experience with Snowflake utilities such as SnowSQL, Snowpipe, bulk copy, Snowpark, tables, Tasks, Streams, Time Travel, Cloning, Optimizer, Metadata Manager, data sharing, stored procedures and UDFs, Snowsight. Should have good experience in implementing CDC or SCD Type 2. Proficiency in working with Airflow or other workflow management tools for scheduling and managing ETL jobs. Good to have experience in repository tools like GitHub/GitLab, Azure Repos. Skill Matrix: DBT (Core or Cloud), Snowflake, AWS/Azure, SQL, ETL concepts, Airflow or any orchestration tools, Data Warehousing concepts. Qualifications/Minimum Qualifications: B.E./Master's in Computer Science, Information Technology, or Computer Engineering, or any equivalent degree, with good IT experience and relevant working experience as a Sr. Data Engineer with DBT+Snowflake skillsets. Why join Genpact? Be a transformation leader – Work at the cutting edge of AI, automation, and digital innovation Make an impact – Drive change for global enterprises and solve business challenges that matter Accelerate your career – Get hands-on experience, mentorship, and continuous learning opportunities Work with the best – Join 140,000+ bold thinkers and problem-solvers who push boundaries every day Thrive in a values-driven culture – Our courage, curiosity, and incisiveness - built on a foundation of integrity and inclusion - allow your ideas to fuel progress Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: Up. Let’s build tomorrow together. Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please do note that Genpact does not charge fees to process job applications and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training. Job: Lead Consultant | Primary Location: India-Kolkata | Schedule: Full-time | Education Level: Bachelor's / Graduation / Equivalent | Job Posting: Jun 19, 2025, 3:50:28 AM | Unposting Date: Ongoing | Master Skills List: Digital | Job Category: Full Time
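The Streams/Tasks and CDC requirements above can be sketched roughly as follows with the Snowflake Python connector; the account details and every object name here are illustrative assumptions, not an actual environment.

```python
# Minimal sketch: create a Snowflake Stream on a raw table and a Task that
# periodically merges the captured changes into a curated table (CDC-style).
# Account, warehouse, and object names are illustrative placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",
    user="etl_user",
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="STAGING",
)

statements = [
    # The stream captures inserts/updates/deletes on the raw table.
    "CREATE OR REPLACE STREAM raw_orders_stream ON TABLE raw_orders",
    # The task runs on a schedule and applies the captured changes.
    """
    CREATE OR REPLACE TASK merge_orders_task
      WAREHOUSE = ETL_WH
      SCHEDULE = '15 MINUTE'
    AS
      MERGE INTO curated.orders AS t
      USING raw_orders_stream AS s
        ON t.order_id = s.order_id
      WHEN MATCHED THEN UPDATE SET t.amount = s.amount, t.updated_at = CURRENT_TIMESTAMP()
      WHEN NOT MATCHED THEN INSERT (order_id, amount, updated_at)
        VALUES (s.order_id, s.amount, CURRENT_TIMESTAMP())
    """,
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK merge_orders_task RESUME",
]

cur = conn.cursor()
for sql in statements:
    cur.execute(sql)
cur.close()
conn.close()
```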

Posted 1 month ago

Apply

7.0 years

0 Lacs

Gandhinagar, Gujarat, India

On-site

Key Responsibilities Lead and mentor a high-performing data pod composed of data engineers, data analysts, and BI developers. Design, implement, and maintain ETL pipelines and data workflows to support real-time and batch processing. Architect and optimize data warehouses for scale, performance, and security. Perform advanced data analysis and modeling to extract insights and support business decisions. Lead data science initiatives including predictive modeling, NLP, and statistical analysis. Manage and tune relational and non-relational databases (SQL, NoSQL) for availability and performance. Develop Power BI dashboards and reports for stakeholders across departments. Ensure data quality, integrity, and compliance with data governance and security standards. Work with cross-functional teams (product, marketing, ops) to turn data into strategy. Qualifications Required: PhD in Data Science, Computer Science, Engineering, Mathematics, or related field. 7+ years of hands-on experience across data engineering, data science, analysis, and database administration. Strong experience with ETL tools (e.g., Airflow, Talend, SSIS) and data warehouses (e.g., Snowflake, Redshift, BigQuery). Proficient in SQL, Python, and Power BI. Familiarity with modern cloud data platforms (AWS/GCP/Azure). Strong understanding of data modeling, data governance, and MLOps practices. Exceptional ability to translate business needs into scalable data solutions.

Posted 1 month ago

Apply

6.0 - 12.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Job Requirements Role/ Job Title: Senior Data Analyst Function/ Department: Data & Analytics Job Purpose The Senior Data Analyst (DG) will work within the Data & Analytics Office to implement the data governance framework with a focus on improvement of data quality, standards, metrics, and processes; align data management practices with regulatory requirements; and build an understanding of lineage – how the data is produced, managed, and consumed within the Bank's business processes and systems. Roles & Responsibilities Demonstrate strong understanding of data governance, data quality, data lineage, and metadata management concepts. Participate in the data quality governance framework design and optimization, including process, standards, rules, etc. Design and implement data quality rules and monitoring mechanisms. Analyze data quality issues and collaborate with business stakeholders on issue resolution; build recovery models across the enterprise. Knowledge of DG technologies for data quality and metadata management (OvalEdge, Talend, Collibra, etc.). Support the development of centralized metadata repositories (business glossary, technical metadata, etc.), capture business/data quality rules, and design DQ reports & dashboards. Improve data literacy among the stakeholders. Minimum 6 to 12 years of experience in data governance; banking domain preferable. Education Qualification Graduation: Bachelor of Science (B.Sc) / Bachelor of Technology (B.Tech) / Bachelor of Computer Applications (BCA) Post-graduation: Master of Science (M.Sc) / Master of Technology (M.Tech) / Master of Computer Applications (MCA) Experience: 5 to 10 years of relevant experience.

Posted 1 month ago

Apply

5.0 - 8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Job Description Role Purpose The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists. Do Oversee and support process by reviewing daily transactions on performance parameters Review performance dashboard and the scores for the team Support the team in improving performance parameters by providing technical support and process guidance Record, track, and document all queries received, problem-solving steps taken and total successful and unsuccessful resolutions Ensure standard processes and procedures are followed to resolve all client queries Resolve client queries as per the SLA’s defined in the contract Develop understanding of process/ product for the team members to facilitate better client interaction and troubleshooting Document and analyze call logs to spot most occurring trends to prevent future problems Identify red flags and escalate serious client issues to Team leader in cases of untimely resolution Ensure all product information and disclosures are given to clients before and after the call/email requests Avoids legal challenges by monitoring compliance with service agreements Handle technical escalations through effective diagnosis and troubleshooting of client queries Manage and resolve technical roadblocks/ escalations as per SLA and quality requirements If unable to resolve the issues, timely escalate the issues to TA & SES Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions Troubleshoot all client queries in a user-friendly, courteous and professional manner Offer alternative solutions to clients (where appropriate) with the objective of retaining customers’ and clients’ business Organize ideas and effectively communicate oral messages appropriate to listeners and situations Follow up and make scheduled call backs to customers to record feedback and ensure compliance to contract SLA’s Build people capability to ensure operational excellence and maintain superior customer service levels of the existing account/client Mentor and guide Production Specialists on improving technical knowledge Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialist Develop and conduct trainings (Triages) within products for production specialist as per target Inform client about the triages being conducted Undertake product trainings to stay current with product features, changes and updates Enroll in product specific and any other trainings per client requirements/recommendations Identify and document most common problems and recommend appropriate resolutions to the team Update job knowledge by participating in self learning opportunities and maintaining personal networks. Deliver (performance parameters and measures): 1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, Customer feedback, NSAT/ESAT. 2. Team Management: Productivity, efficiency, absenteeism. 3. Capability Development: Triages completed, Technical Test performance. Mandatory Skills: Talend DI. Experience: 5-8 Years. Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention.
Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 1 month ago

Apply

6.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Hiring!! Hiring!! Hiring!! We have an opening for a Data Engineer at Gurugram. Job title: Data Engineer / Data Platforms (AWS) Experience: 6+ years Location: Gurgaon NP: Immediate/serving (up to 15 days) Responsibility: Mandatory Skills: AWS, Hadoop, Apache Spark, Hive, SQL Nice-to-Have Skills: Power BI, Talend, Control-M, Shell Scripting, Airflow Job Description: Work experience in Oracle Database and Structured Query Language (SQL); knowledge of Big Data technologies such as Hadoop, Apache Spark, and Hive, and of ETL (Extract, Transform, Load); knowledge of cloud platforms, particularly Amazon Web Services (AWS); version control using Git; Linux environment; basic understanding of machine learning concepts; experience with Power BI for data visualization; basic shell scripting and Talend ETL. Added advantages: Scheduling tools – Control-M, Airflow; Monitoring tools – Sumo Logic, Splunk; familiarity with the Incident Management process.

Posted 1 month ago

Apply

8.0 years

0 Lacs

India

Remote

Job Title: Data Testing Engineer Exp: 8+ years Location: Remote Notice Period: Immediate to 15 days Job Description: Develop, maintain, and execute test cases to validate the accuracy, completeness, and consistency of data across different layers of the data warehouse. ● Test ETL processes to ensure that data is correctly extracted, transformed, and loaded from source to target systems while adhering to business rules. ● Perform source-to-target data validation to ensure data integrity and identify any discrepancies or data quality issues. ● Develop automated data validation scripts using SQL, Python, or testing frameworks to streamline and scale testing efforts. ● Conduct testing in cloud-based data platforms (e.g., AWS Redshift, Google BigQuery, Snowflake), ensuring performance and scalability. ● Familiarity with ETL testing tools and frameworks (e.g., Informatica, Talend, dbt). ● Experience with scripting languages to automate data testing. ● Familiarity with data visualization tools like Tableau, Power BI, or Looker.
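A minimal sketch of the source-to-target validation called out above: compare a few aggregates between the source system and the warehouse and report mismatches. The connection objects, table names, and the specific checks are assumptions for illustration.

```python
# Minimal source-to-target reconciliation sketch.
# CHECKS, table names, and the passed-in connections are illustrative placeholders.
CHECKS = {
    "row_count": "SELECT COUNT(*) FROM {table}",
    "total_amount": "SELECT COALESCE(SUM(amount), 0) FROM {table}",
    "max_order_date": "SELECT MAX(order_date) FROM {table}",
}


def fetch_scalar(conn, sql):
    # Run a single-value query against any DB-API connection.
    cur = conn.cursor()
    cur.execute(sql)
    return cur.fetchone()[0]


def reconcile(source_conn, target_conn, source_table, target_table):
    """Return a list of (check_name, source_value, target_value) mismatches."""
    mismatches = []
    for name, template in CHECKS.items():
        src = fetch_scalar(source_conn, template.format(table=source_table))
        tgt = fetch_scalar(target_conn, template.format(table=target_table))
        if src != tgt:
            mismatches.append((name, src, tgt))
    return mismatches


# Usage sketch (assumed connections, e.g. a source OLTP driver and a warehouse driver):
#   issues = reconcile(src_conn, dw_conn, "sales.orders", "analytics.fact_orders")
#   for check, s, t in issues:
#       print(f"{check}: source={s} target={t}")
```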

Posted 1 month ago

Apply

10.0 years

0 Lacs

Bengaluru, Karnataka

On-site

Bangalore, Karnataka, India Job ID 768618 Join our Team About this opportunity: Data and Analytics is a function with the aim to craft and execute our Analytics and AI strategy and vision, and to ensure multi-functional execution for Ericsson to transform into an Intelligent enterprise. AI and Data is a key theme for Ericsson to become data driven, and we continue to build up elite competence in this area to increase performance, deliver intelligence, and to ensure secure and scalable development, deployment, and continued operations of Analytics and AI solutions. What you will do: Facilitate requirement analysis with stakeholders for Data Collection, Analytics, & Machine Learning requirements Work with the IT Digital Product Owner to ensure understanding of business requirements in Data Management for the respective functional area Identify and propose new opportunities for data services Define characteristics for wanted scale, redundancy, distribution, protection, etc. criteria for different types of data services Work with Developers and other FA Architects for cross-functional data requirements Support scrum teams as a technical lead during development. Define and design appropriate interfaces for data update, retrieval queries, and recommended workflows based on the specific data use case The skills you bring: Strong prior experience in Data Warehousing and BI, irrespective of technology. Prior experience as an Architect leading a delivery team Strong hands-on prior experience as a BI/ETL developer. Education / Experience Required (In Years) We are looking for a candidate with 10+ years of experience as a Data Architect, Solution Architect, or Data Engineer who has attained a Graduate degree (B.E/B.Tech/M.Tech) in Computer Science and Information Systems or another quantitative field. Software/tools: The following experiences and skills are required. Strong understanding and experience of Snowflake architecture Strong data modelling experience Experienced in batch and streaming processing (for instance, Apache Spark, Storm) and relevant back-end languages such as Python, PySpark, and SQL Software Development including Agile/Scrum, CI/CD, Testing and Configuration Management, Git/Gerrit BI tools such as Tableau, Power BI, and also web-based dashboards and solutions Any of the experiences and skills listed below is a merit, the more the better: Cloud-agnostic data platforms such as Snowflake, Databricks, and SAP Data Warehouse Cloud Able to create the design and data modelling independently using Data Vault 2.0 Big data technologies such as Hadoop, Hive, Pig, and MapR, or enterprise data warehousing initiatives Cloud-native data platforms (Azure Synapse Analytics, Amazon Redshift) Azure services – Azure Data Factory (ADF), Azure Databricks (ADB) Designing data products and publishing data using a data marketplace platform Authorization methods including Role-Based Access Control and Policy-Based Access Control Data mesh implementation and federated model for data product development Relational data modeling with 3NF Modern data architectures including cloud native, microservices architecture, virtualization, Kubernetes, and containerization Messaging (for instance, Apache Kafka) NoSQL DBs, like Cassandra and MongoDB RDBMS such as Oracle, MS SQL, MariaDB, or PostgreSQL, both row-based and columnar Experience with ETL tools (Talend, Informatica, BODS, etc.) Why join Ericsson? At Ericsson, you’ll have an outstanding opportunity. The chance to use your skills and imagination to push the boundaries of what’s possible.
To build solutions never seen before to some of the world’s toughest problems. You’ll be challenged, but you won’t be alone. You’ll be joining a team of diverse innovators, all driven to go beyond the status quo to craft what comes next. What happens once you apply?

Posted 1 month ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana

On-site

Job Information Date Opened 06/19/2025 Job Type Full time Work Experience 5+ years Industry IT Services City Hyderabad State/Province Telangana Country India Zip/Postal Code 500032 Job Description As a Data Governance Lead at Kanerika, you will be responsible for defining, leading, and operationalizing the data governance framework, ensuring enterprise-wide alignment and regulatory compliance. Key Responsibilities: 1. Governance Strategy & Stakeholder Alignment Develop and maintain enterprise data governance strategies, policies, and standards. Align governance with business goals: compliance, analytics, and decision-making. Collaborate across business, IT, legal, and compliance teams for role alignment. Drive governance training, awareness, and change management programs. 2. Microsoft Purview Administration & Implementation Manage Microsoft Purview accounts, collections, and RBAC aligned to org structure. Optimize Purview setup for large-scale environments (50TB+). Integrate with Azure Data Lake, Synapse, SQL DB, Power BI, Snowflake. Schedule scans, set classification jobs, and maintain collection hierarchies. 3. Metadata & Lineage Management Design metadata repositories and maintain business glossaries and data dictionaries. Implement ingestion workflows via ADF, REST APIs, PowerShell, Azure Functions. Ensure lineage mapping (ADF, Synapse, Power BI) and impact analysis. 4. Data Classification & Security Governance Define classification rules and sensitivity labels (PII, PCI, PHI). Integrate with MIP, DLP, Insider Risk Management, and Compliance Manager. Enforce records management, lifecycle policies, and information barriers. 5. Data Quality & Policy Management Define KPIs and dashboards to monitor data quality across domains. Collaborate on rule design, remediation workflows, and exception handling. Ensure policy compliance (GDPR, HIPAA, CCPA, etc.) and risk management. 6. Business Glossary & Stewardship Maintain business glossary with domain owners and stewards in Purview. Enforce approval workflows, standard naming, and steward responsibilities. Conduct metadata audits for glossary and asset documentation quality. 7. Automation & Integration Automate governance processes using PowerShell, Azure Functions, Logic Apps. Create pipelines for ingestion, lineage, glossary updates, tagging. Integrate with Power BI, Azure Monitor, Synapse Link, Collibra, BigID, etc. 8. Monitoring, Auditing & Compliance Set up dashboards for audit logs, compliance reporting, metadata coverage. Oversee data lifecycle management across its phases. Support internal and external audit readiness with proper documentation. Requirements 7+ years of experience in data governance and data management. Proficient in Microsoft Purview and Informatica data governance tools. Strong in metadata management, lineage mapping, classification, and security. Experience with ADF, REST APIs, Talend, dbt, and automation via Azure tools. Knowledge of GDPR, CCPA, HIPAA, SOX, and related compliance needs. Skilled in bridging technical governance with business and compliance goals. Benefits Culture: Open Door Policy: Encourages open communication and accessibility to management. Open Office Floor Plan: Fosters a collaborative and interactive work environment. Flexible Working Hours: Allows employees to have flexibility in their work schedules. Employee Referral Bonus: Rewards employees for referring qualified candidates. Appraisal Process Twice a Year: Provides regular performance evaluations and feedback. 2.
Inclusivity and Diversity: Hiring practices that promote diversity: Ensures a diverse and inclusive workforce. Mandatory POSH training: Promotes a safe and respectful work environment. 3. Health Insurance and Wellness Benefits: GMC and Term Insurance: Offers medical coverage and financial protection. Health Insurance: Provides coverage for medical expenses. Disability Insurance: Offers financial support in case of disability. 4. Child Care & Parental Leave Benefits: Company-sponsored family events: Creates opportunities for employees and their families to bond. Generous Parental Leave: Allows parents to take time off after the birth or adoption of a child. Family Medical Leave: Offers leave for employees to take care of family members' medical needs. 5. Perks and Time-Off Benefits: Company-sponsored outings: Organizes recreational activities for employees. Gratuity: Provides a monetary benefit as a token of appreciation. Provident Fund: Helps employees save for retirement. Generous PTO: Offers more than the industry standard for paid time off. Paid sick days: Allows employees to take paid time off when they are unwell. Paid holidays: Gives employees paid time off for designated holidays. Bereavement Leave: Provides time off for employees to grieve the loss of a loved one. 6. Professional Development Benefits: L&D with FLEX- Enterprise Learning Repository: Provides access to a learning repository for professional development. Mentorship Program: Offers guidance and support from experienced professionals. Job Training: Provides training to enhance job-related skills. Professional Certification Reimbursements: Assists employees in obtaining professional certifications.

Posted 1 month ago

Apply

10.0 - 15.0 years

25 - 30 Lacs

Noida, Gurugram, Delhi / NCR

Work from Office

Role & responsibilities 10+ years of related experience Data Engineers have to manage ETL design requirements, their implementation, and maintenance. They should be able to develop, deploy, test, and manage ETL package solutions. They have to design/develop ETL jobs and design data models as per the analytics requirements. Data Modeling Engineers have to develop, deploy, and maintain data models. They have to do dimensional modeling for data marts. They are expected to integrate and map data from multiple and complex data sources. Their role includes development of models/cubes for online depiction and performance tuning of the same so that user queries are addressed. Data Stewards should be capable of handling data quality interventions for large enterprise projects. They have to understand the data quality issues and have to develop data quality related solutions using the tools (Python, SQL, Dataflux, etc.). They should be familiar with data cataloguing tools, Talend, Informatica, etc. They have to manage the PFMS Data Dictionary and have to do data cleaning and implementation of exception handling for ETL packages. Minimum 10 years of experience in Data Warehousing projects. Must have experience of handling data quality interventions for large enterprise projects. Should understand data quality issues and should have capability of using a tool (Python, SQL, Dataflux, etc.) for developing data quality related solutions. Should be familiar with data cataloguing tools, Talend, Informatica, etc. Qualification: BE / B. Tech / M. Tech / MCA in Computer/IT / Electronics & Tele-Communication Engineering. Certifications: MCSE: Data Management and Analytics; Microsoft Certified: Azure Data Engineer Associate; CompTIA Data+; Microsoft Certified: Data Analyst Associate; SAS Certified Big Data Professional.
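A small illustration of the kind of Python-based data-quality intervention described above; the column names, rules, and sample data are assumptions for illustration, not the actual PFMS data dictionary.

```python
# Minimal data-quality profiling sketch using pandas.
# Column names and rules are illustrative; real rules would come from the
# project's data dictionary / data-quality specifications.
import pandas as pd


def profile_quality(df: pd.DataFrame) -> dict:
    """Return simple data-quality metrics for a dataframe."""
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "null_counts": df.isna().sum().to_dict(),
    }


def apply_rules(df: pd.DataFrame) -> pd.DataFrame:
    """Flag rows that break example business rules (exception-handling feed)."""
    return df[
        df["beneficiary_id"].isna()
        | (df["amount"] <= 0)
        | ~df["state_code"].astype(str).str.match(r"^[A-Z]{2}$")
    ]


if __name__ == "__main__":
    data = pd.DataFrame(
        {
            "beneficiary_id": [101, None, 103],
            "amount": [2500.0, 1200.0, -10.0],
            "state_code": ["MH", "DL", "xx"],
        }
    )
    print(profile_quality(data))
    print(apply_rules(data))
```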

Posted 1 month ago

Apply

15.0 - 24.0 years

45 - 100 Lacs

Chennai

Remote

What can you expect in a Director of Data Engineering role with TaskUs:

Key Responsibilities:
Manage a geographically diverse team of Managers/Senior Managers of Data Engineering responsible for the ETL that processes, transforms, and derives attributes for all operational data used for reporting and analytics across various transactional systems.
Set and enforce BI standards and architecture, and align BI architecture with enterprise architecture.
Partner with business leaders, technology leaders, and other stakeholders to champion the strategy, design, development, launch, and management of cloud data engineering projects and initiatives that can scale and rapidly meet strategic and business objectives.
Define the cloud data engineering strategy, roadmap, and strategic execution steps.
Collaborate with business leadership and technology partners to leverage data to support and optimize efficiencies.
Define, design, and implement processes for data integration and data management on cloud data platforms, primarily AWS.
Accountable for project prioritization, progress, and workload management across cloud data engineering staff to ensure on-time delivery.
Review and manage the ticketing queue to ensure timely assignment and progression of support tickets.
Work directly with the IT application teams and other IT areas to understand and assess requirements and prioritize a backlog of cloud services needed to enable transformation.
Conduct comprehensive needs assessments to create and implement a modernized serverless data architecture plan that supports business analytics and reporting needs.
Establish IT data and analysis standards, practices, and security measures to ensure effective and consistent information processing and consistent data quality and accessibility.
Help architect cloud data engineering source-to-target auditing and alerting solutions to ensure data quality.
Responsible for the data architecture, ETL, backup, and security of the new AWS-based data lake framework.
Conduct data quality initiatives to rid the system of old, unused, or duplicate data.
Oversee complex data modeling and advanced project metadata development; ensure that business rules are applied consistently across different user interfaces to limit the possibility of inconsistent results.
Manage and architect the migration of the on-premises DW SQL Server star schema to Redshift.
Design specifications and standards for semantic layers and multidimensional models for complex BI projects, across all environments.
Consult on training and usage for the business community by selecting appropriate BI tools, features, and techniques.

Required Qualifications:
A people leader with strong stakeholder management experience.
Strong knowledge of data warehousing concepts with an understanding of traditional and MPP database designs, star and snowflake schemas, and database migration, with 10 years of experience in data modeling.
At least 8 years of hands-on development experience using ETL tools such as Pentaho, AWS Glue, Talend, or Airflow (see the Airflow sketch after this listing).
Knowledge of the architecture, design, and implementation of MPP databases such as Teradata, Snowflake, or Redshift.
5 years of experience developing cloud-based analytics solutions, preferably on AWS.
Knowledge of designing and implementing streaming pipelines using Apache Kafka, Apache Spark, and Fivetran/Segment.
At least 5 years of experience using Python in a cloud-based environment is a plus.
Knowledge of NoSQL databases such as MongoDB is not required but preferred.
Structured thinker and effective communicator.

Education / Certifications:
Bachelor's degree in Computer Science, Information Technology, or related fields (MBA or MS degree is a plus), or 15 to 20 years of relevant experience in lieu of a degree.

Work Location / Work Schedule / Travel: Remote (Global)

How We Partner To Protect You: TaskUs will neither solicit money from you during your application process nor require any form of payment in order to proceed with your application. Kindly ensure that you are always in communication only with authorized recruiters of TaskUs.

DEI: At TaskUs we believe that innovation and higher performance are brought by people from all walks of life. We welcome applicants of different backgrounds, demographics, and circumstances. Inclusive and equitable practices are our responsibility as a business. TaskUs is committed to providing equal access to opportunities. If you need reasonable accommodations in any part of the hiring process, please let us know. We invite you to explore all TaskUs career opportunities and apply through the provided URL https://www.taskus.com/careers/ .
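For orientation on the ETL orchestration experience mentioned above, here is a minimal sketch of a daily pipeline expressed as an Apache Airflow 2.x DAG. The DAG name and task bodies are hypothetical placeholders; a real pipeline would call into Glue jobs, Spark, or warehouse loaders rather than print statements.

```python
# Minimal sketch of a daily ETL DAG in Apache Airflow 2.x; task bodies are
# placeholders standing in for real extract/transform/load logic.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract():
    # Placeholder: pull operational data from a transactional source.
    print("extracting source data")


def transform():
    # Placeholder: derive reporting attributes from the extracted data.
    print("transforming data")


def load():
    # Placeholder: load curated data into the warehouse or data lake.
    print("loading data into the warehouse")


with DAG(
    dag_id="operational_data_daily_etl",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> transform_task >> load_task
```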

Posted 1 month ago

Apply

8.0 years

0 Lacs

Mylapore, Tamil Nadu, India

On-site

Position Summary
Company: Fives India Engineering & Projects Pvt. Ltd.
Job Title: Data Analyst/Senior Data Analyst (BI Developer)
Job Location: Chennai, Tamil Nadu, India
Job Department: IT
Educational Qualification: BE/B.Tech/MCA from a reputed institute in Computer Science or a related field
Work Experience: 4 – 8 years

Job Description:
Fives is a global industrial engineering group based in Paris, France, that designs and supplies machines, process equipment and production lines for the world's largest industrial sectors including aerospace, automotive, steel, aluminium, glass, cement, logistics and energy. Headquartered in Paris, Fives is located in about 25 countries with more than 9,000 employees. Fives is seeking a Data Analyst/Senior Data Analyst for its office located in Chennai, India. The position is an integral part of the Group IT development team working on custom software solutions for the Group IT requirements. We are looking for an analyst specialized in BI development.

Required Skills
Applicants should have skills/experience in the following areas:
4 – 8 years of experience in Power BI development
Good understanding of data visualization concepts
Proficiency in writing DAX expressions and Power Query
Knowledge of SQL and database-related technologies
Source control such as Git
Proficient in building REST APIs to interact with data sources (see the sketch after this listing)
Familiarity with ETL/ELT concepts and tools such as Talend is a plus
Good knowledge of programming, algorithms and data structures
Ability to use Agile collaboration tools such as Jira
Good communication skills, both verbal and written
Willingness to learn new technologies and tools

Position Type: Full-Time/Regular
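As a hedged illustration of the REST API skill listed above, here is a minimal Flask endpoint that a BI dataset could refresh against. The route, payload, and field names are invented placeholders; a real service would query the underlying database instead of returning a static list.

```python
# Minimal sketch of a read-only REST endpoint exposing data to a BI client.
from flask import Flask, jsonify

app = Flask(__name__)

# In a real service this would come from a SQL query; a static list stands in here.
SALES = [
    {"region": "North", "revenue": 125000.0},
    {"region": "South", "revenue": 98000.0},
]


@app.route("/api/sales")
def get_sales():
    """Return sales figures that a Power BI dataset could refresh against."""
    return jsonify(SALES)


if __name__ == "__main__":
    app.run(port=8000)
```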

Posted 1 month ago

Apply

7.0 - 12.0 years

13 - 17 Lacs

Noida

Work from Office

Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Work with large, diverse datasets to deliver predictive and prescriptive analytics
Develop innovative solutions using data modeling, machine learning, and statistical analysis
Design, build, and evaluate predictive and prescriptive models and algorithms (an illustrative sketch follows this listing)
Use tools like SQL, Python, R, and Hadoop for data analysis and interpretation
Solve complex problems using data-driven approaches
Collaborate with cross-functional teams to align data science solutions with business goals
Lead AI/ML project execution to deliver measurable business value
Ensure data governance and maintain reusable platforms and tools
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications (Technical Skills):
Programming Languages: Python, R, SQL
Machine Learning Tools: TensorFlow, PyTorch, scikit-learn
Big Data Technologies: Hadoop, Spark
Visualization Tools: Tableau, Power BI
Cloud Platforms: AWS, Azure, Google Cloud
Data Engineering: Talend, Databricks, Snowflake, Data Factory
Statistical Software: R, Python libraries
Version Control: Git

Preferred Qualifications:
Master's or PhD in Data Science, Computer Science, Statistics, or related field
Certifications in data science or machine learning
7+ years of experience in a senior data science role with enterprise-scale impact
Experience managing AI/ML projects end-to-end
Solid communication skills for technical and non-technical audiences
Demonstrated problem-solving and analytical thinking
Business acumen to align data science with strategic goals
Knowledge of data governance and quality standards

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone of every race, gender, sexuality, age, location and income deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
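As a hedged illustration of the predictive modeling workflow described above, here is a minimal scikit-learn sketch. The synthetic dataset and model choice are placeholders only; a real project would use governed clinical or operational data and a model selected for the business problem.

```python
# Minimal sketch of a predictive modeling workflow with scikit-learn.
# The synthetic data is illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Generate a small synthetic classification problem as a stand-in for real data.
X, y = make_classification(n_samples=1000, n_features=10, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a baseline model and evaluate it on the held-out set.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
probs = model.predict_proba(X_test)[:, 1]
print(f"Test ROC AUC: {roc_auc_score(y_test, probs):.3f}")
```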

Posted 1 month ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Position: Senior SQL Database Developer / Architect
Location: Hinjewadi Phase-1, Pune (WFO)
Experience: 7+ years
Shift: 10:30 AM to 7:30 PM
Working Days: Monday to Friday
Notice Period: Immediate to 15 days

Job Description:
Futurism Tech is seeking an experienced SQL Database Developer / Architect with a minimum of 8 years of experience in designing, developing, and architecting complex database systems. The ideal candidate will have a strong foundation in SQL development, data modeling, performance tuning, and database architecture. This role involves both hands-on coding and high-level architectural planning to support scalable, secure, and efficient data solutions.

Key Responsibilities:
Design, develop, and maintain scalable and high-performance SQL databases.
Define and implement database architecture standards, best practices, and design patterns.
Build robust data models (logical and physical) for transactional and analytical systems.
Optimize existing SQL queries, indexing strategies, and schema design to ensure performance and scalability (an indexing sketch follows this listing).
Collaborate with software developers, business analysts, and DevOps teams to implement end-to-end data solutions.
Ensure data integrity, consistency, and availability across environments.
Lead database design reviews and provide technical guidance on data storage and access strategies.
Manage the database lifecycle including schema changes, upgrades, backups, and recovery strategies.
Evaluate new technologies and tools for improving database performance and architecture.

Required Qualifications:
8+ years of experience in SQL database development and architecture.
Deep expertise in SQL Server (or other RDBMS like Oracle, MySQL, PostgreSQL).
Strong knowledge of database design principles, normalization, and performance tuning.
Proven experience in designing scalable and secure database architectures.
Proficient in writing complex stored procedures, views, triggers, and functions.
Experience with ETL tools (e.g., SSIS, Informatica, Talend) and data integration strategies.
Understanding of high availability, disaster recovery, and replication strategies.
Familiarity with DevOps tools and CI/CD practices for database deployments.
Excellent problem-solving and system design skills.

Qualifications: Bachelor's degree in Computer Science or Bachelor of Engineering/Technology (BE/BTech), or equivalent experience.

If you are interested, share your updated resume at sanyogitas@futurismtechnologies.com or connect on +91 (20) 67120700 Extn 201 / 9226554403
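To illustrate the indexing and query tuning work mentioned above, here is a self-contained sketch that measures a lookup before and after adding an index. It uses SQLite from the Python standard library purely as a stand-in for SQL Server; the table and column names are hypothetical.

```python
# Self-contained sketch showing the effect of an index on a lookup query,
# using SQLite as a stand-in for SQL Server or another RDBMS.
import sqlite3
import time

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute(
    "CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, amount REAL)"
)
cur.executemany(
    "INSERT INTO orders (customer_id, amount) VALUES (?, ?)",
    [(i % 5000, float(i % 97)) for i in range(200_000)],
)
conn.commit()


def timed_lookup() -> float:
    """Time an aggregate lookup for a single customer."""
    start = time.perf_counter()
    cur.execute(
        "SELECT COUNT(*), SUM(amount) FROM orders WHERE customer_id = ?", (1234,)
    )
    cur.fetchone()
    return time.perf_counter() - start


before = timed_lookup()
cur.execute("CREATE INDEX idx_orders_customer ON orders (customer_id)")
after = timed_lookup()
print(f"lookup without index: {before:.6f}s, with index: {after:.6f}s")
```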

Posted 1 month ago

Apply

6.0 - 10.0 years

13 - 17 Lacs

Chennai

Work from Office

Capgemini Invent
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your role
You act as a contact person for our customers and advise them on data-driven projects.
You are responsible for architecture topics and solution scenarios in the areas of Cloud Data Analytics Platform, Data Engineering, Analytics and Reporting.
Experience in Cloud and Big Data architecture.
Responsibility for designing viable architectures based on Microsoft Azure, AWS, Snowflake, Google (or similar) and implementing analytics.
Experience in DevOps, Infrastructure as Code, DataOps, MLOps.
Experience in business development (as well as your support in the proposal process).
Data warehousing, data modelling and data integration for enterprise data environments.
Experience in designing large-scale ETL solutions integrating multiple, heterogeneous systems (a minimal ETL sketch follows this listing).
Experience in data analysis, modelling (logical and physical data models) and design specific to a data warehouse / Business Intelligence environment (normalized and multi-dimensional modelling).
Experience with ETL tools, primarily Talend and/or other data integration tools (open source or proprietary); extensive experience with SQL and SQL scripting (PL/SQL and SQL query tuning and optimization) for relational databases such as PostgreSQL, Oracle, Microsoft SQL Server and MySQL, and with NoSQL databases like MongoDB and/or other document-based databases.
Must be detail-oriented, highly motivated and able to work independently with minimal direction.
Excellent written, oral and interpersonal communication skills with the ability to communicate design solutions to both technical and non-technical audiences.
Ideally: Experience in agile methods such as SAFe, Scrum, etc.
Ideally: Experience in programming languages like Python, JavaScript, Java/Scala etc.

Your Profile
Provides data services for enterprise information strategy solutions; works with business solutions leaders and teams to collect and translate information requirements into data to develop data-centric solutions.
Design and develop modern enterprise data-centric solutions (e.g. DWH, Data Lake, Data Lakehouse).
Responsible for designing data governance solutions.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
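As a hedged illustration of the extract-transform-load pattern this role describes, here is a minimal pandas/SQLite sketch. The table and column names are invented placeholders; a real implementation would typically use Talend or another integrator against PostgreSQL, Oracle, or SQL Server sources.

```python
# Minimal ETL sketch illustrating the extract-transform-load pattern;
# SQLite in-memory databases stand in for real source and target systems.
import sqlite3

import pandas as pd

source = sqlite3.connect(":memory:")
target = sqlite3.connect(":memory:")

# Extract: a small illustrative source table (names are placeholders).
pd.DataFrame(
    {"order_id": [1, 2, 3], "country": ["FR", "fr", "DE"], "amount": [10.0, 20.0, None]}
).to_sql("raw_orders", source, index=False)

# Transform: normalize country codes and drop rows with missing amounts.
df = pd.read_sql("SELECT * FROM raw_orders", source)
df["country"] = df["country"].str.upper()
df = df.dropna(subset=["amount"])

# Load: write the cleaned data to a reporting table in the target database.
df.to_sql("fact_orders", target, index=False, if_exists="replace")
print(pd.read_sql("SELECT * FROM fact_orders", target))
```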

Posted 1 month ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies