
591 Talend Jobs - Page 19


3.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

Role Description

Key Responsibilities:
- Develop and Maintain Web Applications: Build dynamic, user-centric web applications using React, React Hooks, and modern web technologies like HTML5 and CSS3. Ensure that the application is scalable, maintainable, and easy to navigate for end users.
- Collaborate with Cross-Functional Teams: Work closely with designers and product teams to bring UI/UX designs to life, ensuring the design vision is executed effectively using HTML and CSS. Ensure the application is responsive, performing optimally across all devices and browsers.
- State Management: Utilize Redux to manage and streamline complex application states, ensuring seamless data flow and smooth user interactions.
- Component Development: Develop reusable, modular, and maintainable React components using React Hooks and CSS/SCSS to style components effectively. Build component libraries and implement best practices to ensure code maintainability and reusability.

Role Proficiency
This role requires proficiency in data pipeline development, including coding and testing data pipelines for ingesting, wrangling, transforming, and joining data from various sources. Must be skilled in ETL tools such as Informatica, Glue, Databricks, and DataProc, with coding expertise in Python, PySpark, and SQL. Works independently and has a deep understanding of data warehousing solutions including Snowflake, BigQuery, Lakehouse, and Delta Lake. Capable of calculating costs and understanding performance issues related to data solutions.

Outcomes
- Act creatively to develop pipelines and applications by selecting appropriate technical options, optimizing application development, maintenance, and performance using design patterns and reusing proven solutions.
- Interpret requirements to create optimal architecture and design, developing solutions in accordance with specifications.
- Document and communicate milestones/stages for end-to-end delivery.
- Code adhering to best coding standards; debug and test solutions to deliver best-in-class quality.
- Perform performance tuning of code and align it with the appropriate infrastructure to optimize efficiency.
- Validate results with user representatives, integrating the overall solution seamlessly.
- Develop and manage data storage solutions, including relational databases, NoSQL databases, and data lakes.
- Stay updated on the latest trends and best practices in data engineering, cloud technologies, and big data tools.
- Influence and improve customer satisfaction through effective data solutions.

Measures Of Outcomes
- Adherence to engineering processes and standards
- Adherence to schedule/timelines
- Adherence to SLAs where applicable
- # of defects post delivery
- # of non-compliance issues
- Reduction of recurrence of known defects
- Quick turnaround of production bugs
- Completion of applicable technical/domain certifications
- Completion of all mandatory training requirements
- Efficiency improvements in data pipelines (e.g., reduced resource consumption, faster run times)
- Average time to detect, respond to, and resolve pipeline failures or data issues
- Number of data security incidents or compliance breaches

Outputs Expected
- Code Development: Develop data processing code independently, ensuring it meets performance and scalability requirements. Define coding standards, templates, and checklists. Review code for team members and peers.
- Documentation: Create and review templates, checklists, guidelines, and standards for design processes and development. Create and review deliverable documents, including design documents, architecture documents, infrastructure costing, business requirements, source-target mappings, and test cases and results.
- Configuration: Define and govern the configuration management plan. Ensure compliance within the team.
- Testing: Review and create unit test cases, scenarios, and execution plans. Review the test plan and test strategy developed by the testing team. Provide clarifications and support to the testing team as needed.
- Domain Relevance: Advise data engineers on the design and development of features and components, demonstrating a deeper understanding of business needs. Learn about customer domains to identify opportunities for value addition. Complete relevant domain certifications to enhance expertise.
- Project Management: Manage the delivery of modules effectively.
- Defect Management: Perform root cause analysis (RCA) and mitigation of defects. Identify defect trends and take proactive measures to improve quality.
- Estimation: Create and provide input for effort and size estimation for projects.
- Knowledge Management: Consume and contribute to project-related documents, SharePoint libraries, and client universities. Review reusable documents created by the team.
- Release Management: Execute and monitor the release process to ensure smooth transitions.
- Design Contribution: Contribute to the creation of high-level design (HLD), low-level design (LLD), and system architecture for applications, business components, and data models.
- Customer Interface: Clarify requirements and provide guidance to the development team. Present design options to customers and conduct product demonstrations.
- Team Management: Set FAST goals and provide constructive feedback. Understand team members' aspirations and provide guidance and opportunities for growth. Ensure team engagement in projects and initiatives.
- Certifications: Obtain relevant domain and technology certifications to stay competitive and informed.

Skill Examples
- Proficiency in SQL, Python, or other programming languages used for data manipulation
- Experience with ETL tools such as Apache Airflow, Talend, Informatica, AWS Glue, Dataproc, and Azure ADF
- Hands-on experience with cloud platforms like AWS, Azure, or Google Cloud, particularly with data-related services (e.g., AWS Glue, BigQuery)
- Ability to conduct tests on data pipelines and evaluate results against data quality and performance specifications
- Experience in performance tuning of data processes
- Expertise in designing and optimizing data warehouses for cost efficiency
- Ability to apply and optimize data models for efficient storage, retrieval, and processing of large datasets
- Capacity to clearly explain and communicate design and development aspects to customers
- Ability to estimate time and resource requirements for developing and debugging features or components

Knowledge Examples
- Knowledge of various ETL services offered by cloud providers, including Apache PySpark, AWS Glue, GCP DataProc/DataFlow, and Azure ADF and ADLF
- Proficiency in SQL for analytics, including windowing functions
- Understanding of data schemas and models relevant to various business contexts
- Familiarity with domain-related data and its implications
- Expertise in data warehousing optimization techniques
- Knowledge of data security concepts and best practices
- Familiarity with design patterns and frameworks in data engineering
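The Role Proficiency section above asks for pipeline coding skills in Python, PySpark, and SQL, including joining data from multiple sources and SQL windowing functions. As a hedged illustration only, here is a minimal PySpark sketch of that kind of ingest-join-window pipeline; the S3 paths and column names are invented for the example.

```python
# Minimal PySpark sketch: ingest two sources, join them, and rank rows
# with a window function. Paths and column names are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("orders_pipeline").getOrCreate()

orders = spark.read.option("header", True).csv("s3://bucket/raw/orders.csv")
customers = spark.read.option("header", True).csv("s3://bucket/raw/customers.csv")

# Wrangle: cast types and drop rows missing the join key.
orders = (orders
          .withColumn("amount", F.col("amount").cast("double"))
          .dropna(subset=["customer_id"]))

# Join, then use a window to keep each customer's three largest orders.
w = Window.partitionBy("customer_id").orderBy(F.desc("amount"))
top_orders = (orders.join(customers, "customer_id", "inner")
                    .withColumn("rank", F.row_number().over(w))
                    .filter(F.col("rank") <= 3))

# Publish as Parquet for downstream consumers.
top_orders.write.mode("overwrite").parquet("s3://bucket/curated/top_orders/")
```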
Additional Comments

Frontend Developer Required Skills and Experience:

React.js Proficiency:
- In-depth knowledge of React.js, JSX, React Hooks, and React Router.
- Experience with state management using Redux or similar libraries.
- Familiarity with React performance optimization techniques, including lazy loading, memoization, and code splitting.
- Experience with tools like react-testing-library and NPM packages (Vite, Yup, Formik).

CSS Expertise:
- Strong proficiency in CSS, including the use of third-party frameworks like Material-UI (MUI) and Tailwind CSS for styling.
- Ability to create responsive, visually appealing layouts with modern CSS practices.

JavaScript/ES6+ Expertise:
- Strong command of modern JavaScript (ES6+), including async/await, destructuring, and class-based components.
- Familiarity with other JavaScript frameworks and libraries such as TypeScript is a bonus.

Version Control:
- Proficient with Git and platforms like GitHub or GitLab, including managing pull requests and version control workflows.

API Integration:
- Experienced in interacting with RESTful APIs.
- Understanding of authentication mechanisms like JWT.

Testing:
- Able to write unit, integration, and end-to-end tests using tools such as react-testing-library.

Basic Qualifications:
- At least 3 years of experience working with JavaScript frameworks, particularly React.js.
- Experience working in cloud environments (AWS, Azure, Google Cloud) is a plus.
- Basic understanding of backend technologies such as Python is advantageous.

Skills: Cloud Services, Backend Systems, CSS

Posted 3 weeks ago

Apply

5.0 - 10.0 years

15 - 22 Lacs

Hyderabad

Work from Office

Source: Naukri

- Data engineering, data management, Snowflake, SQL, data modeling, and cloud-native data architecture
- AWS, Azure, or Google Cloud (Snowflake on cloud platforms)
- ETL tools such as Informatica, Talend, or dbt
- Python or shell scripting

Posted 3 weeks ago

Apply

4.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

Job Title: Lead Data Engineer – C12 / Assistant Vice President (India)

The Role
The Data Engineer is accountable for developing high-quality data products to support the Bank's regulatory requirements and data-driven decision making. A Data Engineer will serve as an example to other team members, work closely with customers, and remove or escalate roadblocks. By applying their knowledge of data architecture standards, data warehousing, data structures, and business intelligence, they will contribute to business outcomes on an agile team.

Responsibilities
- Developing and supporting scalable, extensible, and highly available data solutions
- Delivering on critical business priorities while ensuring alignment with the wider architectural vision
- Identifying and helping address potential risks in the data supply chain
- Following and contributing to technical standards
- Designing and developing analytical data models

Required Qualifications & Work Experience
- First Class Degree in Engineering/Technology (4-year graduate course)
- 8 to 12 years' experience implementing data-intensive solutions using agile methodologies
- Experience of relational databases and using SQL for data querying, transformation, and manipulation
- Experience of modelling data for analytical consumers
- Ability to automate and streamline the build, test, and deployment of data pipelines
- Experience in cloud-native technologies and patterns
- A passion for learning new technologies, and a desire for personal growth, through self-study, formal classes, or on-the-job training
- Excellent communication and problem-solving skills
- An inclination to mentor; an ability to lead and deliver medium-sized components independently

Technical Skills (Must Have)
- ETL: Hands-on experience of building data pipelines. Proficiency in two or more data integration platforms such as Ab Initio, Apache Spark, Talend, and Informatica
- Big Data: Experience of 'big data' platforms such as Hadoop, Hive, or Snowflake for data storage and processing
- Data Warehousing & Database Management: Expertise around data warehousing concepts and relational (Oracle, MSSQL, MySQL) and NoSQL (MongoDB, DynamoDB) database design
- Data Modeling & Design: Good exposure to data modeling techniques; design, optimization, and maintenance of data models and data structures
- Languages: Proficient in one or more programming languages commonly used in data engineering, such as Python, Java, or Scala
- DevOps: Exposure to concepts and enablers - CI/CD platforms, version control, automated quality control management
- Data Governance: A strong grasp of principles and practice, including data quality, security, privacy, and compliance

Technical Skills (Valuable)
- Ab Initio: Experience developing Co>Op graphs; ability to tune for performance. Demonstrable knowledge across the full suite of Ab Initio toolsets, e.g., GDE, Express>IT, Data Profiler, Conduct>IT, Control>Center, Continuous>Flows
- Cloud: Good exposure to public cloud data platforms such as S3, Snowflake, Redshift, Databricks, BigQuery, etc. Demonstrable understanding of underlying architectures and trade-offs
- Data Quality & Controls: Exposure to data validation, cleansing, enrichment, and data controls
- Containerization: Fair understanding of containerization platforms like Docker and Kubernetes
- File Formats: Exposure to working with event/file/table formats such as Avro, Parquet, Protobuf, Iceberg, and Delta
- Others: Experience of using a job scheduler, e.g., Autosys. Exposure to business intelligence tools, e.g., Tableau, Power BI

Certification on any one or more of the above topics would be an advantage.

Job Family Group: Technology
Job Family: Digital Software Engineering
Time Type: Full time

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
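The valuable skills above include working with file formats such as Avro and Parquet. As a small, hedged illustration of why columnar formats matter in such pipelines, here is a minimal pyarrow round-trip; the table contents are invented for the example.

```python
# Write and read a Parquet file with pyarrow; data is hypothetical.
import pyarrow as pa
import pyarrow.parquet as pq

table = pa.table({
    "trade_id": [1, 2, 3],
    "symbol": ["AAPL", "MSFT", "AAPL"],
    "qty": [100, 250, 75],
})

pq.write_table(table, "trades.parquet")      # columnar, compressed storage
roundtrip = pq.read_table("trades.parquet")  # the schema travels with the file
print(roundtrip.schema)
```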

Posted 3 weeks ago

Apply

6.0 - 10.0 years

22 - 25 Lacs

Hyderabad

Hybrid

Source: Naukri

- Design, build, and maintain complex ELT/ETL jobs that deliver business value.
- Extract, transform, and load data from various sources, including databases, APIs, and flat files, using IICS or Python/SQL.
- Translate high-level business requirements into technical specs.
- Conduct unit testing, integration testing, and system testing of data integration solutions to ensure accuracy and quality.
- Ingest data from disparate sources into the data lake and data warehouse.
- Cleanse and enrich data and apply adequate data quality controls.
- Provide technical expertise and guidance to team members on Informatica IICS/IDMC and data engineering best practices to guide the future development of the company's data platform.
- Develop reusable tools to help streamline the delivery of new projects.
- Collaborate closely with other developers and provide mentorship.
- Evaluate and recommend tools, technologies, processes, and reference architectures.
- Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements.
- Participate in code reviews, ensure all solutions are aligned to architectural and requirement specifications, and provide feedback on code quality, design, and performance.
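The role above describes extracting from databases, APIs, and flat files with IICS or Python/SQL. As a hedged sketch of that flow in plain Python (not IICS itself), here is a minimal extract-transform-load script; the endpoint URL and table schema are hypothetical.

```python
# Minimal ETL sketch: extract from a (hypothetical) REST API, transform,
# and load into a SQL table. Illustrates the flow, not the IICS tooling.
import sqlite3
import requests

rows = requests.get("https://api.example.com/v1/invoices").json()  # extract

cleaned = [
    (r["id"], r["customer"].strip().upper(), float(r["amount"]))
    for r in rows
    if r.get("amount") is not None          # basic data-quality filter
]

con = sqlite3.connect("warehouse.db")
con.execute("""CREATE TABLE IF NOT EXISTS invoices
               (id INTEGER PRIMARY KEY, customer TEXT, amount REAL)""")
con.executemany("INSERT OR REPLACE INTO invoices VALUES (?, ?, ?)", cleaned)
con.commit()
con.close()
```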

Posted 3 weeks ago

Apply

3.0 - 7.0 years

10 - 18 Lacs

Pune

Work from Office

Source: Naukri

What Ibexlabs Does
Ibexlabs is an AWS Advanced Tier Consulting Partner with multiple competencies, including Security, DevOps, Healthcare, and Managed Services. Our team of dedicated and highly skilled engineers is passionate about helping customers accelerate their cloud transformation while ensuring security and compliance with industry best practices. As a rapidly growing company, we are seeking talented individuals to join us and contribute to our continued success.

Position Details

Job Purpose
We are seeking a skilled Talend Developer to design, develop, and implement ETL processes using Talend Data Integration tools. The ideal candidate will have hands-on experience working with large datasets, building scalable data pipelines, and ensuring data integrity and quality across complex systems.

Responsibilities and Requirements
- Design, develop, and maintain ETL processes using Talend (Open Studio / Talend Data Fabric).
- Collaborate with data architects, analysts, and business stakeholders to gather and understand data requirements.
- Extract data from a variety of sources (databases, flat files, APIs, cloud platforms) and load it into target systems (data warehouses, Snowflake, etc.).
- Optimize and troubleshoot ETL workflows for performance and reliability.
- Implement data quality checks, transformations, and validations.
- Monitor daily ETL processes and proactively resolve issues.
- Create and maintain technical documentation related to ETL processes and data pipelines.
- Ensure compliance with data governance, security, and privacy policies.
- Support data migration and integration projects from legacy systems to modern platforms.

Skills and Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- 3+ years of hands-on experience with Talend (preferably Talend Data Integration or Talend Big Data).
- Strong knowledge of ETL concepts, data modeling, and data warehousing.
- Proficient in SQL and working with relational databases (e.g., Oracle, SQL Server, MySQL, PostgreSQL).
- Experience with cloud data platforms such as Snowflake, AWS, Azure, or GCP is a plus.
- Familiarity with big data technologies (Hadoop, Spark) is a plus.
- Strong analytical and problem-solving skills.
- Excellent verbal and written communication skills.

Why should you be interested in this opportunity?
- Freedom and opportunity to grow rapidly in your career.
- You will be fully empowered with the tools and knowledge to grow in your career, as well as to help your team members grow.
- A culture of respect, humility, growth mindset, and fun in the team.
- Rewards and recognition for your work and effort.
- Training and career development benefits.
- Life insurance, paid parental leave, and vacation days.
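The responsibilities above include implementing data quality checks and validations. As a hedged illustration of the kind of checks a Talend job would enforce, here is a small pandas sketch; the file name and columns are invented for the example.

```python
# Sketch of data-quality checks of the sort an ETL job would implement,
# expressed in pandas for illustration. File and columns are hypothetical.
import pandas as pd

df = pd.read_csv("customers.csv")

issues = {
    "null_emails": int(df["email"].isna().sum()),
    "duplicate_ids": int(df["customer_id"].duplicated().sum()),
    "negative_balances": int((df["balance"] < 0).sum()),
}

# Fail fast so bad batches never reach the warehouse.
if any(issues.values()):
    raise ValueError(f"Data quality checks failed: {issues}")
```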

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Senior Data Engineer

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, while validating the efficiency of data tools in the big data environment.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
- Experience in designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using data engineering programming languages (e.g., Python, SQL), distributed data frameworks (e.g., Spark), cloud platform services (AWS preferred), relational databases, and DevOps and continuous integration
- AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
- Knowledge of data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
- Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
- Applies the principles of continuous integration and delivery to automate the deployment of code changes across environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals
- Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
- Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field
- Knowledge of CDK
- Experience with the IICS data integration tool
- Job orchestration tools like Tidal, Airflow, or similar
- Knowledge of NoSQL
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
- Databricks Certified Data Engineer Associate
- AWS Certified Data Engineer - Associate

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
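The essential skillsets above call for hands-on Spark Structured Streaming experience. As a hedged illustration only, here is a minimal PySpark sketch of a streaming read-aggregate-write loop; the S3 path and event schema are hypothetical.

```python
# Minimal Spark Structured Streaming sketch: read JSON events from a
# directory source and emit a running aggregate. Paths are hypothetical.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StringType, DoubleType

spark = SparkSession.builder.appName("stream_demo").getOrCreate()

schema = (StructType()
          .add("user_id", StringType())
          .add("amount", DoubleType()))

events = (spark.readStream
               .schema(schema)             # streaming reads need an explicit schema
               .json("s3://bucket/landing/events/"))

totals = events.groupBy("user_id").agg(F.sum("amount").alias("total"))

query = (totals.writeStream
               .outputMode("complete")     # emit the full aggregate each trigger
               .format("console")
               .start())
query.awaitTermination()
```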

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Senior Data Engineer

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, while validating the efficiency of data tools in the big data environment.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
- Experience in designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using data engineering programming languages (e.g., Python, SQL), distributed data frameworks (e.g., Spark), cloud platform services (AWS preferred), relational databases, and DevOps and continuous integration
- AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
- Knowledge of data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
- Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
- Applies the principles of continuous integration and delivery to automate the deployment of code changes across environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals
- Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
- Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field
- Knowledge of CDK
- Experience with the IICS data integration tool
- Job orchestration tools like Tidal, Airflow, or similar
- Knowledge of NoSQL
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
- Databricks Certified Data Engineer Associate
- AWS Certified Data Engineer - Associate

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
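This listing asks for knowledge of AWS services such as Lambda and S3. The following is a hypothetical sketch, not code from the employer: a minimal Lambda handler that copies newly landed S3 objects into a curated prefix. Bucket names and prefixes are invented.

```python
# Hypothetical AWS Lambda handler: triggered by an S3 put event, it copies
# the object into a curated prefix. In practice the trigger would be scoped
# to the raw prefix so the copy does not re-trigger the function.
import boto3

s3 = boto3.client("s3")

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        s3.copy_object(
            Bucket=bucket,
            Key=f"curated/{key}",
            CopySource={"Bucket": bucket, "Key": key},
        )
    return {"copied": len(event["Records"])}
```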

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Kanayannur, Kerala, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Senior Data Engineer

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, while validating the efficiency of data tools in the big data environment.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
- Experience in designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using data engineering programming languages (e.g., Python, SQL), distributed data frameworks (e.g., Spark), cloud platform services (AWS preferred), relational databases, and DevOps and continuous integration
- AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
- Knowledge of data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
- Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
- Applies the principles of continuous integration and delivery to automate the deployment of code changes across environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals
- Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
- Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field
- Knowledge of CDK
- Experience with the IICS data integration tool
- Job orchestration tools like Tidal, Airflow, or similar
- Knowledge of NoSQL
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
- Databricks Certified Data Engineer Associate
- AWS Certified Data Engineer - Associate

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
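The essential skillsets above include dimensional data modelling. As a small illustration under invented table names, here is a star schema (one fact table, two dimensions) created in SQLite; a production model would of course live in the warehouse the posting describes.

```python
# Sketch of a dimensional (star-schema) model: one fact, two dimensions.
# Table and column names are hypothetical.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date     (date_key INTEGER PRIMARY KEY, full_date TEXT, month TEXT);
CREATE TABLE dim_customer (customer_key INTEGER PRIMARY KEY, name TEXT, segment TEXT);
CREATE TABLE fact_sales (
    sale_id      INTEGER PRIMARY KEY,
    date_key     INTEGER REFERENCES dim_date(date_key),
    customer_key INTEGER REFERENCES dim_customer(customer_key),
    amount       REAL
);
""")
# Analytical queries then join the fact to its dimensions,
# e.g. total sales by customer segment per month.
```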

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Trivandrum, Kerala, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Senior Data Engineer

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, while validating the efficiency of data tools in the big data environment.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
- Experience in designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using data engineering programming languages (e.g., Python, SQL), distributed data frameworks (e.g., Spark), cloud platform services (AWS preferred), relational databases, and DevOps and continuous integration
- AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
- Knowledge of data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
- Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
- Applies the principles of continuous integration and delivery to automate the deployment of code changes across environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals
- Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
- Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field
- Knowledge of CDK
- Experience with the IICS data integration tool
- Job orchestration tools like Tidal, Airflow, or similar
- Knowledge of NoSQL
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
- Databricks Certified Data Engineer Associate
- AWS Certified Data Engineer - Associate

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
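Among the desired skillsets is experience with job orchestration tools like Tidal or Airflow. The sketch below is a minimal, hypothetical Airflow 2.x DAG with two dependent tasks; the DAG id and callables are invented for illustration.

```python
# Hypothetical Airflow DAG: a daily extract task followed by a load task.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pulling source data...")   # placeholder for real extract logic

def load():
    print("loading warehouse...")     # placeholder for real load logic

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)
    extract_task >> load_task         # load runs only after extract succeeds
```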

Posted 3 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra, India

On-site

Source: LinkedIn

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Job Description for Senior Data Engineer

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, while validating the efficiency of data tools in the big data environment.
- Lead the evaluation, implementation, and deployment of emerging tools and processes to improve productivity.
- Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
- Partner with Business Analysts and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Mentor and coach junior Data Engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
- 5-9 years of experience in software development, data science, data engineering, ETL, and analytics reporting development
- Experience in building and maintaining data and system integrations using dimensional data modelling and optimized ETL pipelines
- Experience in designing and developing ETL pipelines using tools like IICS, DataStage, Ab Initio, Talend, etc.
- Proven track record of designing and implementing complex data solutions
- Demonstrated understanding and experience using data engineering programming languages (e.g., Python, SQL), distributed data frameworks (e.g., Spark), cloud platform services (AWS preferred), relational databases, and DevOps and continuous integration
- AWS knowledge of services like Lambda, DMS, Step Functions, S3, EventBridge, CloudWatch, Aurora RDS, or related AWS ETL services
- Knowledge of data lakes, data warehouses, and Databricks/Delta Lakehouse architecture
- Code management platforms like GitHub, GitLab, etc.
- Understanding of database architecture, data modelling concepts, and administration
- Hands-on experience with Spark Structured Streaming for building real-time ETL pipelines
- Applies the principles of continuous integration and delivery to automate the deployment of code changes across environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
- Proficient in programming languages (e.g., SQL, Python, PySpark) to design, develop, maintain, and optimize data architectures/pipelines that fit business goals
- Strong organizational skills, with the ability to work on multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
- Strong problem-solving and troubleshooting skills
- Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
- Master's degree in Engineering, specialized in Computer Science, Data Science, or a related field
- Knowledge of CDK
- Experience with the IICS data integration tool
- Job orchestration tools like Tidal, Airflow, or similar
- Knowledge of NoSQL
- Proficiency in leveraging the Databricks Unity Catalog for effective data governance and implementing robust access control mechanisms is highly advantageous
- Databricks Certified Data Engineer Associate
- AWS Certified Data Engineer - Associate

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
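The posting emphasizes automation of resilient test cases as part of continuous integration. As a hedged example, here is a small pytest-style unit test for a hypothetical transform function; the function and its rules are invented.

```python
# Sketch of an automated unit test for a data transform, runnable with pytest.
def normalize_amount(raw: str) -> float:
    """Strip currency symbols and thousands separators, then parse to float."""
    return float(raw.replace("$", "").replace(",", ""))

def test_normalize_amount_handles_symbols_and_commas():
    assert normalize_amount("$1,234.50") == 1234.50

def test_normalize_amount_plain_number():
    assert normalize_amount("42") == 42.0
```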

Posted 3 weeks ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Source: LinkedIn

About Us
At Particleblack, we drive innovation through intelligent experimentation with Artificial Intelligence. Our multidisciplinary team, comprising solution architects, data scientists, engineers, product managers, and designers, collaborates with domain experts to deliver cutting-edge R&D solutions tailored to your business. Our ecosystem empowers rapid execution with plug-and-play tools, enabling scalable, AI-powered strategies that fast-track your digital transformation. With a focus on automation and seamless integration, we help you stay ahead, letting you focus on your core while we accelerate your growth.

Responsibilities & Qualifications
- Develop data migration strategies and plans.
- Perform ETL operations and data transformation.
- Work with cross-functional teams to ensure data consistency and integrity.
- Identify and resolve data quality issues.
- Document migration processes and best practices.

Required Skills:
- Expertise in SQL and ETL tools (Informatica, Talend, SSIS, etc.).
- Experience in handling large-scale data migrations.
- Familiarity with cloud data platforms (AWS, Azure, GCP).
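A core task in this migration role is ensuring data consistency between the legacy source and the new target. As a hedged illustration, here is a minimal reconciliation sketch comparing row counts and a summed column; the database files, table, and column are hypothetical.

```python
# Migration-validation sketch: reconcile row count and an amount checksum
# between source and target copies of a table. Names are hypothetical.
import sqlite3

def table_stats(conn, table):
    count, total = conn.execute(
        f"SELECT COUNT(*), COALESCE(SUM(amount), 0) FROM {table}"
    ).fetchone()
    return count, total

src = sqlite3.connect("legacy.db")
tgt = sqlite3.connect("migrated.db")

src_stats = table_stats(src, "orders")
tgt_stats = table_stats(tgt, "orders")

# Any mismatch means rows were dropped, duplicated, or altered in flight.
assert src_stats == tgt_stats, f"Mismatch: source={src_stats} target={tgt_stats}"
print("orders reconciled:", src_stats)
```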

Posted 3 weeks ago

Apply

8.0 - 10.0 years

13 - 17 Lacs

Pune

Work from Office

Source: Naukri

Role Purpose
The purpose of the role is to create exceptional integration architectural solution design and thought leadership and enable delivery teams to provide exceptional client engagement and satisfaction.

Do

1. Define integration architecture for new deals/major change requests in existing deals
a. Create an enterprise-wide integration architecture that ensures systems are seamlessly integrated while being scalable, reliable, and manageable.
b. Provide solutioning for digital integration for RFPs received from clients and ensure overall design assurance:
   i. Analyse applications, exchange points, data formats, connectivity requirements, technology environment, enterprise specifics, and client requirements to set an integration solution design framework/architecture
   ii. Provide technical leadership to the design, development, and implementation of integration solutions through thoughtful use of modern technology
   iii. Define and understand current-state integration solutions and identify improvements, options, and tradeoffs to define target-state solutions
   iv. Clearly articulate, document, and use integration patterns, best practices, and processes
   v. Evaluate and recommend products and solutions to integrate with the overall technology ecosystem
   vi. Work closely with various IT groups to transition tasks, ensure performance, and manage issues through to resolution
   vii. Document integration architecture covering logical, deployment, and data views, describing all the artefacts in detail
   viii. Validate the integration solution/prototype from a technology, cost-structure, and customer-differentiation point of view
   ix. Identify problem areas, perform root cause analysis of integration architectural design and solutions, and provide relevant solutions
   x. Collaborate with sales, program/project, and consulting teams to reconcile solutions to architecture
   xi. Track industry integration trends and relate these to planning current and future IT needs
c. Provide technical and strategic input during the project planning phase in the form of technical architectural designs and recommendations.
d. Collaborate with all relevant parties to review the objectives and constraints of solutions and determine conformance with the Enterprise Architecture.
e. Identify implementation risks and potential impacts.

2. Enable delivery teams by providing optimal delivery solutions/frameworks
a. Build and maintain relationships with technical leaders, product owners, peer architects, and other stakeholders to become a trusted advisor
b. Develop and establish relevant integration metrics (KPI/SLA) to drive results
c. Identify risks related to integration and prepare a risk mitigation plan
d. Ensure quality assurance of the integration architecture or design decisions and provide technical mitigation support to the delivery teams
e. Lead the development and maintenance of the integration framework and related artefacts
f. Develop trust and build effective working relationships through respectful, collaborative engagement across individual product teams
g. Ensure integration architecture principles, patterns, and standards are consistently applied to all projects
h. Ensure optimal client engagement:
   i. Support the pre-sales team while presenting the entire solution design and its principles to the client
   ii. Coordinate with the client teams to ensure all requirements are met and create an effective integration solution
   iii. Demonstrate thought leadership with strong technical capability in front of the client to win confidence and act as a trusted advisor

3. Competency building and branding
a. Ensure completion of necessary trainings and certifications on integration middleware
b. Develop Proof of Concepts (POCs), case studies, demos, etc. for new growth areas and solve new customer problems based on market and customer research
c. Develop and present a point of view of Wipro on digital integration by writing white papers, blogs, etc.
d. Help attain market recognition through analyst rankings, client testimonials, and partner credits
e. Be the voice of Wipro's thought leadership by speaking in forums (internal and external)
f. Mentor developers, designers, and junior architects in the project for their further career development and enhancement
g. Contribute to the integration practice by conducting selection interviews, etc.

4. Team management
a. Resourcing
   i. Anticipate new talent requirements as per market/industry trends or client requirements
   ii. Support hiring adequate and right resources for the team through conducting interviews
b. Talent management
   i. Ensure adequate onboarding and training for team members to enhance capability and effectiveness
c. Performance management
   i. Provide inputs to the project manager in setting appraisal objectives for the team, conduct timely performance reviews, and provide constructive feedback to own direct reports (if present)

Deliver
No. | Performance Parameter | Measure
1 | Support sales team to create wins | % of proposals with Quality Index 7, timely support of proposals, identifying opportunities/leads to sell services within/outside the account (lead generation), no. of proposals led
2 | Delivery responsibility in projects/programs and accounts | (a) Solution acceptance of integration architecture (from client and/or internal Wipro architecture leadership), and (b) effective implementation of the integration approach/solution component by way of sufficient integration design, methods, guidelines, and technical know-how of the team
3 | Delivery support | CSAT; delivery as per cost, quality, and timelines; identify and develop reusable components; recommend tools for reuse and automation for improved productivity and reduced cycle times
4 | Capability development | % trainings and certifications completed, increase in ACE certifications, thought leadership content developed (white papers, Wipro PoVs)

Mandatory Skills: Talend DI.
Experience: 8-10 years.

Posted 3 weeks ago

Apply

5.0 - 8.0 years

3 - 7 Lacs

Bengaluru

Work from Office

Source: Naukri

Role Purpose
The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the team leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with service agreements
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve an issue, escalate it to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled call-backs to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Deliver
No. | Performance Parameter | Measure
1 | Process | No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2 | Team management | Productivity, efficiency, absenteeism
3 | Capability development | Triages completed, technical test performance

Mandatory Skills: Snowflake.
Experience: 5-8 years.

Posted 3 weeks ago

Apply

0.0 - 5.0 years

1 - 5 Lacs

Bengaluru

Work from Office


Job Title: Data Engineer - DBT (Data Build Tool)
Experience: 0-5 Years
Location: Bengaluru

Job Responsibilities
- Assist in the design and implementation of a Snowflake-based analytics solution (data lake and data warehouse) on AWS
- Requirements definition, source data analysis and profiling, logical and physical design of the data lake and data warehouse, and design of data integration and publication pipelines
- Develop Snowflake deployment and usage best practices
- Help educate the rest of the team on the capabilities and limitations of Snowflake
- Build and maintain data pipelines adhering to suggested enterprise architecture principles and guidelines
- Design, build, test, and maintain data management systems
- Work in sync with internal and external team members such as data architects, data scientists and data analysts to handle technical issues
- Act as technical leader within the team
- Work in an Agile/Lean model
- Deliver quality deliverables on time
- Translate complex functional requirements into technical solutions

EXPERTISE AND QUALIFICATIONS
Essential Skills, Education and Experience
- B.E./B.Tech./MCA or equivalent degree along with 4-7 years of experience in Data Engineering
- Strong experience in DBT concepts such as model building and configurations, incremental load strategies, macros and DBT tests (a generic sketch of the incremental-load idea follows this listing)
- Strong experience in SQL
- Strong experience in AWS
- Creation and maintenance of optimal data pipeline architecture for ingestion and processing of data
- Creation of necessary infrastructure for ETL jobs from a wide range of data sources using Talend, DBT, S3 and Snowflake
- Experience in data storage technologies such as Amazon S3, SQL and NoSQL
- Data modeling technical awareness
- Experience working with stakeholders in different time zones

Good to have
- AWS data services development experience
- Working knowledge of Big Data technologies
- Experience collaborating with data quality and data governance teams
- Exposure to reporting tools like Tableau
- Apache Airflow, Apache Kafka (nice to have)
- Payments domain knowledge: in-depth understanding of CRM, Accounting, etc.
- Regulatory reporting exposure

Other skills
- Good communication skills
- Team player
- Problem solver
- Willing to learn new technologies, share ideas and assist other team members as needed
- Strong analytical and problem-solving skills; ability to define problems, collect data, establish facts, and draw conclusions
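For context, the "incremental load strategy" called out in this listing is the pattern dbt's incremental materializations automate: each run processes only rows newer than a stored high-water mark instead of rebuilding the whole table. Below is a minimal, illustrative Python sketch of that pattern against Snowflake using snowflake-connector-python; the table and column names (orders, orders_raw, updated_at) and the connection parameters are hypothetical placeholders, not part of the listing.

```python
# Illustrative watermark-based incremental load (the idea dbt automates).
# All table/column names and credentials are hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical
    user="etl_user",        # hypothetical
    password="***",
    warehouse="ETL_WH",
    database="ANALYTICS",
    schema="PUBLIC",
)
cur = conn.cursor()

# 1. Read the current high-water mark from the target table.
cur.execute("SELECT COALESCE(MAX(updated_at), '1970-01-01') FROM orders")
watermark = cur.fetchone()[0]

# 2. Copy only source rows newer than the watermark (the incremental part).
cur.execute(
    "INSERT INTO orders SELECT * FROM orders_raw WHERE updated_at > %s",
    (watermark,),
)
conn.commit()
cur.close()
conn.close()
```

A full load, by contrast, would truncate the target and reinsert everything; incremental loads trade that simplicity for much lower run times on large tables.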

Posted 3 weeks ago

Apply

10.0 - 15.0 years

1 - 5 Lacs

Bengaluru

Work from Office


Job Title: SQL, AWS Redshift, PostgreSQL
Experience: 10-15 Years
Location: Bangalore
Skills: SQL, AWS Redshift, PostgreSQL

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

Remote


Job Description
Are you ready to make an impact at DTCC? Do you want to work on innovative projects, collaborate with a dynamic and supportive team, and receive investment in your professional development? At DTCC, we are at the forefront of innovation in the financial markets. We're committed to helping our employees grow and succeed. We believe that you have the skills and drive to make a real impact. We foster a thriving internal community and are committed to creating a workplace that looks like the world that we serve.

Pay And Benefits
- Competitive compensation, including base pay and annual incentive
- Comprehensive health and life insurance and well-being benefits, based on location
- Paid Time Off and Personal/Family Care, and other leaves of absence when needed to support your physical, financial, and emotional well-being
- DTCC offers a flexible/hybrid model of 3 days onsite and 2 days remote (onsite Tuesdays, Wednesdays and a third day unique to each team or employee)

The Impact You Will Have In This Role
The Enterprise Intelligence Lead will be responsible for building data pipelines using deep knowledge of Talend, SQL and data analysis on the bespoke Snowflake data warehouse for Enterprise Intelligence, for managing a growing team of consultants and employees, and for running the development and production support teams for the Enterprise Intelligence team at DTCC.

Your Primary Responsibilities
- Work on and lead engineering and development focused projects from start to finish with minimal supervision
- Provide technical and operational support for our customer base as well as other technical areas within the company that utilize Claw
- Perform risk management functions such as reconciliation of vulnerabilities and security baselines, as well as other risk and audit related objectives
- Ensure incident, problem and change tickets are addressed in a timely fashion, and escalate technical and managerial issues
- Follow DTCC's ITIL process for incident, change and problem resolution
- Manage delivery and production support teams
- Drive delivery independently and autonomously within the team and vendor teams
- Liaise with onshore peers to drive seamless quality of service to stakeholders
- Conduct working sessions with users and SMEs to align reporting and reduce use of offline spreadsheets and potentially stale data sources
- Provide technical leadership for projects
- Work closely with other project managers and scrum masters to create and update project plans
- Work closely with peers to improve workflow processes and communication

Qualifications
- 8+ years of related experience
- Bachelor's degree (preferred) or equivalent experience

Talents Needed For Success
- Minimum of 12 years of related experience
- Minimum of 8 years of experience in managing data warehousing, SQL, Snowflake
- Minimum of 5 years of experience in people management and team leadership
- Ability to manage distributed teams with an employee/vendor mix
- Strong understanding of snowflake schemas and data integration methods and tools
- Strong knowledge of managing data warehouses in a production environment, including all phases of lifecycle management: planning, design, deployment, upkeep and retirement
- Ability to meet deadlines, goals and objectives
- Ability to facilitate educational and working sessions with stakeholders and users
- Self-starter, continually striving to improve the team's service offerings and one's own skill set
- A problem-solving and innovative mindset to meet a wide variety of challenges
- Willingness and ability to learn all aspects of our operating model as well as new tools
- Developed competencies around essential project management, communication (oral and written) and personal effectiveness
- Good SQL skills and good knowledge of relational databases, specifically Snowflake
- Ability to manage agile development cycles within the DTCC SDLC (SDP) methodology
- Good knowledge of the technical components of Claw (i.e. Snowflake, Talend, PowerBI, PowerShell, Autosys)
- Aligns risk and control processes into day-to-day responsibilities to monitor and mitigate risk; escalates appropriately

Actual salary is determined based on the role, location, individual experience, skills, and other considerations. We are an equal opportunity employer and value diversity at our company. We do not discriminate on the basis of race, religion, color, national origin, sex, gender, gender expression, sexual orientation, age, marital status, veteran status, or disability status. We will ensure that individuals with disabilities are provided reasonable accommodation to participate in the job application or interview process, to perform essential job functions, and to receive other benefits and privileges of employment. Please contact us to request accommodation.

About Us
With over 50 years of experience, DTCC is the premier post-trade market infrastructure for the global financial services industry. From 20 locations around the world, DTCC, through its subsidiaries, automates, centralizes, and standardizes the processing of financial transactions, mitigating risk, increasing transparency, enhancing performance and driving efficiency for thousands of broker/dealers, custodian banks and asset managers. Industry owned and governed, the firm innovates purposefully, simplifying the complexities of clearing, settlement, asset servicing, transaction processing, trade reporting and data services across asset classes, bringing enhanced resilience and soundness to existing financial markets while advancing the digital asset ecosystem. In 2024, DTCC's subsidiaries processed securities transactions valued at U.S. $3.7 quadrillion and its depository subsidiary provided custody and asset servicing for securities issues from over 150 countries and territories valued at U.S. $99 trillion. DTCC's Global Trade Repository service, through locally registered, licensed, or approved trade repositories, processes more than 25 billion messages annually. To learn more, please visit us at www.dtcc.com or connect with us on LinkedIn, X, YouTube, Facebook and Instagram.

DTCC proudly supports Flexible Work Arrangements favoring openness and gives people freedom to do their jobs well, by encouraging diverse opinions and emphasizing teamwork. When you join our team, you'll have an opportunity to make meaningful contributions at a company that is recognized as a thought leader in both the financial services and technology industries. A DTCC career is more than a good way to earn a living. It's the chance to make a difference at a company that's truly one of a kind.
About The Team
Enterprise Product & Platform Engineering transforms the way we deliver infrastructure to our business clients. A key construct of EP&PE will be the evolution of the IT Product Manager, who will partner with the Engineering organization, the Business Aligned Service Delivery organization, the DevSecOps organization as well as our operational support teams to ensure that this organization provides high-quality, commercially attractive and timely solutions to support our business strategy.

Posted 3 weeks ago

Apply

3.0 - 6.0 years

9 - 14 Lacs

Mumbai

Work from Office

Naukri logo

Role Overview:
We are looking for a Talend Data Catalog Specialist to drive enterprise data governance initiatives by implementing Talend Data Catalog and integrating it with Apache Atlas for unified metadata management within a Cloudera-based data lakehouse. The role involves establishing metadata lineage, glossary harmonization, and governance policies to enhance trust, discovery, and compliance across the data ecosystem.

Key Responsibilities:
- Set up and configure Talend Data Catalog to ingest and manage metadata from source systems, the data lake (HDFS), Iceberg tables, the Hive metastore, and external data sources
- Develop and maintain business glossaries, data classifications, and metadata models
- Design and implement bi-directional integration between Talend Data Catalog and Apache Atlas to enable metadata synchronization, lineage capture, and policy alignment across the Cloudera stack (see the sketch after this listing)
- Map technical metadata from Hive/Impala to business metadata defined in Talend
- Capture end-to-end lineage of data pipelines (e.g., from ingestion in PySpark to consumption in BI tools) using Talend and Atlas
- Provide impact analysis for schema changes, data transformations, and governance rule enforcement
- Support definition and rollout of enterprise data governance policies (e.g., ownership, stewardship, access control)
- Enable role-based metadata access, tagging, and data sensitivity classification
- Work with data owners, stewards, and architects to ensure data assets are well documented, governed, and discoverable
- Provide training to users on leveraging the catalog for search, understanding, and reuse

Required education: Bachelor's Degree
Preferred education: Master's Degree

Required technical and professional expertise
- 6-12 years in data governance or metadata management, with at least 2-3 years in Talend Data Catalog
- Talend Data Catalog, Apache Atlas, Cloudera CDP, Hive/Impala, Spark, HDFS, SQL
- Business glossary, metadata enrichment, lineage tracking, stewardship workflows
- Hands-on experience in Talend-Atlas integration, through REST APIs, Kafka hooks, or metadata bridges
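As a point of reference for the Talend-Atlas integration mentioned above, Apache Atlas exposes a v2 REST API for entity lookup and lineage retrieval. The sketch below is a minimal, hypothetical illustration of pulling lineage for a Hive table from Atlas so it could be pushed into a catalog; the host, credentials, and qualified table name are placeholders, and the Talend Data Catalog push is stubbed out since its ingestion API varies by version.

```python
# Minimal sketch: read lineage for a Hive table from Apache Atlas's v2 REST API.
# Host, credentials, and qualified name are hypothetical placeholders; the
# Talend Data Catalog push is left as a stub.
import requests

ATLAS = "http://atlas.example.com:21000"   # hypothetical Atlas host
AUTH = ("admin", "admin")                  # placeholder credentials

# 1. Look up the Hive table entity by its unique attribute (qualifiedName).
resp = requests.get(
    f"{ATLAS}/api/atlas/v2/entity/uniqueAttribute/type/hive_table",
    params={"attr:qualifiedName": "default.orders@cluster"},  # placeholder
    auth=AUTH,
)
resp.raise_for_status()
guid = resp.json()["entity"]["guid"]

# 2. Fetch its lineage graph (upstream/downstream entities and edges).
lineage = requests.get(f"{ATLAS}/api/atlas/v2/lineage/{guid}", auth=AUTH).json()

# 3. Hand the lineage off to the catalog side (stubbed).
def push_to_talend_catalog(lineage_graph):
    """Placeholder: Talend Data Catalog ingestion depends on the deployed version."""
    for edge in lineage_graph.get("relations", []):
        print(edge["fromEntityId"], "->", edge["toEntityId"])

push_to_talend_catalog(lineage)
```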

Posted 3 weeks ago

Apply

10.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Area(s) of responsibility: Snowflake Architect (Snowflake)

This contract-to-permanent position is perfect for someone passionate about data engineering, cloud platforms, and working in a mission-driven biopharma environment. You'll help build scalable data ecosystems using Snowflake, AWS/GCP, Talend, and Qlik, driving high-impact analytics and AI/ML solutions.

Requirements
- Bachelor's degree in a technical or scientific field
- 10+ years in data architecture, engineering, or cloud-based platforms
- Hands-on experience with Snowflake, AWS/GCP, Talend, Qlik
- Strong skills in SQL, Python, ETL, data lakes, and warehousing
- Biopharma experience with regulatory knowledge is a must
- Solid understanding of AI/ML integration and data observability
- Excellent communication and stakeholder collaboration skills
- Snowflake, Talend, GCP or AWS certifications
- Experience in Salesforce Health Cloud
- Familiarity with Qlik Sense or similar data visualization tools

Responsibilities
- Design and develop cloud-native data solutions using Snowflake, AWS, and GCP
- Build efficient ETL/ELT pipelines with Talend, Qlik, Python, and more
- Enable advanced AI/ML and MLOps workflows for predictive analytics
- Implement data governance frameworks for quality and compliance (HIPAA, GxP, GDPR)
- Optimize query performance and automate data ops using Python or JavaScript
- Work closely with internal stakeholders and external managed services
- Support regulatory, commercial, and clinical data needs across the enterprise

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


About Persistent
We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor's mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping the market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we've maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details please visit www.persistent.com

About The Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.
What You'll Do
- Manage the customer's priorities of projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show that you have a certain level of understanding of a number of technical skills, attitudes and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, cluster management, Hive, Pig, MapReduce, and Hadoop ecosystem frameworks such as HBase, Talend and NoSQL databases (a minimal PySpark sketch follows this listing)
- Apache Spark or other streaming Big Data processing preferred; Java or Big Data technologies will be a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent - persistent.com/careers
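To make the Hadoop-ecosystem expectations above concrete, here is a minimal PySpark sketch of the kind of batch job such a role involves: reading raw files from HDFS, aggregating, and writing a Hive table. The paths, table names, and column names are hypothetical placeholders, not details from the listing.

```python
# Minimal PySpark batch job: HDFS in, aggregate, Hive table out.
# Paths, table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-rollup")
    .enableHiveSupport()          # lets us write managed Hive tables
    .getOrCreate()
)

# Read raw CSV files landed on HDFS.
orders = spark.read.csv("hdfs:///data/raw/orders/", header=True, inferSchema=True)

# Aggregate: order count and revenue per day.
daily = (
    orders.groupBy("order_date")
    .agg(F.count("*").alias("order_count"),
         F.sum("amount").alias("revenue"))
)

# Persist as a Hive table for downstream consumers (Hive/Impala/BI).
daily.write.mode("overwrite").saveAsTable("analytics.orders_daily")

spark.stop()
```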

Posted 3 weeks ago

Apply

6.0 years

0 Lacs

Pune, Maharashtra, India

On-site


About Persistent
We are an AI-led, platform-driven Digital Engineering and Enterprise Modernization partner, combining deep technical expertise and industry experience to help our clients anticipate what's next. Our offerings and proven solutions create a unique competitive advantage for our clients by giving them the power to see beyond and rise above. We work with many industry-leading organizations across the world, including 12 of the 30 most innovative global companies, 60% of the largest banks in the US and India, and numerous innovators across the healthcare ecosystem. Our disruptor's mindset, commitment to client success, and agility to thrive in the dynamic environment have enabled us to sustain our growth momentum by reporting $1,409.1M revenue in FY25, delivering 18.8% Y-o-Y growth. Our 23,900+ global team members, located in 19 countries, have been instrumental in helping the market leaders transform their industries. We are also pleased to share that Persistent won in four categories at the prestigious 2024 ISG Star of Excellence™ Awards, including the Overall Award based on the voice of the customer. We were included in the Dow Jones Sustainability World Index, setting high standards in sustainability and corporate responsibility. We were awarded for our state-of-the-art learning and development initiatives at the 16th TISS LeapVault CLO Awards. In addition, we were cited as the fastest-growing IT services brand in the 2024 Brand Finance India 100 Report. Throughout our market-leading growth, we've maintained a strong employee satisfaction score of 8.2/10. At Persistent, we embrace diversity to unlock everyone's potential. Our programs empower our workforce by harnessing varied backgrounds for creative, innovative problem-solving. Our inclusive environment fosters belonging, encouraging employees to unleash their full potential. For more details please visit www.persistent.com

About The Position
We are looking for a Big Data Lead who will be responsible for the management of data sets that are too big for traditional database systems to handle. You will create, design, and implement data processing jobs in order to transform the data into a more usable format. You will also ensure that the data is secure and complies with industry standards to protect the company's information.
What You'll Do
- Manage the customer's priorities of projects and requests
- Assess customer needs utilizing a structured requirements process (gathering, analyzing, documenting, and managing changes) to prioritize immediate business needs, advising on options, risks and cost
- Design and implement software products (Big Data related), including data models and visualizations
- Demonstrate participation with the teams you work in
- Deliver good solutions against tight timescales
- Be proactive, suggest new approaches and develop your capabilities
- Share what you are good at while learning from others to improve the team overall
- Show that you have a certain level of understanding of a number of technical skills, attitudes and behaviors
- Deliver great solutions
- Be focused on driving value back into the business

Expertise You'll Bring
- 6 years' experience in designing and developing enterprise application solutions for distributed systems
- Understanding of Big Data Hadoop ecosystem components (Sqoop, Hive, Pig, Flume)
- Additional experience working with Hadoop, HDFS, cluster management, Hive, Pig, MapReduce, and Hadoop ecosystem frameworks such as HBase, Talend and NoSQL databases
- Apache Spark or other streaming Big Data processing preferred; Java or Big Data technologies will be a plus

Benefits
- Competitive salary and benefits package
- Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
- Opportunity to work with cutting-edge technologies
- Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
- Annual health check-ups
- Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents

Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.

Inclusive Environment
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities. If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive. Our company fosters a values-driven and people-centric work environment that enables our employees to:
- Accelerate growth, both professionally and personally
- Impact the world in powerful, positive ways, using the latest technologies
- Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
- Unlock global opportunities to work and learn with the industry's best

Let's unleash your full potential at Persistent - persistent.com/careers

Posted 3 weeks ago

Apply

6.0 - 9.0 years

10 - 20 Lacs

Hyderabad

Work from Office


This requirement is to source profiles with 6-9 years of overall experience, including a minimum of 4 years in data engineering.

Note (added based on observation): look for combinations with Informatica, IICS and Python (if not Informatica, we can submit with Talend), plus PySpark, SQL, Step Functions, Lambda and EMR at a high level of experience.

Location: Hyderabad

Key responsibilities and accountabilities
- Design, build and maintain complex ELT/ETL jobs that deliver business value
- Extract, transform and load data from various sources including databases, APIs, and flat files using IICS or Python/SQL
- Translate high-level business requirements into technical specs
- Conduct unit testing, integration testing, and system testing of data integration solutions to ensure accuracy and quality
- Ingest data from disparate sources into the data lake and data warehouse
- Cleanse and enrich data and apply adequate data quality controls
- Provide technical expertise and guidance to team members on Informatica IICS/IDMC and data engineering best practices to guide the future development of MassMutual's Data Platform
- Develop reusable tools to help streamline the delivery of new projects
- Collaborate closely with other developers and provide mentorship
- Evaluate and recommend tools, technologies, processes and reference architectures
- Work in an Agile development environment, attending daily stand-up meetings and delivering incremental improvements
- Participate in code reviews, ensure all solutions are aligned to architectural and requirement specifications, and provide feedback on code quality, design, and performance

Knowledge, skills and abilities
Please refer to 'Education and Experience'.

Education and experience
- Bachelor's degree in computer science, engineering, or a related field; master's degree preferred
- Data: 5+ years of experience with data analytics and data warehousing; sound knowledge of data warehousing concepts
- SQL: 5+ years of hands-on experience with SQL and query optimization for data pipelines
- ELT/ETL: 5+ years of experience in Informatica and 3+ years of experience in IICS/IDMC
- Migration experience: experience with Informatica on-prem to IICS/IDMC migration
- Cloud: 5+ years of experience working in an AWS cloud environment
- Python: 5+ years of hands-on development experience with Python
- Workflow: 4+ years of experience in orchestration and scheduling tools (e.g. Apache Airflow; a minimal DAG sketch follows this listing)
- Advanced data processing: experience using data processing technologies such as Apache Spark or Kafka
- Troubleshooting: experience with troubleshooting and root cause analysis to determine and remediate potential issues
- Communication: excellent communication, problem-solving, organizational and analytical skills; able to work independently and to provide leadership to small teams of developers
- Reporting: experience with data reporting tools (e.g. MicroStrategy, Tableau, Looker) and data cataloging tools (e.g. Alation)
- Experience in the design and implementation of ETL solutions with effective design and optimized performance, and ETL development with industry-standard recommendations for job recovery, failover, logging and alerting mechanisms

Application Requirements
No special requirements

Support Hours
India GCC, with 2-3 hours of overlap with US (EST) hours
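Since the role calls for orchestration experience with tools like Apache Airflow, the following is a minimal, illustrative DAG showing the shape of a scheduled extract-and-load task. The DAG id, schedule, and task logic are hypothetical placeholders, not part of the listing.

```python
# Minimal illustrative Airflow DAG; dag_id, schedule and task body are placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract_and_load():
    """Placeholder for an IICS/Talend job trigger or a Python extract step."""
    print("extracting from source and loading to the data lake")


with DAG(
    dag_id="daily_ingest",            # hypothetical
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",             # run daily at 02:00
    catchup=False,
) as dag:
    ingest = PythonOperator(
        task_id="extract_and_load",
        python_callable=extract_and_load,
    )
```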

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Department: SE DC DG

A Snapshot of Your Day
We are seeking a highly skilled and committed Manager to lead our data accountability, data security, data regulatory compliance, data access and Artificial Intelligence (AI) governance workstreams. This position requires an initiative-taking leader with a robust background in data governance, data ownership, data security and compliance who, through their team, can drive the development and implementation of comprehensive strategies to support our data roles, secure our data assets and uphold ethical AI practices, maintaining data integrity and building a culture of data accountability.

How You'll Make An Impact
- Data Accountability: Develop and implement policies and procedures to improve data accountability across the organization; develop the Data Roles & Responsibilities Framework and help in its operationalization in Siemens Energy. Lead enablement and training programs for the data roles. Support the build-out of data communities in the business areas and bring them closer to our formally established data roles.
- Data Security: Lead the development and execution of comprehensive data security and data access management strategies, policies and procedures, which are essential for protecting the organization's data assets, mitigating risks, ensuring compliance, and maintaining collaborator trust.
- Data Retention: Lead development of a retention framework and secure disposal methods so that data is retained only as long as necessary and disposed of securely.
- AI Governance: Lead the AI governance team in establishing policies and processes to ensure ethical, responsible, and compliant use of AI.
- Work closely with multi-functional teams across business, data domains, cyber security, legal and compliance, artificial intelligence, application teams, etc.
- Manage and mentor a team of data professionals, providing guidance, training, and support to achieve departmental goals.
- Develop and implement strategic goals for the team, aligned with organizational objectives.
- Partner Engagement: Collaborate with collaborators across the organization to align strategies with business objectives.
- Innovation: Stay abreast of industry trends and emerging technologies and incorporate innovative solutions to enhance data accountability, data security and AI governance.

What You Bring
- Bachelor's degree in computer science, information technology, data science, or a related field; master's degree preferred.
- Minimum of 8 years of experience in data management, data governance, data ownership and data security governance roles in a large-scale enterprise setup, with at least 3 years in a leadership capacity.
- Demonstrable ability to develop and implement data governance frameworks.
- Excellent leadership, communication, and interpersonal skills.
- Strong analytical, problem-solving, and decision-making abilities.
- Ability to work effectively in a fast-paced, dynamic environment and manage multiple priorities.
- Solid understanding of data accountability or data ownership, data security and compliance principles and practices.
- Familiarity with data governance tools like Collibra, Informatica, Ataccama, or Talend.
- Experience with data modeling, data architecture, and database management systems is preferred.

About The Team
Our Corporate and Global Functions are essential in driving the company's pivotal initiatives and ensuring operational excellence across various groups, business areas, and regions. These roles support our vision to become the most valued energy technology company in the world. As part of our team, you contribute to our vision by shaping the global energy transition, partnering with our internal and external collaborators, and conducting business responsibly and in compliance with legal requirements and regulations.

Who is Siemens Energy?
At Siemens Energy, we are more than just an energy technology company. With ~100,000 dedicated employees in more than 90 countries, we develop the energy systems of the future, ensuring that the growing energy demand of the global community is met reliably and sustainably. The technologies created in our research departments and factories drive the energy transition and provide the base for one sixth of the world's electricity generation. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation. Find out how you can make a difference at Siemens Energy: https://www.siemens-energy.com/employeevideo

Our Commitment to Diversity
Lucky for us, we are not all the same. Through diversity we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities. Siemens Energy celebrates character, no matter the ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits
- Opportunities to work with a distributed team
- Opportunities to work on and lead a variety of innovative projects
- Medical benefits
- Time off/paid holidays and parental leave
- Continual learning through the Learn@Siemens-Energy platform

https://jobs.siemens-energy.com/jobs

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Gurugram, Haryana, India

On-site


Department: SE DC DG

A Snapshot of Your Day
We are looking for a highly skilled and experienced manager to oversee our enterprise data quality, Master Data Management and Reference Data Management workstreams. This role will ensure the development and implementation of comprehensive strategies for Siemens Energy's data to be well maintained, accurate, and reliable through effective data quality and data management. The ideal candidate will possess strong leadership skills, a deep understanding of data governance principles, and the ability to drive data quality initiatives across the organization. This position will require close collaboration with various groups to identify, address, and resolve data-related issues while establishing robust data quality and data management frameworks.

How You'll Make An Impact
- Lead and mentor a team of data management professionals, providing guidance and fostering a collaborative environment.
- Develop and implement strategic goals for the team, aligned with organizational objectives.
- Ensure continuous professional development and performance management of team members.
- Offer strategic guidance and assistance to different data domains in Siemens Energy and collaborate effectively to enhance the implementation of the data governance framework.
- Oversee the establishment and enforcement of data quality standards and procedures.
- Support implementation of data quality metrics and reporting mechanisms to monitor and improve data accuracy and consistency across domains.
- Collaborate with multi-functional teams to identify data quality issues and their impact on business processes.
- Drive and support the design and delivery of projects to address data quality issues.
- Provide thought leadership and industry-standard methodologies on the right architecture, tooling and governance model.
- Work on new ideas such as using data quality for artificial intelligence, designing value quantification approaches, data monetization opportunities, etc.
- Develop and manage the organization's master data and reference data strategy, standards and processes to ensure consistency, accuracy, and reliability of master data across all business systems, and support the integration and synchronization of reference data across various platforms.
- Work with the business to standardize and harmonize master data definitions and processes, and identify and document master data and reference data entities.
- Provide thought leadership on the right tools and approaches needed for data management in SE to make us future-proof.

What You Bring
- Bachelor's degree in information technology, data science, business administration, or a related field (master's degree preferred).
- Minimum of 8 years of experience in data management and data quality roles in a large-scale enterprise setup, with at least 3 years in a leadership capacity.
- Proven track record of developing and implementing enterprise data quality and data management strategies and initiatives.
- Excellent leadership, communication, and interpersonal skills.
- Strong analytical, problem-solving, and decision-making abilities.
- Ability to work effectively in a fast-paced, dynamic environment and manage multiple priorities.
- Solid understanding of data quality, master data, and reference data management principles and practices.
- Proficiency with data management tools and technologies, such as SQL, ETL tools, data governance platforms, and MDM solutions like Collibra, Informatica, Ataccama, or Talend.
- Experience with data modeling, data architecture, and database management systems is preferred.

About The Team
Our Corporate and Global Functions are essential in driving the company's pivotal initiatives and ensuring operational excellence across various groups, business areas, and regions. These roles support our vision to become the most valued energy technology company in the world. As part of our team, you contribute to our vision by shaping the global energy transition, partnering with our internal and external collaborators, and conducting business responsibly and in compliance with legal requirements and regulations.

Who is Siemens Energy?
At Siemens Energy, we are more than just an energy technology company. With ~100,000 dedicated employees in more than 90 countries, we develop the energy systems of the future, ensuring that the growing energy demand of the global community is met reliably and sustainably. The technologies created in our research departments and factories drive the energy transition and provide the base for one sixth of the world's electricity generation. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation. Find out how you can make a difference at Siemens Energy: https://www.siemens-energy.com/employeevideo

Our Commitment to Diversity
Lucky for us, we are not all the same. Through diversity we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities. Siemens Energy celebrates character, no matter the ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits
- Opportunities to work with a distributed team
- Opportunities to work on and lead a variety of innovative projects
- Medical benefits
- Time off/paid holidays and parental leave
- Continual learning through the Learn@Siemens-Energy platform

https://jobs.siemens-energy.com/jobs

Posted 3 weeks ago

Apply

30.0 years

0 Lacs

Pune, Maharashtra, India

Remote


Company Description
With 30 years of experience, Stefanini Group is a global company providing a wide range of services including automation, cloud, IoT, and user experience (UX). Our portfolio combines innovative consulting, marketing, mobility, and AI services with traditional solutions like service desk, field service, and BPO. Stefanini Group excels through investments in technology, strong partnerships, global acquisitions, and the hiring of highly trained professionals. We value innovation and new ideas, and recognize the importance of every talent in driving our progress.

Role Description
This is a full-time hybrid role based in Pune, with some work-from-home flexibility, for a Talend Administrator. The Talend Administrator will be responsible for managing Talend environments, performing installations, upgrades, and patches, and ensuring the optimal performance of Talend solutions. Additional responsibilities include monitoring, troubleshooting, and optimizing Talend processes and workflows, collaborating with development teams, and maintaining data quality and integrity.

Key Responsibilities & Skills
- Install and configure Talend Studio 8.0 and apply necessary patches and updates.
- Install JDK 8 and JDK 17 on Talend virtual machines as required.
- Install and configure Talend Remote Engine (RE) application services on Linux servers.
- Perform runtime installation and setup on Talend servers.
- Apply patches on Talend servers and disable automatic updates in Talend Studio.
- Monitor Talend services and jobs, including disk space and system alerts (a minimal monitoring sketch follows this listing).
- Manage the start and shutdown processes of Talend Remote Engine services.
- Perform general troubleshooting and health checks for the Talend platform.
- Provide user access management in Talend Management Console.
- Triage and support Talend-related issues, including:
  - Opening and managing support cases with Talend Support.
  - Resolving Talend Studio connectivity issues (locally and via Citrix).
  - Assisting developers with issues and providing detailed root cause analysis.
- Collaborate with developers and platform teams to maintain high availability and performance of the Talend platform.
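As an illustration of the disk-space and service monitoring duties above, here is a minimal Python sketch a Talend administrator might run on a Linux Remote Engine host. The install path, systemd unit name, and threshold are hypothetical, and a real deployment would typically feed alerts into an existing monitoring stack rather than print them.

```python
# Minimal host-monitoring sketch for a Talend Remote Engine box.
# Path, service name and threshold are hypothetical placeholders.
import shutil
import subprocess

DISK_PATH = "/opt/talend"            # hypothetical install path
DISK_ALERT_PCT = 90                  # alert above 90% usage
SERVICE = "talend-remote-engine"     # hypothetical systemd unit name

# Disk usage check.
usage = shutil.disk_usage(DISK_PATH)
used_pct = usage.used / usage.total * 100
if used_pct > DISK_ALERT_PCT:
    print(f"ALERT: {DISK_PATH} at {used_pct:.1f}% capacity")

# Service liveness check via systemd.
result = subprocess.run(
    ["systemctl", "is-active", "--quiet", SERVICE],
    check=False,
)
if result.returncode != 0:
    print(f"ALERT: service {SERVICE} is not active")
```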

Posted 3 weeks ago

Apply

8.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Department: SE DC DG

A Snapshot of Your Day
We are looking for a highly skilled and experienced manager to oversee our enterprise data quality, Master Data Management and Reference Data Management workstreams. This role will ensure the development and implementation of comprehensive strategies for Siemens Energy's data to be well maintained, accurate, and reliable through effective data quality and data management. The ideal candidate will possess strong leadership skills, a deep understanding of data governance principles, and the ability to drive data quality initiatives across the organization. This position will require close collaboration with various groups to identify, address, and resolve data-related issues while establishing robust data quality and data management frameworks.

How You'll Make An Impact
- Lead and mentor a team of data management professionals, providing guidance and fostering a collaborative environment.
- Develop and implement strategic goals for the team, aligned with organizational objectives.
- Ensure continuous professional development and performance management of team members.
- Offer strategic guidance and assistance to different data domains in Siemens Energy and collaborate effectively to enhance the implementation of the data governance framework.
- Oversee the establishment and enforcement of data quality standards and procedures.
- Support implementation of data quality metrics and reporting mechanisms to monitor and improve data accuracy and consistency across domains.
- Collaborate with multi-functional teams to identify data quality issues and their impact on business processes.
- Drive and support the design and delivery of projects to address data quality issues.
- Provide thought leadership and industry-standard methodologies on the right architecture, tooling and governance model.
- Work on new ideas such as using data quality for artificial intelligence, designing value quantification approaches, data monetization opportunities, etc.
- Develop and manage the organization's master data and reference data strategy, standards and processes to ensure consistency, accuracy, and reliability of master data across all business systems, and support the integration and synchronization of reference data across various platforms.
- Work with the business to standardize and harmonize master data definitions and processes, and identify and document master data and reference data entities.
- Provide thought leadership on the right tools and approaches needed for data management in SE to make us future-proof.

What You Bring
- Bachelor's degree in information technology, data science, business administration, or a related field (master's degree preferred).
- Minimum of 8 years of experience in data management and data quality roles in a large-scale enterprise setup, with at least 3 years in a leadership capacity.
- Proven track record of developing and implementing enterprise data quality and data management strategies and initiatives.
- Excellent leadership, communication, and interpersonal skills.
- Strong analytical, problem-solving, and decision-making abilities.
- Ability to work effectively in a fast-paced, dynamic environment and manage multiple priorities.
- Solid understanding of data quality, master data, and reference data management principles and practices.
- Proficiency with data management tools and technologies, such as SQL, ETL tools, data governance platforms, and MDM solutions like Collibra, Informatica, Ataccama, or Talend.
- Experience with data modeling, data architecture, and database management systems is preferred.

About The Team
Our Corporate and Global Functions are essential in driving the company's pivotal initiatives and ensuring operational excellence across various groups, business areas, and regions. These roles support our vision to become the most valued energy technology company in the world. As part of our team, you contribute to our vision by shaping the global energy transition, partnering with our internal and external collaborators, and conducting business responsibly and in compliance with legal requirements and regulations.

Who is Siemens Energy?
At Siemens Energy, we are more than just an energy technology company. With ~100,000 dedicated employees in more than 90 countries, we develop the energy systems of the future, ensuring that the growing energy demand of the global community is met reliably and sustainably. The technologies created in our research departments and factories drive the energy transition and provide the base for one sixth of the world's electricity generation. Our global team is committed to making sustainable, reliable, and affordable energy a reality by pushing the boundaries of what is possible. We uphold a 150-year legacy of innovation that encourages our search for people who will support our focus on decarbonization, new technologies, and energy transformation. Find out how you can make a difference at Siemens Energy: https://www.siemens-energy.com/employeevideo

Our Commitment to Diversity
Lucky for us, we are not all the same. Through diversity we generate power. We run on inclusion, and our combined creative energy is fueled by over 130 nationalities. Siemens Energy celebrates character, no matter the ethnic background, gender, age, religion, identity, or disability. We energize society, all of society, and we do not discriminate based on our differences.

Rewards/Benefits
- Opportunities to work with a distributed team
- Opportunities to work on and lead a variety of innovative projects
- Medical benefits
- Time off/paid holidays and parental leave
- Continual learning through the Learn@Siemens-Energy platform

https://jobs.siemens-energy.com/jobs

Posted 3 weeks ago

Apply

Exploring Talend Jobs in India

Talend is a popular data integration and management tool used by many organizations in India. As a result, there is a growing demand for professionals with expertise in Talend across various industries. Job seekers looking to explore opportunities in this field can expect a promising job market in India.

Top Hiring Locations in India

  1. Bangalore
  2. Pune
  3. Hyderabad
  4. Mumbai
  5. Delhi

These cities have a high concentration of IT companies and organizations that frequently hire for Talend roles.

Average Salary Range

The average salary range for Talend professionals in India varies based on experience level:
  • Entry-level: INR 4-6 lakhs per annum
  • Mid-level: INR 8-12 lakhs per annum
  • Experienced: INR 15-20 lakhs per annum

Career Path

A typical career progression in the field of Talend may follow this path:
  1. Junior Developer
  2. Developer
  3. Senior Developer
  4. Tech Lead
  5. Architect

As professionals gain experience and expertise, they can move up the ladder to more senior and leadership roles.

Related Skills

In addition to expertise in Talend, professionals in this field are often expected to have knowledge or experience in the following areas:
  • Data Warehousing
  • ETL (Extract, Transform, Load) processes
  • SQL
  • Big Data technologies (e.g., Hadoop, Spark)

Interview Questions

  • What is Talend and how does it differ from traditional ETL tools? (basic)
  • Can you explain the difference between tMap and tJoin components in Talend? (medium; a rough Python analogue of a tMap lookup appears after this list)
  • How do you handle errors in Talend jobs? (medium)
  • What is the purpose of a context variable in Talend? (basic)
  • Explain the difference between incremental and full loading in Talend. (medium)
  • How do you optimize Talend jobs for better performance? (advanced)
  • What are the different deployment options available in Talend? (medium)
  • How do you schedule Talend jobs to run at specific times? (basic)
  • Can you explain the use of tFilterRow component in Talend? (basic)
  • What is metadata in Talend and how is it used? (medium)
  • How do you handle complex transformations in Talend? (advanced)
  • Explain the concept of schema in Talend. (basic)
  • How do you handle duplicate records in Talend? (medium)
  • What is the purpose of the tLogRow component in Talend? (basic)
  • How do you integrate Talend with other systems or applications? (medium)
  • Explain the use of tNormalize component in Talend. (medium)
  • How do you handle null values in Talend transformations? (basic)
  • What is the role of the tRunJob component in Talend? (medium)
  • How do you monitor and troubleshoot Talend jobs? (medium)
  • Can you explain the difference between tMap and tMapLookup components in Talend? (medium)
  • How do you handle changing business requirements in Talend jobs? (advanced)
  • What are the best practices for version controlling Talend jobs? (advanced)
  • How do you handle large volumes of data in Talend? (medium)
  • Explain the purpose of the tAggregateRow component in Talend. (basic)
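
To ground the tMap-versus-tJoin question above: tMap loads its lookup flow into memory keyed by the join column and streams the main flow against it, optionally routing unmatched rows to a reject output. The Python sketch below is a rough analogue of that behavior; the sample data is invented purely for illustration.

```python
# Rough Python analogue of a tMap lookup join. The sample data is invented.
main_flow = [
    {"order_id": 1, "customer_id": "C1", "amount": 120.0},
    {"order_id": 2, "customer_id": "C2", "amount": 75.5},
    {"order_id": 3, "customer_id": "C9", "amount": 10.0},  # no matching customer
]
lookup_flow = [
    {"customer_id": "C1", "name": "Asha"},
    {"customer_id": "C2", "name": "Ravi"},
]

# tMap loads the lookup input into memory, keyed by the join expression.
lookup = {row["customer_id"]: row for row in lookup_flow}

# The main flow then streams through row by row. With a left outer join,
# unmatched rows keep flowing; an inner join would send them to a reject
# output instead, modeled here by a separate list.
joined, rejects = [], []
for row in main_flow:
    match = lookup.get(row["customer_id"])
    if match is not None:
        joined.append({**row, "customer_name": match["name"]})
    else:
        rejects.append(row)

print(joined)   # enriched rows
print(rejects)  # rows tMap's inner-join reject output would capture
```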

Closing Remark

As you explore opportunities in the Talend job market in India, remember to showcase your expertise, skills, and knowledge during the interview process. With preparation and confidence, you can excel in securing a rewarding career in this field. Good luck!
