3.0 - 5.0 years
0 Lacs
Pune, Maharashtra, India
On-site
The HiLabs Story HiLabs is a leading provider of AI-powered solutions to clean dirty data, unlocking its hidden potential for healthcare transformation. HiLabs is committed to transforming the healthcare industry through innovation, collaboration, and a relentless focus on improving patient outcomes. HiLabs Team Multidisciplinary industry leaders Healthcare domain experts AI/ML and data science experts Professionals hailing from the world's best universities, business schools, and engineering institutes including Harvard, Yale, Carnegie Mellon, Duke, Georgia Tech, Indian Institute of Management (IIM), and Indian Institute of Technology (IIT). Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions. Job Title: Data Engineer I/II Job Location: Bangalore, Karnataka, India Job summary: We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for Software Developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment. Responsibilities Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources. Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data. Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems. Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability. Automate repetitive data engineering tasks and optimize data workflows for performance and scalability. Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations. Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability. Desired Profile Bachelor's or Master's degree in Computer Science, Information Technology, or a related field. 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role. Strong experience with ETL tools like Apache Airflow, Talend, or Informatica. Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra). Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development. Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery). Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink. Experience in data warehousing concepts and building data models (e.g., Snowflake, Redshift). Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA). Familiarity with version control systems like Git. HiLabs is an equal opportunity employer (EOE).
No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age; nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results. Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application. HiLabs Total Rewards Competitive Salary, Accelerated Incentive Policies, H1B sponsorship, Comprehensive benefits package that includes ESOPs, financial contribution for your ongoing professional and personal development, medical coverage for you and your loved ones, 401k, PTOs & a collaborative working environment, Smart mentorship, and highly qualified multidisciplinary, incredibly talented professionals from highly renowned and accredited medical schools, business schools, and engineering institutes. CCPA disclosure notice - https://www.hilabs.com/privacy
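For context on the pipeline and validation responsibilities in the listing above, a minimal sketch of such an ETL flow as an Airflow DAG might look like the following. The DAG id, task logic, and schedule are illustrative assumptions rather than HiLabs specifics, and the example assumes Airflow 2.4 or later.

```python
# Minimal Airflow DAG sketching an extract -> validate -> load flow.
# Connection details, table names, and the validation rule are illustrative only.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Stand-in for pulling a day's worth of records from a source system.
    rows = [{"claim_id": 1, "amount": 120.5}]
    context["ti"].xcom_push(key="rows", value=rows)


def validate(**context):
    rows = context["ti"].xcom_pull(key="rows", task_ids="extract")
    # Basic completeness check: fail the run if required fields are missing.
    bad = [r for r in rows if r.get("claim_id") is None or r.get("amount") is None]
    if bad:
        raise ValueError(f"{len(bad)} records failed validation")


def load(**context):
    rows = context["ti"].xcom_pull(key="rows", task_ids="extract")
    # A real pipeline would write to Redshift, S3, or another target here.
    print(f"Loading {len(rows)} validated records")


with DAG(
    dag_id="claims_daily_etl",
    start_date=datetime(2025, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_validate = PythonOperator(task_id="validate", python_callable=validate)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_validate >> t_load
```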
Posted 1 week ago
0.0 - 4.0 years
0 Lacs
Chennai, Tamil Nadu
Remote
Senior ETL Developer (Talend + PostgreSQL) – Immediate Joiner Preferred Experience: 5-8 years Project: US client-based project, Remote or Hybrid We are looking for an experienced and proactive ETL Developer with 5–8 years of hands-on experience in Talend and PostgreSQL, who can contribute individually and guide a team in managing and optimizing data workflows. The ideal candidate will also support and collaborate with peers using a tool referred to as Quilt or Quilt Talend. Key Responsibilities: ETL Development: Design, build, and maintain scalable ETL pipelines using Talend. Data Integration: Seamlessly integrate structured and unstructured data from diverse sources. PostgreSQL Expertise: Strong experience in PostgreSQL for data warehousing, performance tuning, indexing, and large dataset operations. Team Guidance: Act as a technical lead to guide junior developers and ensure best practices in ETL processes. Tool Expertise (Quilt Talend or Quilt Tool): Support team members in using Quilt, a platform used to manage and version data for ML and analytics pipelines. Linux & Scripting: Write automation scripts in Linux for batch processing and monitoring. AWS Cloud Integration: Experience integrating Talend with AWS services such as S3, RDS (PostgreSQL), Glue, or Redshift. Troubleshooting: Proactively identify bottlenecks or issues in ETL jobs and ensure data accuracy and uptime. Collaboration: Work closely with data analysts, scientists, and stakeholders to deliver end-to-end solutions. Must-Have Skills: Strong knowledge of Talend (Open Studio / Data Integration / Big Data Edition). 3+ years of hands-on experience with PostgreSQL. Familiarity with Quilt Data Tool (https://quiltdata.com/) or similar data versioning tools. Solid understanding of cloud ETL environments, especially AWS. Strong communication and leadership skills. Nice-to-Have: Familiarity with Oracle for legacy systems. Knowledge of data governance and security best practices. Experience integrating Talend with APIs or external services. Additional Info: Location: [Chennai, Madurai/Tamil Nadu / Remote / Hybrid] Joining: Immediate joiners preferred Job Type: Full-time / Contract Job Types: Full-time, Contractual / Temporary Pay: ₹700,000.00 - ₹1,200,000.00 per year Benefits: Work from home Schedule: Evening shift Monday to Friday Rotational shift US shift Weekend availability Application Question(s): Are you willing to work hybrid from Chennai or Madurai? Experience: ETL: 4 years (Required) Location: Chennai, Tamil Nadu (Preferred) Shift availability: Night Shift (Required) Overnight Shift (Required) Work Location: In person
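To illustrate the Linux scripting and PostgreSQL monitoring duties in the listing above, a nightly batch reconciliation script might look roughly like the following sketch. The connection string and table names are placeholders, not details of this project.

```python
# Sketch of a batch monitoring script: compare staging vs. target row counts
# in PostgreSQL and exit non-zero so a cron or Talend job can flag the failure.
# The DSN and table names are placeholders.
import sys

import psycopg2


def row_count(cur, table: str) -> int:
    cur.execute(f"SELECT COUNT(*) FROM {table}")
    return cur.fetchone()[0]


def main() -> int:
    conn = psycopg2.connect("host=localhost dbname=dw user=etl_user")
    try:
        with conn.cursor() as cur:
            staged = row_count(cur, "staging.orders")
            loaded = row_count(cur, "warehouse.fact_orders")
    finally:
        conn.close()

    if staged != loaded:
        print(f"Row count mismatch: staging={staged}, target={loaded}")
        return 1
    print(f"Reconciliation OK: {loaded} rows")
    return 0


if __name__ == "__main__":
    sys.exit(main())
```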
Posted 1 week ago
5.0 years
5 - 5 Lacs
Chennai
On-site
Job Information Date Opened 06/09/2025 Job Type Full time Industry Technology Work Experience 5+ years City Chennai State/Province Tamil Nadu Country India Zip/Postal Code 600096 Job Description What you’ll be working on: Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity. Collaborates with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization. Build the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and AWS ‘big data’ technologies. Work with stakeholders including the Executive, Product, Data and Design teams to assist with data-related technical issues and support their data infrastructure needs. Learn something new every day. What we are looking for: Bachelor's or Master’s degree in a technical or business discipline or related experience; Master's Degree preferred. 4+ years of hands-on experience effectively managing data platforms, data tools and/or depth in data management technologies Advanced working SQL knowledge and experience working with relational databases, query authoring (SQL) as well as working familiarity with a variety of databases. Experience building and optimizing ‘big data’ data pipelines, architectures and data sets. Experience performing root cause analysis on internal and external data and processes to answer specific business questions and identify opportunities for improvement. Experience with orchestration tools like Airflow. Experience with any of the ETL tools like Talend, Informatica etc. Experience in Data Warehouse solutions like Snowflake, Redshift. Exposure to data visualization tools (Tableau, Sisense, Looker, Metabase etc.) Knowledge of GitHub and JIRA is a plus. Familiar with data warehouse & data governance Experience developing software code in one or more programming languages (Java, JavaScript, Python, etc) is a plus. Requirements Knowledge/Skills/Abilities/Behaviours: A “build-test-measure-improve” mentality and the drive to motivate and lead teams to achieve impactful deliverables Passion for operational efficiency, quantitative performance metrics and process orientation Working knowledge of project planning methodologies, IT standards and guidelines. Customer passion, business focus and the ability to negotiate, facilitate and build consensus. The ability to promote a team environment across a large set of separate agile teams and stakeholders Experience with or knowledge of Agile Software Development methodologies Benefits Work at SquareShift: We offer a stimulating atmosphere where your efforts will have a significant impact on our company’s success. We are a fun, client-focused, results-driven company that centers on developing high-quality software, not work schedules and dress codes. We are driven by people who have a passion for technology and innovation, and we are committed to continuous improvement. Does this role excite you? Apply via the link below!
Posted 1 week ago
8.0 - 11.0 years
6 - 9 Lacs
Noida
On-site
Snowflake - Senior Technical Lead Full-time Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. The world is how we shape it. Job Description Position: Snowflake - Senior Technical Lead Experience: 8-11 years Location: Noida/ Bangalore Education: B.E./ B.Tech./ MCA Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security Good to have Skills: Snowpark, Data Build Tool, Finance Domain Preferred Skills Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM) Key Responsibilities Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost. Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives. Qualifications BTech/MCA Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
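As a rough sketch of the Streams and Tasks work described in the posting above, a simple change-capture pattern in Snowflake could be set up as below, issued here through the Snowflake Python connector. All database, schema, warehouse, and table names are assumptions for illustration, not objects from this engagement.

```python
# Sketch: create a stream on a raw table and a task that merges pending
# changes into a curated table on a schedule. Object names are illustrative.
import snowflake.connector

ddl_statements = [
    # Stream captures inserts/updates/deletes on the raw table.
    """CREATE OR REPLACE STREAM raw_db.public.orders_stream
       ON TABLE raw_db.public.orders""",
    # Task periodically merges pending stream rows into the curated table.
    """CREATE OR REPLACE TASK curated_db.public.merge_orders
       WAREHOUSE = transform_wh
       SCHEDULE = '15 MINUTE'
       WHEN SYSTEM$STREAM_HAS_DATA('raw_db.public.orders_stream')
       AS
       MERGE INTO curated_db.public.orders AS tgt
       USING raw_db.public.orders_stream AS src
         ON tgt.order_id = src.order_id
       WHEN MATCHED THEN UPDATE SET tgt.amount = src.amount
       WHEN NOT MATCHED THEN INSERT (order_id, amount)
            VALUES (src.order_id, src.amount)""",
    # Tasks are created suspended; resume to start the schedule.
    "ALTER TASK curated_db.public.merge_orders RESUME",
]

conn = snowflake.connector.connect(
    account="my_account", user="etl_user", password="***",
    role="SYSADMIN", warehouse="transform_wh",
)
try:
    cur = conn.cursor()
    for stmt in ddl_statements:
        cur.execute(stmt)
finally:
    conn.close()
```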
Posted 1 week ago
6.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Location: Chennai Experience: 2–6 years Employment Type: Full-time Job Description: We are looking for skilled and passionate Developers with experience in Talend, Qlik Sense, and Snowflake to join our growing team. You will be responsible for developing data integration workflows, building dashboards, and supporting data analytics solutions. Key Responsibilities: Design, develop, and maintain ETL processes using Talend Create interactive dashboards and reports using Qlik Sense Develop and optimize data pipelines and queries in Snowflake Collaborate with cross-functional teams to understand data needs Ensure data quality, performance, and reliability Requirements: Hands-on experience with Talend, Qlik Sense, and Snowflake Strong SQL skills Good understanding of data warehousing and BI concepts Ability to troubleshoot and optimize data processes Excellent communication and teamwork skills Nice to Have: Experience in cloud platforms (AWS, Azure, or GCP) Knowledge of data security best practices 📩 Apply Now: chandralega@fipsar.com 🌐 www.fipsar.com
Posted 1 week ago
6.0 - 10.0 years
3 - 8 Lacs
Noida
Work from Office
Position: Snowflake - Senior Technical Lead Experience: 8-11 years Location: Noida/ Bangalore Education: B.E./ B.Tech./ MCA Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security Good to have Skills: Snowpark, Data Build Tool, Finance Domain Preferred Skills Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM) Key Responsibilities Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost. Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives.
Posted 1 week ago
8.0 years
0 Lacs
Ahmedabad, Gujarat, India
On-site
About Analytix: Businesses of all sizes are faced with a rapidly changing competitive environment. Companies that possess both the ability to successfully navigate obstacles and the agility to react to market conditions are better positioned for long-term success. Analytix Solutions helps your company tackle these types of challenges. We empower business owners to confidently make informed decisions and positively impact profitability. We are a single-source provider of integrated solutions across multiple functional areas and disciplines. Through a combination of cross-disciplinary expertise, technological aptitude, and deep domain experience, we support our clients with efficient systems and processes, reliable data, and industry insights. We are your partner in strategically scaling your business for growth. Website: Small Business Accounting Bookkeeping, Medical Billing, Audio Visual, IT Outsourcing Services (analytix.com) LinkedIn: Analytix Solutions: Overview | LinkedIn Analytix Business Solutions (India) Pvt. Ltd.: My Company | LinkedIn Job Description: Design, build, and maintain scalable, high-performance data pipelines and ETL/ELT processes across diverse database platforms. Architect and optimize data storage solutions to ensure reliability, security, and scalability. Leverage generative AI tools and models to enhance data engineering workflows, drive automation, and improve insight generation. Create and maintain comprehensive documentation for data systems, workflows, and models. Implement data modeling best practices and optimize data retrieval processes for better performance. Qualifications: Bachelor's or Master’s degree in Computer Science, Information Technology, or a related field. 5–8 years of experience in data engineering, designing and managing large-scale data systems. Strong expertise in database technologies, including: SQL Databases: PostgreSQL, MySQL, SQL Server; NoSQL Databases: MongoDB, Cassandra; Data Warehouse/Unified Platforms: Snowflake, Redshift, BigQuery, Microsoft Fabric. Proficiency in Python and SQL, with experience in data processing frameworks (e.g., Pandas, PySpark). Experience with ETL tools (e.g., Apache Airflow, MS Fabric, Informatica, Talend) and data pipeline orchestration platforms. Strong understanding of data architecture, data modeling, and data governance principles. Experience with cloud platforms (preferably Azure) and associated data services.
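As a loose illustration of the pipeline work and the Pandas/PySpark skills mentioned in the posting above, a single transformation step might resemble this pandas sketch; the column names and rules are invented for the example and are not taken from the role.

```python
# Sketch of a transform step: standardize and deduplicate a customer extract
# before loading it to a warehouse table. Columns and rules are illustrative.
import pandas as pd


def transform_customers(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Standardize formats so downstream joins and models behave predictably.
    df["email"] = df["email"].str.strip().str.lower()
    df["signup_date"] = pd.to_datetime(df["signup_date"], errors="coerce")
    # Drop rows that fail basic completeness checks.
    df = df.dropna(subset=["customer_id", "email"])
    # Keep the most recent record per customer (simple dedup by recency).
    df = (df.sort_values("signup_date")
            .drop_duplicates(subset="customer_id", keep="last"))
    return df


if __name__ == "__main__":
    sample = pd.DataFrame({
        "customer_id": [1, 1, 2],
        "email": [" A@X.COM ", "a@x.com", None],
        "signup_date": ["2024-01-01", "2024-06-01", "2024-02-01"],
    })
    print(transform_customers(sample))
```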
Posted 1 week ago
8.0 - 11.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Company Description About Sopra Steria Sopra Steria, a major Tech player in Europe with 50,000 employees in nearly 30 countries, is recognised for its consulting, digital services and solutions. It helps its clients drive their digital transformation and obtain tangible and sustainable benefits. The Group provides end-to-end solutions to make large companies and organisations more competitive by combining in-depth knowledge of a wide range of business sectors and innovative technologies with a collaborative approach. Sopra Steria places people at the heart of everything it does and is committed to putting digital to work for its clients in order to build a positive future for all. In 2024, the Group generated revenues of €5.8 billion. Job Description The world is how we shape it. Position: Snowflake - Senior Technical Lead Experience: 8-11 years Location: Noida/ Bangalore Education: B.E./ B.Tech./ MCA Primary Skills: Snowflake, Snowpipe, SQL, Data Modelling, DV 2.0, Data Quality, AWS, Snowflake Security Good to have Skills: Snowpark, Data Build Tool, Finance Domain Preferred Skills Experience with Snowflake-specific features: Snowpipe, Streams & Tasks, Secure Data Sharing. Experience in data warehousing, with at least 2 years focused on Snowflake. Hands-on expertise in SQL, Snowflake scripting (JavaScript UDFs), and Snowflake administration. Proven experience with ETL/ELT tools (e.g., dbt, Informatica, Talend, Matillion) and orchestration frameworks. Deep knowledge of data modeling techniques (star schema, data vault) and performance tuning. Familiarity with data security, compliance requirements, and governance best practices. Experience in Python, Scala, or Java for Snowpark development. Strong understanding of cloud platforms (AWS, Azure, or GCP) and related services (S3, ADLS, IAM) Key Responsibilities Define data partitioning, clustering, and micro-partition strategies to optimize performance and cost. Lead the implementation of ETL/ELT processes using Snowflake features (Streams, Tasks, Snowpipe). Automate schema migrations, deployments, and pipeline orchestration (e.g., with dbt, Airflow, or Matillion). Monitor query performance and resource utilization; tune warehouses, caching, and clustering. Implement workload isolation (multi-cluster warehouses, resource monitors) for concurrent workloads. Define and enforce role-based access control (RBAC), masking policies, and object tagging. Ensure data encryption, compliance (e.g., GDPR, HIPAA), and audit logging are correctly configured. Establish best practices for dimensional modeling, data vault architecture, and data quality. Create and maintain data dictionaries, lineage documentation, and governance standards. Partner with business analysts and data scientists to understand requirements and deliver analytics-ready datasets. Stay current with Snowflake feature releases (e.g., Snowpark, Native Apps) and propose adoption strategies. Contribute to the long-term data platform roadmap and cloud cost-optimization initiatives. Qualifications BTech/MCA Additional Information At our organization, we are committed to fighting against all forms of discrimination. We foster a work environment that is inclusive and respectful of all differences. All of our positions are open to people with disabilities.
Posted 1 week ago
1.0 - 2.0 years
0 Lacs
Thane, Maharashtra, India
On-site
Job Requirements Role/ Job Title: Analyst-Data Governance Business: Data & Analytics Function/ Department: Data & Analytics Place of Work: Mumbai Job Purpose The Senior Data Analyst (DG) will work within the Data & Analytics Office to implement the data governance framework, with a focus on improvement of data quality, standards, metrics, and processes. Align data management practices with regulatory requirements. Understanding of lineage – how the data is produced, managed, and consumed within the Bank's business processes and systems. Roles & Responsibilities Design and implement data quality rules and monitoring mechanisms. Analyze data quality issues and collaborate with business stakeholders on issue resolution; build recovery models across the enterprise. Knowledge of DG technologies for data quality and metadata management (OvalEdge, Talend, Collibra, etc.). Support the development of centralized metadata repositories (business glossary, technical metadata, etc.), capture business/data quality rules, and design DQ reports and dashboards. Minimum 1 to 2 years of experience in data governance. Key Success Metrics Successful implementation of the DQ framework across business lines.
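For context on the data quality rules mentioned in the role above, such rules typically reduce to a SQL check plus a threshold. The sketch below shows that pattern with invented table and column names; in practice a tool like Talend DQ, OvalEdge, or Collibra would manage the rule catalogue and reporting.

```python
# Sketch: express data quality rules as SQL checks with thresholds and
# report breaches. Table and column names and the runner are illustrative.
DQ_RULES = [
    {
        "name": "kyc_id_completeness",
        "sql": "SELECT COUNT(*) FROM customer WHERE kyc_id IS NULL",
        "max_failures": 0,
    },
    {
        "name": "account_number_uniqueness",
        "sql": """SELECT COUNT(*) FROM (
                    SELECT account_no FROM account
                    GROUP BY account_no HAVING COUNT(*) > 1) d""",
        "max_failures": 0,
    },
]


def run_rules(execute_sql):
    """execute_sql is any callable that runs a query and returns a scalar count."""
    results = []
    for rule in DQ_RULES:
        failures = execute_sql(rule["sql"])
        results.append({
            "rule": rule["name"],
            "failures": failures,
            "passed": failures <= rule["max_failures"],
        })
    return results
```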
Posted 1 week ago
1.0 years
0 Lacs
Bengaluru, Karnataka, India
Remote
As part of the Astellas commitment to delivering value for our patients, our organization is currently undergoing transformation to achieve this critical goal. This is an opportunity to work on digital transformation and make a real impact within a company dedicated to improving lives. DigitalX our new information technology function is spearheading this value driven transformation across Astellas. We are looking for people who excel in embracing change, manage technical challenges and have exceptional communication skills. This position is based in Bengaluru and will require some on-site work. Purpose And Scope As a Junior Data Engineer, you will play a crucial role in assisting in the design, build, and maintenance of our data infrastructure focusing on BI and DWH capabilities. Working with the Senior Data Engineer, your foundational expertise in BI, Databricks, PySpark, SQL, Talend and other related technologies, will be instrumental in driving data-driven decision-making across the organization. You will play a pivotal role in building maintaining and enhancing our systems across the organization. This is a fantastic global opportunity to use your proven agile delivery skills across a diverse range of initiatives, utilize your development skills, and contribute to the continuous improvement/delivery of critical IT solutions. Essential Job Responsibilities Collaborate with FoundationX Engineers to design and maintain scalable data systems. Assist in building robust infrastructure using technologies like PowerBI, Qlik or alternative, Databricks, PySpark, and SQL. Contribute to ensuring system reliability by incorporating accurate business-driving data. Gain experience in BI engineering through hands-on projects. Data Modelling and Integration: Collaborate with cross-functional teams to analyse requirements and create technical designs, data models, and migration strategies. Design, build, and maintain physical databases, dimensional data models, and ETL processes specific to pharmaceutical data. Cloud Expertise: Evaluate and influence the selection of cloud-based technologies such as Azure, AWS, or Google Cloud. Implement data warehousing solutions in a cloud environment, ensuring scalability and security. BI Expertise: Leverage and create PowerBI, Qlik or equivalent technology for data visualization, dashboards, and self-service analytics. Data Pipeline Development: Design, build, and optimize data pipelines using Databricks and PySpark. Ensure data quality, reliability, and scalability. Application Transition: Support the migration of internal applications to Databricks (or equivalent) based solutions. Collaborate with application teams to ensure a seamless transition. Mentorship and Leadership: Lead and mentor junior data engineers. Share best practices, provide technical guidance, and foster a culture of continuous learning. Data Strategy Contribution: Contribute to the organization’s data strategy by identifying opportunities for data-driven insights and improvements. Participate in smaller focused mission teams to deliver value driven solutions aligned to our global and bold move priority initiatives and beyond. Design, develop and implement robust and scalable data analytics using modern technologies. Collaborate with cross functional teams and practises across the organisation including Commercial, Manufacturing, Medical, DataX, GrowthX and support other X (transformation) Hubs and Practices as appropriate, to understand user needs and translate them into technical solutions. 
Provide technical support to internal users, troubleshooting complex issues and restoring system uptime as quickly as possible. Champion continuous improvement initiatives, identifying opportunities to optimise the performance, security, and maintainability of existing data and platform architecture and other technology investments. Participate in the continuous delivery pipeline, adhering to DevOps best practices for version control, automation, and deployment, and ensuring effective management of the FoundationX backlog. Leverage your knowledge of data engineering principles to integrate with existing data pipelines and explore new possibilities for data utilization. Stay up to date on the latest trends and technologies in data engineering and cloud platforms. Qualifications Required Bachelor's degree in Computer Science, Information Technology, or a related field (master's preferred) or equivalent experience 1-3+ years of experience in data engineering with a strong understanding of BI technologies, PySpark and SQL, building data pipelines and optimization. 1-3+ years of experience in data engineering and integration tools (e.g., Databricks, Change Data Capture) 1-3+ years of experience utilizing cloud platforms (AWS, Azure, GCP). A deeper understanding/certification of AWS and Azure is considered a plus. Experience with relational and non-relational databases. Any relevant cloud-based integration certification at foundational level or above. (Any QLIK or BI certification, AWS Certified DevOps Engineer, AWS Certified Developer, any Microsoft Certified Azure qualification, Proficient in RESTful APIs, AWS, CDMP, MDM, DBA, SQL, SAP, TOGAF, API, CISSP, VCP or any relevant certification) Experience in MuleSoft (Anypoint platform, its components, designing and managing API-led connectivity solutions). Experience in AWS (environment, services and tools), developing code in at least one high-level programming language. Experience with continuous integration and continuous delivery (CI/CD) methodologies and tools Experience with Azure services related to computing, networking, storage, and security Understanding of cloud integration patterns and Azure integration services such as Logic Apps, Service Bus, and API Management Preferred Subject Matter Expertise: possess a strong understanding of data architecture/engineering/operations/reporting within the Life Sciences/Pharma industry across Commercial, Manufacturing and Medical domains. Other complex and highly regulated industry experience will be considered across diverse areas like Commercial, Manufacturing and Medical. Data Analysis and Automation Skills: Proficient in identifying, standardizing, and automating critical reporting metrics and modelling tools Analytical Thinking: Demonstrated ability to lead ad hoc analyses, identify performance gaps, and foster a culture of continuous improvement. Technical Proficiency: Strong coding skills in SQL, R, and/or Python, coupled with expertise in machine learning techniques, statistical analysis, and data visualization. Agile Champion: Adherence to DevOps principles and a proven track record with CI/CD pipelines for continuous delivery. Working Environment At Astellas we recognize the importance of work/life balance, and we are proud to offer a hybrid working solution allowing time to connect with colleagues at the office with the flexibility to also work from home. We believe this will optimize the most productive work environment for all employees to succeed and deliver.
Hybrid work from certain locations may be permitted in accordance with Astellas’ Responsible Flexibility Guidelines. Category FoundationX Astellas is committed to equality of opportunity in all aspects of employment. EOE including Disability/Protected Veterans
Posted 1 week ago
125.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Roche, you can show up as yourself, embraced by the unique qualities you bring. Our culture encourages personal expression, open dialogue, and genuine connections, where you are valued and respected for who you are, allowing you to thrive both personally and professionally. This is how we aim to prevent, stop, and cure diseases and ensure that everyone has access to healthcare today and for future generations. Join Roche, where every voice matters. The Position In Roche Informatics, we build on Roche's 125-year history as one of the world's largest biotech companies, globally recognized for providing transformative innovative solutions across major disease areas. We combine human capabilities with cutting-edge technological innovations to do now what our patients need next. Our commitment to our patients' needs motivates us to deliver technology that evolves the practice of medicine. Be part of our inclusive team at Roche Informatics, where we're driven by a shared passion for technological novelties and optimal IT solutions. About The Position We are looking for a Data Engineer who will work closely with multi-disciplinary and multi-cultural teams to build structured, high-quality data solutions. The person may be leading technical squads. These solutions will be leveraged across Enterprise, Pharma, and Diagnostics solutions to help our teams fulfill our mission: to do now what patients need next. In this position, you will need hands-on expertise in ETL pipeline development and data engineering. You should also be able to provide direction and guidance to developers, oversee the development and unit testing, as well as document the developed solution. Building strong customer relationships for ongoing business is also a key aspect of this role. To succeed in this position, you should have experience with Cloud-based Data Solution Architectures, the Software Development Life Cycle (including both Agile and waterfall methodologies), Data Engineering and ETL tools/platforms, and data modeling practices. Your Key Responsibilities Building and optimizing data ETL pipelines to support data analytics Developing and implementing data integrations with other systems and platforms Maintaining documentation for data pipelines and related processes Logical and physical modeling of datasets and applications Making Roche data assets accessible and findable across the organization Explore new ways of building, processing, and analyzing data in order to deliver insights to our business partners Continuously refine data quality with testing, tooling and performance evaluation Work with business and functional stakeholders to understand data requirements and downstream analytics needs. Partner with business to ensure appropriate integration of functions to meet goals as well as identify and define necessary system enhancements to deploy new products and process improvements. Foster a data-driven culture throughout the team and lead data engineering projects that will have an impact throughout the organization. Work with data and analytics experts to strive for greater functionality in our data systems and products; and help to grow our data team with exceptional engineers. Your Qualifications And Experience Education in related fields (Computer Science, Computer Engineering, Mathematical Engineering, Information Systems) or job experience preferably within multiple Data Engineering technologies. 4+ years of experience with ETL development, data engineering and data quality assurance.
Good experience with Snowflake and its features. Hands-on experience in data engineering for cloud data solutions using Snowflake. Experienced working with Cloud Platform Services (AWS/Azure/GCP). Experienced in ETL/ELT technologies like Talend/DBT or other ETL platforms. Experience in preparing and reviewing new data flow patterns. Excellent Python skills Strong RDBMS concepts and SQL development skills Strong focus on data pipeline automation Exposure to quality assurance and data quality activities is an added advantage. DevOps/DataOps experience (especially data operations preferred) Readiness to work with multiple tech domains and streams Passionate about new technologies and experimentation Experience with Immuta and Monte Carlo is a plus What You Get Good and stable working environment with attractive compensation and rewards package (according to local regulations); Annual bonus payment based on performance; Access to various internal and external training platforms (e.g. LinkedIn Learning); Experienced and professional colleagues and a workplace that supports innovation; Multiple Savings Plans with Employer Match; Company’s emphasis on employees’ wellness and work-life balance (e.g. generous vacation days and OneRoche Wellness Days); Workplace flexibility policy; State-of-the-art working environment and facilities; And many more that the Talent Acquisition Partner will be happy to talk about! Who we are A healthier future drives us to innovate. Together, more than 100,000 employees around the world are dedicated to advancing science, ensuring everyone has access to healthcare today and for future generations. Our efforts result in more than 26 million people treated with our medicines and over 30 billion tests performed using our diagnostic products. We empower ourselves to explore new possibilities, foster creativity, and keep our ambitions high, so we can deliver life-changing healthcare solutions that make a global impact. Let's build a healthier future together. Roche is an equal opportunity employer.
Posted 1 week ago
6.0 - 10.0 years
8 - 12 Lacs
Gurugram
Work from Office
About The Role: Role Purpose Data Analyst, Data Modeling, Data Pipeline, ETL Process, Tableau, SQL, Snowflake. Do Strong expertise in data modeling, data warehousing, and ETL processes. - Proficient in SQL and experience with data warehousing tools (e.g., Snowflake, Redshift, BigQuery) and ETL tools (e.g., Talend, Informatica, SSIS). - Demonstrated ability to lead and manage complex projects involving cross-functional teams. - Excellent analytical, problem-solving, and organizational skills. - Strong communication and leadership abilities, with a track record of mentoring and developing team members. - Experience with data visualization tools (e.g., Tableau, Power BI) is a plus. - Preference to candidates with experience in ETL using Python, Airflow or DBT. Build capability to ensure operational excellence and maintain superior customer service levels of the existing account/client. Undertake product trainings to stay current with product features, changes and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Partner with team leaders to brainstorm and identify training themes and learning issues to better serve the client. Update job knowledge by participating in self-learning opportunities and maintaining personal networks. Deliver (No. / Performance Parameter / Measure): 1. Process: No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback. 2. Self-Management: Productivity, efficiency, absenteeism, training hours, no. of technical trainings completed.
Posted 1 week ago
5.0 - 10.0 years
2 - 5 Lacs
Bengaluru
Work from Office
Contract duration: 6 months Experience: 5+ years Location: WFH (should have a good internet connection) Snowflake knowledge (Must have) Autonomous person SQL knowledge (Must have) Data modeling (Must have) Data warehouse concepts and DW design best practices (Must have) SAP knowledge (Good to have) SAP functional knowledge (Good to have) Informatica IDMC (Good to have) Good communication skills, team player, self-motivated, with strong work ethics Flexibility in working hours: 12pm Central time (overlap with US team) Confidence, proactiveness, and the ability to demonstrate alternatives to mitigate tools/expertise gaps (fast learner).
Posted 1 week ago
4.0 - 9.0 years
20 - 25 Lacs
Hyderabad
Work from Office
We are seeking a skilled and experienced Cognos TM1 Developer with a strong background in ETL processes and Python development. The ideal candidate will be responsible for designing, developing, and supporting TM1 solutions, integrating data pipelines, and automating processes using Python. This role requires strong problem-solving skills, business acumen, and the ability to work collaboratively with cross-functional teams. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise 4+ years of hands-on experience with IBM Cognos TM1 / Planning Analytics. Strong knowledge of TI processes, rules, dimensions, cubes, and TM1 Web. Proven experience in building and managing ETL pipelines (preferably with tools like Informatica, Talend, or custom scripts). Proficiency in Python programming for automation, data processing, and system integration. Experience with REST APIs, JSON/XML data formats, and data extraction from external sources. Preferred technical and professional experience Strong SQL knowledge and ability to work with relational databases. Familiarity with Agile methodologies and version control systems (e.g., Git). Excellent analytical, problem-solving, and communication skills
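Relating to the Python automation and REST API integration listed in the posting above, an extract step might resemble the following sketch. The endpoint, fields, and staging file are hypothetical, and the downstream TM1 TurboIntegrator load is only referenced, not shown.

```python
# Sketch: pull data from a REST API and stage it as CSV for a TM1
# TurboIntegrator process to load. URL, auth token, and fields are hypothetical.
import csv

import requests


def extract_actuals(api_url: str, token: str, out_path: str) -> int:
    resp = requests.get(
        api_url,
        headers={"Authorization": f"Bearer {token}"},
        timeout=60,
    )
    resp.raise_for_status()
    records = resp.json()  # assumes the API returns a JSON array of measure records

    with open(out_path, "w", newline="") as fh:
        writer = csv.DictWriter(fh, fieldnames=["entity", "account", "period", "value"])
        writer.writeheader()
        for rec in records:
            writer.writerow({k: rec.get(k) for k in writer.fieldnames})
    return len(records)


if __name__ == "__main__":
    n = extract_actuals("https://example.com/api/actuals", "TOKEN", "actuals_stage.csv")
    print(f"Staged {n} records for the TI load")
```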
Posted 1 week ago
2.0 - 5.0 years
6 - 10 Lacs
Pune
Work from Office
As Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You’ll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you’ll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you’ll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements Preferred technical and professional experience Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or Shell scripting for custom transformations and automation tasks
Posted 1 week ago
2.0 - 5.0 years
6 - 10 Lacs
Pune
Work from Office
As Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You’ll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you’ll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you’ll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements Preferred technical and professional experience Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or Shell scripting for custom transformations and automation tasks
Posted 1 week ago
2.0 - 5.0 years
7 - 11 Lacs
Pune
Work from Office
As a Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You’ll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you’ll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you’ll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviours. Build teams or write programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements. Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or Shell scripting for custom transformations and automation tasks Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Tableau Desktop Specialist, SQL - Strong understanding of SQL for querying databases Good to have - Python; Snowflake, Statistics, ETL experience. Extensive knowledge of creating impactful visualizations using Tableau. Must have thorough understanding of SQL & advanced SQL (Joining & Relationships) Preferred technical and professional experience Must have experience in working with different databases and how to blend & create relationships in Tableau. Must have extensive knowledge of creating custom SQL to pull desired data from databases. Troubleshooting capabilities to debug data controls. Capable of converting business requirements into a workable model. Good communication skills, willingness to learn new technologies, Team Player, Self-Motivated, Positive Attitude.
Posted 1 week ago
10.0 - 15.0 years
5 - 9 Lacs
Mumbai
Work from Office
Role Overview: We are hiring a Talend Data Quality Developer to design and implement robust data quality (DQ) frameworks in a Cloudera-based data lakehouse environment. The role focuses on building rule-driven validation and monitoring processes for migrated data pipelines, ensuring high levels of data trust and regulatory compliance across critical banking domains. Key Responsibilities: Design and implement data quality rules using Talend DQ Studio, tailored to validate customer, account, transaction, and KYC datasets within the Cloudera Lakehouse. Create reusable templates for profiling, validation, standardization, and exception handling. Integrate DQ checks within PySpark-based ingestion and transformation pipelines targeting Apache Iceberg tables. Ensure compatibility with Cloudera components (HDFS, Hive, Iceberg, Ranger, Atlas) and job orchestration frameworks (Airflow/Oozie). Perform initial and ongoing data profiling on source and target systems to detect data anomalies and drive rule definitions. Monitor and report DQ metrics through dashboards and exception reports. Work closely with data governance, architecture, and business teams to align DQ rules with enterprise definitions and regulatory requirements. Support lineage and metadata integration with tools like Apache Atlas or external catalogs. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Experience: 5–10 years in data management, with 3+ years in Talend Data Quality tools. Platforms: Experience in Cloudera Data Platform (CDP), with an understanding of Iceberg, Hive, HDFS, and Spark ecosystems. Languages/Tools: Talend Studio (DQ module), SQL, Python (preferred), Bash scripting. Data Concepts: Strong grasp of data quality dimensions—completeness, consistency, accuracy, timeliness, uniqueness. Banking Exposure: Experience with financial services data (CIF, AML, KYC, product masters) is highly preferred.
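To illustrate the responsibility above of embedding DQ checks in PySpark pipelines over Iceberg tables, a simplified example follows. The catalog, table names, and rule definitions are assumptions for illustration, not the bank's actual data model, and the exceptions table is presumed to already exist.

```python
# Sketch: validate a customer table in a Cloudera/Iceberg lakehouse and
# append failing rows to an exceptions table. Object names are illustrative.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

customers = spark.table("lakehouse.cif.customer")

# Rule 1: mandatory fields must be populated (completeness).
missing_ids = customers.filter(F.col("customer_id").isNull())

# Rule 2: KYC status must come from an approved domain (validity).
valid_statuses = ["VERIFIED", "PENDING", "REJECTED"]
bad_status = customers.filter(~F.col("kyc_status").isin(valid_statuses))

# Tag each failing row with the rule it violated and combine the results.
exceptions = (
    missing_ids.withColumn("dq_rule", F.lit("customer_id_not_null"))
    .unionByName(bad_status.withColumn("dq_rule", F.lit("kyc_status_domain")))
)

# Persist exceptions for dashboards and exception reports; append keeps history.
exceptions.writeTo("lakehouse.dq.customer_exceptions").append()

print(f"Flagged {exceptions.count()} records across DQ rules")
```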
Posted 1 week ago
2.0 - 5.0 years
6 - 10 Lacs
Pune
Work from Office
As Data Engineer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You’ll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you’ll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you’ll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: Implementing and validating predictive models as well as creating and maintain statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques Designing and implementing various enterprise search applications such as Elasticsearch and Splunk for client requirements Work in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviour’s. Build teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modelling results Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Expertise in designing and implementing scalable data warehouse solutions on Snowflake, including schema design, performance tuning, and query optimization. Strong experience in building data ingestion and transformation pipelines using Talend to process structured and unstructured data from various sources. Proficiency in integrating data from cloud platforms into Snowflake using Talend and native Snowflake capabilities. Hands-on experience with dimensional and relational data modelling techniques to support analytics and reporting requirements Preferred technical and professional experience Understanding of optimizing Snowflake workloads, including clustering keys, caching strategies, and query profiling. Ability to implement robust data validation, cleansing, and governance frameworks within ETL processes. Proficiency in SQL and/or Shell scripting for custom transformations and automation tasks
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Website http://www.intraedge.com Job Title: Data Analyst – Data Quality & Cost Allocation Location: Chennai Job Type: Full Time Job Summary: We are seeking a detail-oriented Data Analyst to join our team and drive improvements in data quality, consistency, and reliability. This role involves defining data validation processes, conducting manual data reviews, aligning data from various providers, and supporting infrastructure and cost allocation analysis. The ideal candidate has a strong understanding of data management practices, cost allocation methodologies, and infrastructure inventory or capacity data. Key Responsibilities: Define and implement data validation processes to ensure data integrity across multiple systems and sources. Perform manual data reviews to identify discrepancies, inconsistencies, and outliers. Review and align data sources with data providers, ensuring consistency and standardization. Ensure adherence to data quality standards , including accuracy, completeness, timeliness, and consistency. Conduct gap analyses and recommend scalable solutions to improve data quality and reduce manual intervention. Collaborate with cross-functional teams including IT, Finance, and Operations to identify data issues and opportunities for improvement. Design and maintain reporting dashboards and metrics to track data quality and cost allocation accuracy. Analyze infrastructure inventory and capacity data to support cost allocation decisions and optimization efforts. Document data lineage and quality processes , and contribute to the development of data governance frameworks. Required Qualifications: Bachelor's degree in Data Science, Information Systems, Business, Finance, or a related field. 3+ years of experience in data analysis, data quality, or data governance roles. Solid experience with data validation techniques and data profiling . Working knowledge of cost allocation methodologies and infrastructure/inventory capacity data. Proficient in data tools such as Excel, SQL, Power BI/Tableau , and data quality tools (e.g., Talend, Informatica). Strong analytical, problem-solving, and critical thinking skills. Ability to work independently and manage multiple priorities in a dynamic environment. Excellent communication skills with the ability to liaise between technical and non-technical stakeholders. Thanks & Regards, Divya Dixit Recruitment Lead Show more Show less
Posted 1 week ago
5.0 - 9.0 years
15 - 20 Lacs
Hyderabad
Hybrid
About Us: Our global community of colleagues bring a diverse range of experiences and perspectives to our work. You'll find us working from a corporate office or plugging in from a home desk, listening to our customers and collaborating on solutions. Our products and solutions are vital to businesses of every size, scope and industry. And at the heart of our work youll find our core values: to be data inspired, relentlessly curious and inherently generous. Our values are the constant touchstone of our community; they guide our behavior and anchor our decisions. Designation: Software Engineer II Location: Hyderabad KEY RESPONSIBILITIES Design, build, and deploy new data pipelines within our Big Data Eco-Systems using Streamsets/Talend/Informatica BDM etc. Document new/existing pipelines, Datasets. Design ETL/ELT data pipelines using StreamSets, Informatica or any other ETL processing engine. Familiarity with Data Pipelines, Data Lakes and modern Data Warehousing practices (virtual data warehouse, push down analytics etc.) Expert level programming skills on Python Expert level programming skills on Spark Cloud Based Infrastructure: GCP Experience with one of the ETL Informatica, StreamSets in creation of complex parallel loads, Cluster Batch Execution and dependency creation using Jobs/Topologies/Workflows etc., Experience in SQL and conversion of SQL stored procedures into Informatica/StreamSets, Strong exposure working with web service origins/targets/processors/executors, XML/JSON Sources and Restful APIs. Strong exposure working with relation databases DB2, Oracle & SQL Server including complex SQL constructs and DDL generation. Exposure to Apache Airflow for scheduling jobs Strong knowledge of Big data Architecture (HDFS), Cluster installation, configuration, monitoring, cluster security, cluster resources management, maintenance, and performance tuning Create POCs to enable new workloads and technical capabilities on the Platform. Work with the platform and infrastructure engineers to implement these capabilities in production. Manage workloads and enable workload optimization including managing resource allocation and scheduling across multiple tenants to fulfill SLAs. Participate in planning activities, Data Science and perform activities to increase platform skills KEY Requirements Minimum 6 years of experience in ETL/ELT Technologies, preferably StreamSets/Informatica/Talend etc., Minimum of 6 years hands-on experience with Big Data technologies e.g. Hadoop, Spark, Hive. Minimum 3+ years of experience on Spark Minimum 3 years of experience in Cloud environments, preferably GCP Minimum of 2 years working in a Big Data service delivery (or equivalent) roles focusing on the following disciplines: Any experience with NoSQL and Graph databases Informatica or StreamSets Data integration (ETL/ELT) Exposure to role and attribute based access controls Hands on experience with managing solutions deployed in the Cloud, preferably on GCP Experience working in a Global company, working in a DevOps model is a plus Dun & Bradstreet is an Equal Opportunity Employer and all qualified applicants will receive consideration for employment without regard to race, color, religion, creed, sex, age, national origin, citizenship status, disability status, sexual orientation, gender identity or expression, pregnancy, genetic information, protected military and veteran status, ancestry, marital status, medical condition (cancer and genetic characteristics) or any other characteristic protected by law. 
We are committed to Equal Employment Opportunity and providing reasonable accommodations to qualified candidates and employees. If you are interested in applying for employment with Dun & Bradstreet and need special assistance or an accommodation to use our website or to apply for a position, please send an e-mail with your request to acquisitiont@dnb.com. Determinations on requests for reasonable accommodation are made on a case-by-case basis.
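To make the Airflow scheduling requirement above concrete, here is a minimal sketch of a DAG that submits a nightly Spark batch job. It assumes Airflow 2.x with the apache-airflow-providers-apache-spark package installed; the DAG id, connection id, script path, and schedule are hypothetical placeholders rather than details taken from this posting.

from datetime import datetime

from airflow import DAG
from airflow.providers.apache.spark.operators.spark_submit import SparkSubmitOperator

with DAG(
    dag_id="nightly_customer_load",                  # hypothetical pipeline name
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",                   # run daily at 02:00
    catchup=False,
) as dag:
    load_customers = SparkSubmitOperator(
        task_id="load_customers",
        application="/opt/jobs/load_customers.py",   # hypothetical PySpark script
        conn_id="spark_default",                     # Spark connection configured in Airflow
        conf={"spark.executor.memory": "4g"},
    )

The same pattern extends to multiple tasks with upstream/downstream links, which parallels the dependency creation described in the requirements above.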
Posted 1 week ago
10.0 years
5 - 10 Lacs
Bengaluru
On-site
Location: Bangalore - Karnataka, India - EOIZ Industrial Area
Worker Type Reference: Regular - Permanent
Pay Rate Type: Salary
Career Level: T4(A)
Job ID: R-45392-2025
Description & Requirements
Introduction: A Career at HARMAN
HARMAN Technology Services (HTS)
We're a global, multi-disciplinary team that's putting the innovative power of technology to work and transforming tomorrow. At HARMAN DTS, you solve challenges by creating innovative solutions.
Combine the physical and digital, making technology a more dynamic force to solve challenges and serve humanity's needs
Work at the convergence of cross-channel UX, cloud, insightful data, IoT, and mobility
Empower companies to create new digital business models, enter new markets, and improve customer experiences
About the Role
We are seeking an experienced Azure Data Architect who will develop and implement data engineering projects, including an enterprise data hub, data lakehouse, or big data platform.
What You Will Do
Create data pipelines for more efficient and repeatable data science projects (a minimal pipeline sketch follows this section)
Design and implement data architecture solutions that support business requirements and meet organizational needs
Collaborate with stakeholders to identify data requirements and develop data models and data flow diagrams
Work with cross-functional teams to ensure that data is integrated, transformed, and loaded effectively across different platforms and systems
Develop and implement data governance policies and procedures to ensure that data is managed securely and efficiently
Develop and maintain a deep understanding of data platforms, technologies, and tools, and evaluate new technologies and solutions to improve data management processes
Ensure compliance with regulatory and industry standards for data management and security.
Develop and maintain data models, data warehouses, data lakes, and data marts to support data analysis and reporting.
Ensure data quality, accuracy, and consistency across all data sources.
Knowledge of ETL and data integration tools such as Informatica, Qlik Talend, and Apache NiFi.
Experience with data modeling and design tools such as ERwin, PowerDesigner, or ER/Studio
Knowledge of data governance, data quality, and data security best practices
Experience with cloud computing platforms such as AWS, Azure, or Google Cloud Platform.
Familiarity with programming languages such as Python, Java, or Scala.
Experience with data visualization tools such as Tableau, Power BI, or QlikView.
Understanding of analytics and machine learning concepts and tools.
Knowledge of project management methodologies and tools to manage and deliver complex data projects.
Skilled in using relational database technologies such as MySQL, PostgreSQL, and Oracle, as well as NoSQL databases such as MongoDB and Cassandra.
Strong expertise in cloud-based data services such as AWS S3, AWS Glue, AWS Redshift, and the Iceberg/Parquet file formats
Knowledge of big data technologies such as Hadoop, Spark, Snowflake, Databricks, and Kafka to process and analyze large volumes of data.
Proficient in data integration techniques to combine data from various sources into a centralized location.
Strong data modeling, data warehousing, and data integration skills.
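As a rough illustration of the pipeline work described above, the following is a minimal PySpark sketch that reads raw Parquet files from a landing zone, applies basic standardization and quality rules, and writes a curated, partitioned dataset. The paths, column names, and rules are hypothetical placeholders, not details from this role.

from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate_orders").getOrCreate()

# Read raw files from a hypothetical landing zone in the data lake.
raw = spark.read.parquet("s3a://example-lake/raw/orders/")

curated = (
    raw.dropDuplicates(["order_id"])                     # basic de-duplication
       .withColumn("order_date", F.to_date("order_ts"))  # standardize the date grain
       .filter(F.col("amount").isNotNull())              # drop rows failing a quality rule
)

# Write a partitioned, curated dataset for downstream analytics.
(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3a://example-lake/curated/orders/"))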
What You Need
10+ years of experience in the information technology industry, with a strong focus on data engineering and architecture, preferably as a data engineering lead
8+ years of data engineering or data architecture experience in successfully launching, planning, and executing advanced data projects
Experience in working on RFPs/proposals, presales activities, business development, and overseeing delivery of data projects is highly desired
A master's or bachelor's degree in computer science, data science, information systems, operations research, statistics, applied mathematics, economics, engineering, or physics
Demonstrated ability to manage data projects and diverse teams
Experience in creating data and analytics solutions
Experience in building data solutions in one or more domains: Industrial, Healthcare, Retail, Communication
Problem-solving, communication, and collaboration skills
Good knowledge of data visualization and reporting tools
Ability to normalize and standardize data as per key KPIs and metrics
Develop and implement data engineering projects, including a data lakehouse or big data platform
What Is Nice to Have
Knowledge of Azure Purview
Knowledge of Azure Data Fabric
Ability to define reference data architecture
SnowPro Advanced certification in Snowflake
Cloud-native data platform experience in the AWS or Microsoft stack
Knowledge of the latest data trends, including data fabric and data mesh
Robust knowledge of ETL, data transformation, and data standardization approaches
Key contributor to the growth of the CoE, influencing client revenues through data and analytics solutions
Lead the selection, deployment, and management of data tools, platforms, and infrastructure
Ability to technically guide a team of data engineers
Oversee the design, development, and deployment of data solutions
Define, differentiate, and strategize new data services/offerings and create reference architecture assets
Drive partnerships with vendors on collaboration, capability building, go-to-market strategies, etc.
Guide and inspire the organization about the business potential and opportunities around data
Network with domain experts
Collaborate with client teams to understand their business challenges and needs
Develop and propose data solutions tailored to client-specific requirements
Influence client revenues through innovative solutions and thought leadership
Lead client engagements from project initiation to deployment
Build and maintain strong relationships with key clients and stakeholders
Build reusable methodologies, pipelines, and models
What Makes You Eligible
Build and manage a high-performing team of data engineers and other specialists
Foster a culture of innovation and collaboration within the data team and across the organization
Demonstrate the ability to work in diverse, cross-functional teams in a dynamic business environment
Candidates should be confident, energetic self-starters with strong communication skills
Candidates should exhibit superior presentation skills and the ability to present compelling solutions which guide and inspire
Provide technical guidance and mentorship to the data team
Collaborate with other stakeholders across the company to align the vision and goals
Communicate and present the data capabilities and achievements to clients and partners
Stay updated on the latest trends and developments in the data domain
What We Offer
Access to employee discounts on world-class HARMAN/Samsung products (JBL, Harman Kardon, AKG, etc.)
Professional development opportunities through HARMAN University's business and leadership academies
An inclusive and diverse work environment that fosters and encourages professional and personal development
"Be Brilliant" employee recognition and rewards program
You Belong Here
HARMAN is committed to making every employee feel welcomed, valued, and empowered. No matter what role you play, we encourage you to share your ideas, voice your distinct perspective, and bring your whole self with you, all within a support-minded culture that celebrates what makes each of us unique. We also recognize that learning is a lifelong pursuit and want you to flourish. We proudly offer added opportunities for training, development, and continuing education, further empowering you to live the career you want.
About HARMAN: Where Innovation Unleashes Next-Level Technology
Ever since the 1920s, we've been amplifying the sense of sound. Today, that legacy endures, with integrated technology platforms that make the world smarter, safer, and more connected. Across automotive, lifestyle, and digital transformation solutions, we create innovative technologies that turn ordinary moments into extraordinary experiences. Our renowned automotive and lifestyle solutions can be found everywhere, from the music we play in our cars and homes to venues that feature today's most sought-after performers, while our digital transformation solutions serve humanity by addressing the world's ever-evolving needs and demands. Marketing our award-winning portfolio under 16 iconic brands, such as JBL, Mark Levinson, and Revel, we set ourselves apart by exceeding the highest engineering and design standards for our customers, our partners, and each other. If you're ready to innovate and do work that makes a lasting impact, join our talent community today!
Important Notice: Recruitment Scams
Please be aware that HARMAN recruiters will always communicate with you from an '@harman.com' email address. We will never ask for payments, banking, credit card, personal financial information, or access to your LinkedIn/email account during the screening, interview, or recruitment process. If you are asked for such information or receive communication from an email address not ending in '@harman.com' about a job with HARMAN, please cease communication immediately and report the incident to us through: harmancareers@harman.com.
HARMAN is proud to be an Equal Opportunity / Affirmative Action employer. All qualified applicants will receive consideration for employment without regard to race, religion, color, national origin, gender (including pregnancy, childbirth, or related medical conditions), sexual orientation, gender identity, gender expression, age, status as a protected veteran, status as an individual with a disability, or other applicable legally protected characteristics.
Posted 1 week ago
5.0 years
0 Lacs
Bengaluru
On-site
Job Description
Position Overview
We are seeking a highly skilled and experienced Data Architect with expertise in cloud-based solutions. The ideal candidate will design, implement, and optimize our data architecture to meet the organization's current and future needs. This role requires a strong background in data modeling, transformation, and governance, along with hands-on experience with modern cloud platforms and tools such as Snowflake, Spark, data lakes, and data warehouses. The successful candidate will also establish and enforce standards and guidelines across data platforms to ensure consistency, scalability, and best practices. Exceptional communication skills are essential to collaborate across cross-functional teams and stakeholders.
Key Responsibilities
Design and Implementation: Architect and implement scalable, secure, and high-performance cloud data platforms, integrating data lakes, data warehouses, and databases. Develop comprehensive data models to support analytics, reporting, and operational needs.
Data Integration and Transformation: Lead the design and execution of ETL/ELT pipelines using tools like Talend/Matillion, SQL, Big Data (Hadoop), AWS EMR, and Apache Spark to process and transform data efficiently (a minimal ELT sketch follows this posting). Integrate diverse data sources into cohesive and reusable datasets for business intelligence and machine learning purposes.
Standards and Guidelines: Establish, document, and enforce standards and guidelines for data architecture, data modeling, transformation, and governance across all data platforms. Ensure consistency and best practices in data storage, integration, and security throughout the organization.
Data Governance: Establish and enforce data governance standards, ensuring data quality, security, and compliance with regulatory requirements. Implement processes and tools to manage metadata, lineage, and data access controls.
Cloud Expertise: Utilize Snowflake for advanced analytics and data storage needs, ensuring optimized performance and cost efficiency. Leverage modern cloud platforms to manage data lakes and ensure seamless integration with other services.
Collaboration and Communication: Partner with business stakeholders, data engineers, and analysts to gather requirements and translate them into technical designs. Clearly communicate architectural decisions, trade-offs, and progress to both technical and non-technical audiences.
Continuous Improvement: Stay updated on emerging trends in cloud and data technologies, recommending innovations to enhance the organization's data capabilities. Optimize existing architectures to improve scalability, performance, and maintainability.
Qualifications
Technical Skills:
Strong expertise in data modeling (conceptual, logical, physical) and data architecture design principles.
Proficiency in Talend/Matillion, SQL, Big Data (Hadoop), AWS EMR, Apache Spark, Snowflake, and cloud-based data platforms.
Experience with data lakes, data warehouses, and relational and NoSQL databases.
Experience with relational (PostgreSQL/Oracle) and NoSQL (Couchbase/Cassandra) databases.
Solid understanding of data transformation techniques and ETL/ELT pipelines.
Proficiency in DevOps/DataOps/MLOps tools.
Standards and Governance:
Experience establishing and enforcing data platform standards, guidelines, and governance frameworks.
Proven ability to align data practices with business goals and regulatory compliance.
Communication: Exceptional written and verbal communication skills to interact effectively with technical teams and business stakeholders.
Experience: 5+ years of experience in data architecture, with a focus on cloud technologies. Proven track record of delivering scalable, cloud-based data solutions.
Education: Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
Preferred Qualifications
Certification in Snowflake, AWS data services, any RDBMS/NoSQL, AI/ML, or Data Governance.
Familiarity with machine learning workflows and data pipelines.
Experience working in Agile development environments.
Job Type: Full-time
Schedule: Monday to Friday, night shift, rotational shift
Work Location: In person
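As a hedged illustration of the ELT pattern referenced in the responsibilities above, the sketch below shows one way to push transformation work down into Snowflake with plain SQL. It assumes the snowflake-connector-python package; the account, credentials, and table names are hypothetical placeholders, not details from this posting.

import snowflake.connector

# Connect to a hypothetical Snowflake account used for transformations.
conn = snowflake.connector.connect(
    account="example_account",
    user="etl_user",
    password="********",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

# Push the transformation down into the warehouse (ELT) instead of moving data out.
transform_sql = """
    CREATE OR REPLACE TABLE ANALYTICS.MARTS.DAILY_SALES AS
    SELECT order_date,
           region,
           SUM(amount) AS total_amount,
           COUNT(*)    AS order_count
    FROM   ANALYTICS.STAGING.ORDERS
    GROUP BY order_date, region
"""

cur = conn.cursor()
try:
    cur.execute(transform_sql)   # runs entirely on Snowflake compute
finally:
    cur.close()
    conn.close()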
Posted 1 week ago
0 years
0 Lacs
Chennai
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
T24 BA_Data Migration - Senior Manager
Key words: T24 Data Migration, Requirements Gathering, Stakeholder Management, Gap Analysis, As-Is / To-Be Analysis, Data Mapping, ETL Processes, Data Quality & Validation, SQL, Data Modelling, Data Transformation, Source-to-Target Mapping, Data Reconciliation, Data Cleansing, Migration Tools (e.g., Informatica, Talend, Microsoft SSIS, SAP Data Services)
Job Summary: The Data Migration Business Analyst will lead the end-to-end analysis and execution of data migration initiatives across complex enterprise systems. This role demands deep expertise in data migration strategies, strong analytical capabilities, and a proven ability to work with cross-functional teams, including IT, business stakeholders, and data architects. You will be responsible for defining migration requirements, leading data mapping and reconciliation efforts, ensuring data integrity, and supporting transformation programs from legacy systems to modern platforms. As a senior leader, you will also play a critical role in stakeholder engagement, risk mitigation, and aligning data migration efforts with broader business objectives.
Mandatory requirements: Selected candidates should be willing to work out of the client location in Chennai five days a week.
Roles and Responsibilities:
T24 professionals with expertise and prior work experience on one or more T24 products: Retail, Corporate, Internet Banking, Mobile Banking, Wealth Management, Payment suites
Well versed in the technical aspects of the product and experienced in data migration activities
Good understanding of the T24 architecture, administration, configuration, and data structures
Technical: design and development experience in Infobasic, Core Java, EJB, and J2EE Enterprise
Working experience and/or knowledge of Informatica
In-depth experience in end-to-end migration tasks, from migration strategy and ETL processes through data reconciliation (a minimal reconciliation sketch follows this posting)
Experience in relational or hierarchical databases, including Oracle, DB2, Postgres, MySQL, and MSSQL
Working knowledge in one or more of the functional areas such as Core, Retail, Corporate, Securities Lending, Asset Management, Compliance, and AML, including product parametrisation and set-up
In-depth knowledge of best banking practices and T24 modules like Private Banking, Securities, and Accounting, combined with a good understanding of GL
Ability to handle crises and steer the team in the right direction
Excellent documentation skills in the migration stream: data migration strategy, finalising data mapping, data profiling/cleansing, ETL processes, data reconciliation
Excellent business communication skills
Other skills include effort estimation, pre-sales support, engagement assessments, project planning, and conducting training for clients/internal staff
Good leadership skills
Excellent client-facing skills
MBA/MCA/BE/B.Tech or equivalent, with sound industry experience of 9 to 12 years
Your client responsibilities:
Need to work as a team lead in one or more T24 projects
Interface and communicate with the onsite coordinators
Completion of assigned tasks on time and regular status reporting to the lead
Regular status reporting to the manager and onsite coordinators
Interface with the customer representatives as and when needed
Should be ready to travel to customer locations on a need basis
Your people responsibilities:
Building a quality culture
Manage performance management for direct reportees, as per the organization's policies
Foster teamwork and lead by example
Training and mentoring of project resources
Participating in the organization-wide people initiatives
Preferred skills:
Database administration
Performance tuning
Prior client-facing experience
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
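Since the role leans heavily on data reconciliation, here is a minimal post-migration reconciliation sketch in Python: it compares row counts, a balance total, and key coverage between a legacy extract and the migrated extract. The file names and column names are hypothetical placeholders, not artefacts of any specific T24 migration.

import pandas as pd

source = pd.read_csv("legacy_customers.csv")   # extract from the legacy system
target = pd.read_csv("t24_customers.csv")      # extract from the migrated platform

checks = {
    "row_count_match": len(source) == len(target),
    "balance_sum_match": round(source["balance"].sum(), 2) == round(target["balance"].sum(), 2),
    "keys_missing_in_target": sorted(set(source["customer_id"]) - set(target["customer_id"])),
}

# Print a simple reconciliation report for migration sign-off.
for name, result in checks.items():
    print(f"{name}: {result}")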
Posted 1 week ago
0 years
0 - 0 Lacs
India
On-site
Data Collection: Gathering data from various sources, including databases, surveys, and other relevant platforms.
Data Cleaning and Preparation: Ensuring data accuracy, consistency, and completeness by identifying and addressing errors, missing values, and inconsistencies.
Data Analysis: Applying statistical methods, data mining techniques, and other analytical tools to identify trends, patterns, and relationships within data sets (a minimal sketch follows this posting).
Data Visualization: Creating charts, graphs, and other visual representations to communicate findings and insights to stakeholders.
Reporting and Presentation: Preparing reports and presentations to communicate findings and recommendations to managers and other stakeholders.
Problem Solving: Using data analysis to identify and solve business problems and improve decision-making.
Data Security and Compliance: Ensuring data is handled securely and in compliance with relevant regulations.
Database Management: Managing and maintaining databases, ensuring data integrity and accessibility.
Process Improvement: Identifying and implementing improvements to data collection, analysis, and reporting processes.
Tools and Techniques: Data analysts utilize various tools and techniques, including:
Statistical software: SAS, SPSS, R
Data visualization tools: Power BI, Tableau
Database management systems: SQL, MySQL, PostgreSQL
Data mining techniques: clustering, regression, classification
Data warehousing and ETL tools: Informatica, Talend, Pentaho
Programming languages: Python, Java
In essence, data analysts are skilled professionals who can turn data into valuable insights that help organizations make better decisions and achieve their goals.
Job Types: Full-time, Permanent, Fresher
Pay: ₹18,000.00 - ₹38,000.00 per month
Benefits: Provident Fund
Schedule: Day shift
Work Location: In person
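To ground the cleaning-and-analysis cycle described above, the following is a minimal pandas sketch: load a raw extract, handle duplicates and missing values, and summarize a monthly trend for reporting. The file name and column names are hypothetical placeholders.

import pandas as pd

df = pd.read_csv("sales_raw.csv", parse_dates=["order_date"])

# Data cleaning: drop exact duplicates and fill missing discount values.
df = df.drop_duplicates()
df["discount"] = df["discount"].fillna(0)

# Data analysis: monthly revenue trend by region.
monthly = (
    df.assign(month=df["order_date"].dt.to_period("M"))
      .groupby(["month", "region"], as_index=False)["revenue"].sum()
)

# The summarized frame can feed a Power BI or Tableau visual, or a written report.
print(monthly.head())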
Posted 1 week ago
Talend is a popular data integration and management tool used by many organizations in India. As a result, there is a growing demand for professionals with expertise in Talend across various industries. Job seekers looking to explore opportunities in this field can expect a promising job market in India.
Major Indian IT hubs have a high concentration of companies and organizations that frequently hire for Talend roles.
The average salary range for Talend professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-20 lakhs per annum
A typical career progression in the field of Talend may follow this path:
- Junior Developer
- Developer
- Senior Developer
- Tech Lead
- Architect
As professionals gain experience and expertise, they can move up the ladder to more senior and leadership roles.
In addition to expertise in Talend, professionals in this field are often expected to have knowledge or experience in the following areas:
- Data Warehousing
- ETL (Extract, Transform, Load) processes
- SQL
- Big Data technologies (e.g., Hadoop, Spark)
As you explore opportunities in the Talend job market in India, remember to showcase your expertise, skills, and knowledge during the interview process. With preparation and confidence, you can excel in securing a rewarding career in this field. Good luck!