
310 Data Lake Jobs - Page 5

JobPe aggregates listings for easy access, but applications are submitted directly on the original job portal.

5.0 - 8.0 years

7 - 10 Lacs

Gurugram

Work from Office


Job Responsibilities
Technology Leadership: Should be able to independently design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
Technical Design and Development: Expertise in any of the following skills. Any of the ETL tools (Informatica, IICS, Matillion, DataStage) plus hosting technologies like the AWS stack (Redshift, EC2) is mandatory. Any of the BI tools among Tableau, Qlik, Power BI, and MSTR. Informatica MDM, Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting. Project management is a must-have: should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
Task Management: Should be able to onboard the team onto the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
Handle Client Relationship: Manage client communication and client expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan.
Education: Bachelor Equivalent - Other; PG Diploma in Management

Posted 2 weeks ago

Apply

7.0 - 12.0 years

15 - 30 Lacs

Pune, Chennai

Work from Office


Data Modeler - Primary Skills: Data modeling (conceptual, logical, physical), relational/dimensional/data vault modeling, Erwin/IBM InfoSphere, SQL (Oracle, SQL Server, PostgreSQL, DB2), banking domain data knowledge (Retail, Corporate, Risk, Compliance), data governance (BCBS 239, AML, GDPR), data warehouse/lake design, Azure cloud. Secondary Skills: MDM, metadata management, real-time modeling (payments/fraud), big data (Hadoop, Spark), streaming platforms (Confluent), M&A data integration, data cataloguing, documentation, regulatory trend awareness. Soft Skills: Attention to detail, documentation, time management, and teamwork.

Posted 2 weeks ago

Apply

0.0 - 4.0 years

5 - 10 Lacs

Mumbai

Work from Office


Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Are you ready to dive headfirst into the captivating world of data engineering at Kyndryl? As a Data Engineer, you'll be the visionary behind our data platforms, crafting them into powerful tools for decision-makers. Your role? Ensuring a treasure trove of pristine, harmonized data is at everyone's fingertips. As a Data Engineer at Kyndryl, you'll be at the forefront of the data revolution, crafting and shaping data platforms that power our organization's success. This role is not just about code and databases; it's about transforming raw data into actionable insights that drive strategic decisions and innovation. In this role, you'll be engineering the backbone of our data infrastructure, ensuring the availability of pristine, refined data sets. With a well-defined methodology, critical thinking, and a rich blend of domain expertise, consulting finesse, and software engineering prowess, you'll be the mastermind of data transformation. Your journey begins by understanding project objectives and requirements from a business perspective, converting this knowledge into a data puzzle. You'll be delving into the depths of information to uncover quality issues and initial insights, setting the stage for data excellence. But it doesn't stop there. You'll be the architect of data pipelines, using your expertise to cleanse, normalize, and transform raw data into the final dataset – a true data alchemist. Armed with a keen eye for detail, you'll scrutinize data solutions, ensuring they align with business and technical requirements. Your work isn't just a means to an end; it's the foundation upon which data-driven decisions are made – and your lifecycle management expertise will ensure our data remains fresh and impactful. So, if you're a technical enthusiast with a passion for data, we invite you to join us in the exhilarating world of data engineering at Kyndryl. Let's transform data into a compelling story of innovation and growth.

Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.
Required Skills and Experience
• Expertise in data mining, data storage, and Extract-Transform-Load (ETL) processes
• Experience in data pipeline development and tooling, e.g., Glue, Databricks, Synapse, or Dataproc
• Experience with both relational and NoSQL databases: PostgreSQL, DB2, MongoDB
• Excellent problem-solving, analytical, and critical thinking skills
• Ability to manage multiple projects simultaneously while maintaining a high level of attention to detail
• Communication skills: must be able to communicate with both technical and non-technical colleagues, to derive technical requirements from business needs and problems

Preferred Skills and Experience
• Experience working as a Data Engineer and/or in cloud modernization
• Experience in data modelling, to create a conceptual model of how data is connected and how it will be used in business processes
• Professional certification, e.g., Open Certified Technical Specialist with Data Engineering Specialization
• Cloud platform certification, e.g., AWS Certified Data Analytics - Specialty, Elastic Certified Engineer, Google Cloud Professional Data Engineer, or Microsoft Certified: Azure Data Engineer Associate
• Experience working with Kafka, Elasticsearch, Kibana, and maintaining data lakes
• Managing interfaces and monitoring for production deployment, including log shipping tools
• Experience in updates, upgrades, patches, VA closure, and support with industry-best tools
• Degree in a scientific discipline, such as Computer Science, Software Engineering, or Information Technology

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 2 weeks ago

Apply

2.0 - 5.0 years

14 - 17 Lacs

Mumbai

Work from Office


Experience with Scala (object-oriented and functional programming). Strong SQL background. Experience in Spark SQL, Hive, and data engineering. SQL experience with data pipelines and data lakes. Strong background in distributed computing. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: SQL experience with data pipelines and data lakes; strong background in distributed computing; experience with Scala (object-oriented and functional); strong SQL background. Preferred technical and professional experience: core Scala development experience.

Posted 2 weeks ago

Apply

5.0 - 7.0 years

5 - 9 Lacs

Kochi

Work from Office


Location: Kochi, Coimbatore, Trivandrum
Must have skills: Databricks, including Spark-based ETL and Delta Lake
Good to have skills: PySpark

Job Summary
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.

Roles and Responsibilities
Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools. Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse Architecture). Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases. Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery. Optimize data pipelines for performance and cost efficiency. Implement and enforce best practices for data governance, access control, security, and compliance in the cloud. Monitor and troubleshoot data pipelines to ensure reliability and accuracy. Lead and mentor junior engineers, fostering a culture of continuous learning and innovation. Excellent communication skills. Ability to work independently and alongside clients based out of Western Europe.

Professional and Technical Skills
3.5-5 years of experience in Data Engineering roles with a focus on cloud platforms. Proficiency in Databricks, including Spark-based ETL, Delta Lake, and SQL. Strong experience with one or more cloud platforms (AWS preferred). Hands-on experience with Delta Lake, Unity Catalog, and Lakehouse architecture concepts. Strong programming skills in Python and SQL; experience with PySpark a plus. Solid understanding of data modeling concepts and practices (e.g., star schema, dimensional modeling). Knowledge of CI/CD practices and version control systems (e.g., Git). Familiarity with data governance and security practices, including GDPR and CCPA compliance.

Additional Information
Experience with Airflow or similar workflow orchestration tools. Exposure to machine learning workflows and MLOps. Certification in Databricks or AWS. Familiarity with data visualization tools such as Power BI.

Qualification
Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

4 - 8 Lacs

Kolkata

Work from Office


Project Role: Security Advisor
Project Role Description: Lead the effort and teams to enable development and implementation of proprietary and innovative security solutions. Assess, manage, and ensure compliance with risk-reducing behaviors and processes.
Must have skills: Identity Access Management (IAM), Control-M Administration, CyberArk Privileged Access Management, Google Cloud Platform Administration, Unix, Python, Shell Scripting, Data Lake, MySQL, Java, JavaScript
Good to have skills: NA
Minimum 7.5 year(s) of experience is required
Educational Qualification: A Bachelor's or Master's Degree in Technology or similar streams as full-time education is required

Summary: As a Security Advisor, you will lead the effort and teams to enable the development and implementation of proprietary and innovative security solutions. Your typical day will involve assessing and managing security risks, ensuring compliance with established processes, and collaborating with various teams to enhance security measures. You will play a crucial role in guiding the organization towards adopting best practices in security management, while also fostering a culture of risk awareness and proactive security measures across the organization.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate training sessions to enhance team knowledge and skills in security practices.
- Monitor and evaluate the effectiveness of security solutions and recommend improvements.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Identity Access Management (IAM), CyberArk Privileged Access Management, Google Cloud Platform Administration, Control-M Administration.
- Good To Have Skills: Experience with security compliance frameworks such as ISO 27001 or NIST.
- Strong understanding of risk assessment methodologies and security best practices.
- Experience in implementing security policies and procedures.
- Familiarity with incident response and management processes.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Identity Access Management (IAM).
- This position is based at our Kolkata office.
- A Bachelor's or Master's Degree in Technology or similar streams as full-time education is required.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

3 - 7 Lacs

Chennai

Work from Office


Project Role: Business and Integration Practitioner
Project Role Description: Assists in documenting the integration strategy endpoints and data flows. Is familiar with the entire project life-cycle, including requirements analysis, coding, testing, deployment, and operations, to ensure successful integration. Under the guidance of the Architect, ensures the integration strategy meets business goals.
Must have skills: Microsoft Dynamics 365 Supply Chain Management
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Business and Integration Practitioner, you will assist in documenting the integration strategy endpoints and data flows. Your typical day will involve collaborating with various teams to ensure that the integration strategy aligns with business objectives. You will engage in the entire project life-cycle, from requirements analysis to deployment, ensuring that all aspects of the integration are executed smoothly and effectively. Your role will also require you to communicate with stakeholders to gather insights and feedback, which will help refine the integration processes and enhance overall project success.

Roles & Responsibilities:
- Expected to be an SME; collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate workshops and meetings to gather requirements and feedback from stakeholders.
- Monitor and evaluate the integration processes to identify areas for improvement.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in Microsoft Dynamics 365 Supply Chain Management.
- Strong understanding of integration strategies and data flow documentation.
- Experience with project life-cycle management, including requirements analysis and deployment.
- Ability to collaborate effectively with cross-functional teams.
- Familiarity with testing methodologies and operational processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in Microsoft Dynamics 365 Supply Chain Management.
- This position is based at our Chennai office.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

15.0 - 20.0 years

3 - 7 Lacs

Navi Mumbai

Work from Office


Project Role: Business and Integration Practitioner
Project Role Description: Assists in documenting the integration strategy endpoints and data flows. Is familiar with the entire project life-cycle, including requirements analysis, coding, testing, deployment, and operations, to ensure successful integration. Under the guidance of the Architect, ensures the integration strategy meets business goals.
Must have skills: ALIP Product Configuration
Good to have skills: NA
Minimum 5 year(s) of experience is required
Educational Qualification: 15 years of full-time education

Summary: As a Business and Integration Practitioner, you will play a crucial role in assisting with the documentation of integration strategies, endpoints, and data flows. Your typical day will involve collaborating with various teams to ensure that the integration processes align with business objectives. You will engage in the entire project life-cycle, from requirements analysis to deployment, ensuring that all aspects of integration are effectively managed and executed. Your contributions will be vital in facilitating seamless integration and supporting the overall success of the project.

Roles & Responsibilities:
- Expected to be an SME; collaborate and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute on key decisions.
- Provide solutions to problems for their immediate team and across multiple teams.
- Facilitate communication between stakeholders to ensure alignment on integration strategies.
- Monitor and evaluate integration processes to identify areas for improvement.

Professional & Technical Skills:
- Must To Have Skills: Proficiency in ALIP Product Configuration.
- Strong understanding of integration strategies and data flow documentation.
- Experience with project life-cycle management, including requirements analysis and deployment.
- Ability to collaborate effectively with cross-functional teams.
- Familiarity with testing methodologies and operational processes.

Additional Information:
- The candidate should have a minimum of 5 years of experience in ALIP Product Configuration.
- This position is based in Mumbai.
- 15 years of full-time education is required.

Posted 2 weeks ago

Apply

2.0 - 6.0 years

4 - 8 Lacs

Ahmedabad

Work from Office


ANS Group is looking for a Senior Data Engineer. The job responsibilities of a Senior Data Engineer may include:
1. Designing and implementing scalable and reliable data pipelines, data models, and data infrastructure for processing large and complex datasets.
2. Developing and maintaining databases, data warehouses, and data lakes that store and manage the organization's data.
3. Developing and implementing data integration and ETL (Extract, Transform, Load) processes to ensure that data flows smoothly and accurately between different systems and data sources.
4. Ensuring data quality, consistency, and accuracy through data profiling, cleansing, and validation.
5. Building and maintaining data processing and analytics systems that support business intelligence, machine learning, and other data-driven applications.
6. Optimizing the performance and scalability of data systems and infrastructure to ensure that they can handle the organization's growing data needs.
To be a successful Senior Data Engineer, one must have in-depth knowledge of database architecture, data modeling, data integration, and ETL processes. They should also be proficient in programming languages such as Python, Java, or SQL, have experience working with big data technologies like Hadoop, Spark, and NoSQL databases, and have strong communication and leadership skills.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

22 - 37 Lacs

Bengaluru

Work from Office


Who We Are
At Kyndryl, we design, build, manage and modernize the mission-critical technology systems that the world depends on every day. So why work at Kyndryl? We are always moving forward – always pushing ourselves to go further in our efforts to build a more equitable, inclusive world for our employees, our customers and our communities.

The Role
Join Kyndryl as a Data Architect, where you will unlock the power of data to drive strategic decisions and shape the future of our business. As a key member of our team, you will harness your expertise in basic statistics, business fundamentals, and communication to uncover valuable insights and transform raw data into rigorous visualizations and compelling stories. In this role, you will have the opportunity to work closely with our customers as part of a top-notch team. You will dive deep into vast IT datasets, unraveling the mysteries hidden within, and discovering trends and patterns that will revolutionize our customers' understanding of their own landscapes. Armed with your advanced analytical skills, you will draw compelling conclusions and develop data-driven insights that will directly impact their decision-making processes.

Your Role and Responsibilities
Data Architecture Design: Design scalable, secure, and high-performance data architectures, including data warehouses, data lakes, and BI solutions.
Data Modeling: Develop and maintain complex data models (ER, star, and snowflake schemas) to support BI and analytics requirements.
BI Strategy and Implementation: Lead the design and implementation of BI solutions using platforms like Power BI, Tableau, Qlik, and Looker.
ETL/ELT Management: Architect efficient ETL/ELT pipelines for data transformation and integration across multiple data sources.
Data Governance: Implement data quality, data lineage, and metadata management frameworks to ensure data reliability and compliance.
Performance Optimization: Optimize data storage and retrieval processes for speed, scalability, and efficiency.
Stakeholder Collaboration: Work closely with business and technical teams to define data requirements and deliver actionable insights.
Cloud and Big Data: Utilize cloud-native tools like Azure Synapse, AWS Redshift, GCP BigQuery, and Databricks for large-scale data processing.
Mentorship: Guide junior data engineers and BI developers on best practices and advanced techniques.

Your unique ability to communicate and empathize with stakeholders will be invaluable. By understanding the business objectives and success criteria of each project, you will align your data analysis efforts seamlessly with our overarching goals. With your mastery of business valuation, decision-making, project scoping, and storytelling, you will transform data into meaningful narratives that drive real-world impact. At Kyndryl, we believe that data holds immense potential, and we are committed to helping you unlock that potential. You will have access to vast repositories of data, empowering you to delve deep to determine root causes of defects and variation. By gaining a comprehensive understanding of the data and its specific purpose, you will be at the forefront of driving innovation and making a difference. If you are ready to unleash your analytical ability, collaborate with industry experts, and shape the future of data-driven decision making, then join us as a Data Architect at Kyndryl. Together, we will harness the power of data to redefine what is possible and create a future filled with limitless possibilities.
Your Future at Kyndryl
Every position at Kyndryl offers a way forward to grow your career. We have opportunities that you won't find anywhere else, including hands-on experience, learning opportunities, and the chance to certify in all four major platforms. Whether you want to broaden your knowledge base or narrow your scope and specialize in a specific sector, you can find your opportunity here.

Who You Are
You're good at what you do and possess the required experience to prove it. However, equally as important – you have a growth mindset; keen to drive your own personal and professional development. You are customer-focused – someone who prioritizes customer success in their work. And finally, you're open and borderless – naturally inclusive in how you work with others.

Required Skills and Experience
Education: Bachelor's or Master's in Computer Science, Data Science, or a related field.
Experience: 8+ years in data architecture, BI, and analytics roles.
BI Tools: Power BI, Tableau, Qlik, Looker, SAP Analytics Cloud.
Data Modeling: ER, dimensional, star, and snowflake schemas.
Cloud Platforms: Azure, AWS, GCP, Snowflake.
Databases: SQL Server, Oracle, MySQL, NoSQL (MongoDB, DynamoDB).
ETL Tools: Informatica, Talend, SSIS, Apache NiFi.
Scripting: Python, R, SQL, DAX, MDX.
Soft Skills: Strong communication, problem-solving, and leadership abilities. Knowledge of deployment patterns. Strong documentation, troubleshooting, and data profiling skills. Excellent analytical, conceptual, and problem-solving abilities. Ability to manage multiple priorities and swiftly adapt to changing demands.

Preferred Skills and Experience
Microsoft Certified: Azure Data Engineer Associate
AWS Certified Data Analytics - Specialty
Google Professional Data Engineer
Tableau Desktop Certified Professional
Power BI Data Analyst Associate

Being You
Diversity is a whole lot more than what we look like or where we come from, it's how we think and who we are. We welcome people of all cultures, backgrounds, and experiences. But we're not doing it single-handedly: Our Kyndryl Inclusion Networks are only one of many ways we create a workplace where all Kyndryls can find and provide support and advice. This dedication to welcoming everyone into our company means that Kyndryl gives you – and everyone next to you – the ability to bring your whole self to work, individually and collectively, and support the activation of our equitable culture. That's the Kyndryl Way.

What You Can Expect
With state-of-the-art resources and Fortune 100 clients, every day is an opportunity to innovate, build new capabilities, new relationships, new processes, and new value. Kyndryl cares about your well-being and prides itself on offering benefits that give you choice, reflect the diversity of our employees and support you and your family through the moments that matter – wherever you are in your life journey. Our employee learning programs give you access to the best learning in the industry to receive certifications, including Microsoft, Google, Amazon, Skillsoft, and many more. Through our company-wide volunteering and giving platform, you can donate, start fundraisers, volunteer, and search over 2 million non-profit organizations. At Kyndryl, we invest heavily in you; we want you to succeed so that together, we will all succeed.

Get Referred!
If you know someone that works at Kyndryl, when asked 'How Did You Hear About Us' during the application process, select 'Employee Referral' and enter your contact's Kyndryl email address.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Hyderabad

Work from Office


We are hiring a Data Engineer for a US-based IT company in Hyderabad. Candidates with a minimum of 5 years of experience in Data Engineering can apply. This job is for a 1-year contract only.

Job Title: Data Engineer
Location: Hyderabad
CTC: Up to 20 LPA
Experience: 5+ Years

Job Overview:
We are looking for a seasoned Senior Data Engineer with deep hands-on experience in Talend and IBM DataStage to join our growing enterprise data team. This role will focus on designing and optimizing complex data integration solutions that support enterprise-wide analytics, reporting, and compliance initiatives. In this senior-level position, you will collaborate with data architects, analysts, and key stakeholders to facilitate large-scale data movement, enhance data quality, and uphold governance and security protocols.

Key Responsibilities:
Develop, maintain, and enhance scalable ETL pipelines using Talend and IBM DataStage. Partner with data architects and analysts to deliver efficient and reliable data integration solutions. Review and optimize existing ETL workflows for performance, scalability, and reliability. Consolidate data from multiple sources, both structured and unstructured, into data lakes and enterprise platforms. Implement rigorous data validation and quality assurance procedures to ensure data accuracy and integrity. Adhere to best practices for ETL development, including source control and automated deployment. Maintain clear and comprehensive documentation of data processes, mappings, and transformation rules. Support enterprise initiatives around data migration, modernization, and cloud transformation. Mentor junior engineers and participate in code reviews and team learning sessions.

Required Qualifications:
Minimum 5 years of experience in data engineering or ETL development. Proficient with Talend (Open Studio and/or Talend Cloud) and IBM DataStage. Strong skills in SQL, data profiling, and performance tuning. Experience handling large datasets and complex data workflows. Solid understanding of data warehousing, data modeling, and data lake architecture. Familiarity with version control systems (e.g., Git) and CI/CD pipelines. Strong analytical and troubleshooting skills. Effective verbal and written communication, with strong documentation habits.

Preferred Qualifications:
Prior experience in banking or financial services. Exposure to cloud platforms such as AWS, Azure, or Google Cloud. Knowledge of data governance tools (e.g., Collibra, Alation). Awareness of data privacy regulations (e.g., GDPR, CCPA). Experience working in Agile/Scrum environments.

For further assistance, contact/WhatsApp: 9354909518 or write to priya@gist.org.in

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 12 Lacs

Pune, Chennai, Bengaluru

Hybrid


Hello Candidates,

We are hiring!

Job Position - Data Streaming Engineer
Experience - 5+ years
Location - Mumbai, Pune, Chennai, Bangalore
Work mode - Hybrid (3 days WFO)

JOB DESCRIPTION
Data Streaming @ offshore:
• Flink, Python.
• Data lake systems (OLAP systems).
• SQL (should be able to write complex SQL queries).
• Orchestration (Apache Airflow is preferred).
• Hadoop (Spark and Hive: optimization of Spark and Hive apps).
• Snowflake (good to have).
• Data quality (good to have).
• File storage (S3 is good to have).

NOTE - Candidates can share their resume on - shrutia.talentsketchers@gmail.com

Posted 2 weeks ago

Apply

11.0 - 17.0 years

20 - 35 Lacs

Indore, Hyderabad

Work from Office


Greetings of the day!

We have a job opening for Microsoft Fabric + ADF with one of our clients. If you are interested in this position, please share your updated resume to this email ID: shaswati.m@bct-consulting.com

Primary Skill: Microsoft Fabric
Secondary Skill: Azure Data Factory (ADF)

12+ years of experience in Microsoft Azure Data Engineering for analytical projects. Proven expertise in designing, developing, and deploying high-volume, end-to-end ETL pipelines for complex models, including batch and real-time data integration frameworks, using Azure, Microsoft Fabric, and Databricks. Extensive hands-on experience with Azure Data Factory, Databricks (with Unity Catalog), Azure Functions, Synapse Analytics, Data Lake, Delta Lake, and Azure SQL Database for managing and processing large-scale data integrations. Experience in Databricks cluster optimization and workflow management to ensure cost-effective and high-performance processing. Sound knowledge of data modelling, data governance, data quality management, and data modernization processes. Develop architecture blueprints and technical design documentation for Azure-based data solutions. Provide technical leadership and guidance on cloud architecture best practices, ensuring scalable and secure solutions. Keep abreast of emerging Azure technologies and recommend enhancements to existing systems. Lead proofs of concept (PoCs) and adopt agile delivery methodologies for solution development and delivery.

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 - 0 Lacs

Hyderabad

Work from Office


Experience Required: 3+ years
Technical knowledge: AWS, Python, SQL, S3, EC2, Glue, Athena, Lambda, DynamoDB, Redshift, Step Functions, CloudFormation, CI/CD pipelines, GitHub, EMR, RDS, AWS Lake Formation, GitLab, Jenkins, and AWS CodePipeline.

Role Summary:
As a Senior Data Engineer with over 3 years of expertise in Python, PySpark, and SQL, you will design, develop, and optimize complex data pipelines, support data modeling, and contribute to the architecture that supports big data processing, analytics, and cutting-edge cloud solutions that drive business growth. You will lead the design and implementation of scalable, high-performance data solutions on AWS and mentor junior team members. This role demands a deep understanding of AWS services, big data tools, and complex architectures to support large-scale data processing and advanced analytics.

Key Responsibilities:
Design and develop robust, scalable data pipelines using AWS services, Python, PySpark, and SQL that integrate seamlessly with the broader data and product ecosystem. Lead the migration of legacy data warehouses and data marts to AWS cloud-based data lake and data warehouse solutions. Optimize data processing and storage for performance and cost. Implement data security and compliance best practices, in collaboration with the IT security team. Build flexible and scalable systems to handle the growing demands of real-time analytics and big data processing. Work closely with data scientists and analysts to support their data needs and assist in building complex queries and data analysis pipelines. Collaborate with cross-functional teams to understand their data needs and translate them into technical requirements. Continuously evaluate new technologies and AWS services to enhance data capabilities and performance. Create and maintain comprehensive documentation of data pipelines, architectures, and workflows. Participate in code reviews and ensure that all solutions are aligned to pre-defined architectural specifications. Present findings to executive leadership and recommend data-driven strategies for business growth. Communicate effectively with different levels of management to gather use cases/requirements and provide designs that cater to those stakeholders. Handle clients in multiple industries at the same time, balancing their unique needs. Provide mentoring and guidance to junior data engineers and team members.

Requirements:
3+ years of experience in a data engineering role with a strong focus on AWS, Python, PySpark, Hive, and SQL. Proven experience in designing and delivering large-scale data warehousing and data processing solutions. Lead the design and implementation of complex, scalable data pipelines using AWS services such as S3, EC2, EMR, RDS, Redshift, Glue, Lambda, Athena, and AWS Lake Formation. Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field. Deep knowledge of big data technologies and ETL tools, such as Apache Spark, PySpark, Hadoop, Kafka, and Spark Streaming. Implement data architecture patterns, including event-driven pipelines, Lambda architectures, and data lakes. Incorporate modern tools like Databricks, Airflow, and Terraform for orchestration and infrastructure as code. Implement CI/CD using GitLab, Jenkins, and AWS CodePipeline. Ensure data security, governance, and compliance by leveraging tools such as IAM, KMS, and AWS CloudTrail. Mentor junior engineers, fostering a culture of continuous learning and improvement.
Excellent problem-solving and analytical skills, with a strategic mindset. Strong communication and leadership skills, with the ability to influence stakeholders at all levels. Ability to work independently as well as part of a team in a fast-paced environment. Advanced data visualization skills and the ability to present complex data in a clear and concise manner. Excellent communication skills, both written and verbal, to collaborate effectively across teams and levels.

Preferred Skills:
Experience with Databricks, Snowflake, and machine learning pipelines. Exposure to real-time data streaming technologies and architectures. Familiarity with containerization and serverless computing (Docker, Kubernetes, AWS Lambda).

Posted 2 weeks ago

Apply

4.0 - 8.0 years

12 - 18 Lacs

Hyderabad, Chennai, Coimbatore

Hybrid


We are seeking a skilled and motivated Data Engineer to join our dynamic team. The ideal candidate will have experience in designing, developing, and maintaining scalable data pipelines and architectures using Hadoop, PySpark, ETL processes, and cloud technologies.

Responsibilities:
Design, develop, and maintain data pipelines for processing large-scale datasets. Build efficient ETL workflows to transform and integrate data from multiple sources. Develop and optimize Hadoop and PySpark applications for data processing. Ensure data quality, governance, and security standards are met across systems. Implement and manage cloud-based data solutions (AWS, Azure, or GCP). Collaborate with data scientists and analysts to support business intelligence initiatives. Troubleshoot performance issues and optimize query executions in big data environments. Stay updated with industry trends and advancements in big data and cloud technologies.

Required Skills:
Strong programming skills in Python, Scala, or Java. Hands-on experience with the Hadoop ecosystem (HDFS, Hive, Spark, etc.). Expertise in PySpark for distributed data processing. Proficiency in ETL tools and workflows (SSIS, Apache NiFi, or custom pipelines). Experience with cloud platforms (AWS, Azure, GCP) and their data-related services. Knowledge of SQL and NoSQL databases. Familiarity with data warehousing concepts and data modeling techniques. Strong analytical and problem-solving skills.

Interested candidates can reach us at +91 7305206696 / saranyadevib@talentien.com

Posted 2 weeks ago

Apply

5.0 - 10.0 years

10 - 15 Lacs

Chennai, Bengaluru

Work from Office


Job requisition ID: JR1027452

Overall Responsibilities:
Data Pipeline Development: Design, develop, and maintain highly scalable and optimized ETL pipelines using PySpark on the Cloudera Data Platform, ensuring data integrity and accuracy.
Data Ingestion: Implement and manage data ingestion processes from a variety of sources (e.g., relational databases, APIs, file systems) to the data lake or data warehouse on CDP.
Data Transformation and Processing: Use PySpark to process, cleanse, and transform large datasets into meaningful formats that support analytical needs and business requirements.
Performance Optimization: Conduct performance tuning of PySpark code and Cloudera components, optimizing resource utilization and reducing the runtime of ETL processes.
Data Quality and Validation: Implement data quality checks, monitoring, and validation routines to ensure data accuracy and reliability throughout the pipeline.
Automation and Orchestration: Automate data workflows using tools like Apache Oozie, Airflow, or similar orchestration tools within the Cloudera ecosystem.
Monitoring and Maintenance: Monitor pipeline performance, troubleshoot issues, and perform routine maintenance on the Cloudera Data Platform and associated data processes.
Collaboration: Work closely with other data engineers, analysts, product managers, and other stakeholders to understand data requirements and support various data-driven initiatives.
Documentation: Maintain thorough documentation of data engineering processes, code, and pipeline configurations.

Category-wise Technical Skills:
PySpark: Advanced proficiency in PySpark, including working with RDDs, DataFrames, and optimization techniques.
Cloudera Data Platform: Strong experience with Cloudera Data Platform (CDP) components, including Cloudera Manager, Hive, Impala, HDFS, and HBase.
Data Warehousing: Knowledge of data warehousing concepts, ETL best practices, and experience with SQL-based tools (e.g., Hive, Impala).
Big Data Technologies: Familiarity with Hadoop, Kafka, and other distributed computing tools.
Orchestration and Scheduling: Experience with Apache Oozie, Airflow, or similar orchestration frameworks.
Scripting and Automation: Strong scripting skills in Linux.

Experience:
5-12 years of experience as a Data Engineer, with a strong focus on PySpark and the Cloudera Data Platform. Proven track record of implementing data engineering best practices. Experience in data ingestion, transformation, and optimization on the Cloudera Data Platform.

Day-to-Day Activities:
Design, develop, and maintain ETL pipelines using PySpark on CDP. Implement and manage data ingestion processes from various sources. Process, cleanse, and transform large datasets using PySpark. Conduct performance tuning and optimization of ETL processes. Implement data quality checks and validation routines. Automate data workflows using orchestration tools. Monitor pipeline performance and troubleshoot issues.
Collaborate with team members to understand data requirements. Maintain documentation of data engineering processes and configurations.

Qualifications:
Bachelor's or Master's degree in Computer Science, Data Engineering, Information Systems, or a related field. Relevant certifications in PySpark and Cloudera technologies are a plus.

Soft Skills:
Strong analytical and problem-solving skills. Excellent verbal and written communication abilities. Ability to work independently and collaboratively in a team environment. Attention to detail and commitment to data quality.

Posted 2 weeks ago

Apply

5.0 - 10.0 years

11 - 15 Lacs

Gurugram

Work from Office


Position Summary
This is a requisition for the Employee Referrals Campaign, and the JD is generic. We are looking for associates with 5+ years of experience in delivering solutions around Data Engineering, big data analytics and data lakes, MDM, BI, and data visualization; experienced in integrating and standardizing structured and unstructured data to enable faster insights using cloud technology; and enabling data-driven insights across the enterprise.

Job Responsibilities
He/she should be able to design, implement, and deliver complex Data Warehousing/Data Lake, Cloud Data Management, and Data Integration project assignments.
Technical Design and Development - Expertise in any of the following skills. Any ETL tools (Informatica, Talend, Matillion, DataStage) and hosting technologies like the AWS stack (Redshift, EC2) are mandatory. Any BI tools among Tableau, Qlik, Power BI, and MSTR. Informatica MDM, Customer Data Management. Expert knowledge of SQL, with the capability to performance-tune complex SQL queries in traditional and distributed RDBMS systems, is a must. Experience across Python, PySpark, and Unix/Linux shell scripting.
Project Management is a must-have. Should be able to create simple to complex project plans in Microsoft Project and think in advance about potential risks and mitigation plans.
Task Management - Should be able to onboard the team on the project plan and delegate tasks to accomplish milestones as per plan. Should be comfortable discussing and prioritizing work items with team members in an onshore-offshore model.
Handle Client Relationship - Manage client communication and client expectations independently or with the support of the reporting manager. Should be able to deliver results back to the client as per plan. Should have excellent communication skills.

Education
Bachelor of Technology; Master's Equivalent - Engineering

Work Experience
Overall, 5-7 years of relevant experience in Data Warehousing and Data Management projects, with some experience in the Pharma domain. We are hiring for the following roles across Data Management tech stacks:
- ETL tools among Informatica, IICS/Snowflake, Python, Matillion, and other cloud ETL.
- BI tools among Power BI and Tableau.
- MDM - Informatica/Reltio, Customer Data Management.
- Azure cloud developer using Data Factory and Databricks.
- Data Modeler - modelling of data: understanding source data, creating data models for landing and integration.
- Python/PySpark - Spark/PySpark design, development, and deployment.

Posted 2 weeks ago

Apply

8.0 - 10.0 years

6 - 10 Lacs

Hyderabad

Work from Office


What you will do
Amgen's Clinical Computation Platform Product Team manages a core set of clinical computation solutions that support global clinical development. This team is responsible for building and maintaining systems for clinical data storage, data auditing and security management, and analysis and reporting capabilities. These capabilities are pivotal in Amgen's goal to serve patients. The Principal IS Architect will define the architecture vision, create roadmaps, and support the design and implementation of advanced computational platforms to support clinical development, ensuring that IT strategies align with business goals. The Principal IS Architect will work closely with partners across departments, including CfDA, GDO, CfOR, CfTI, CPMS, and IT teams, to design and implement scalable, reliable, and hard-working solutions.

Responsibilities:
Develop and maintain the enterprise architecture vision and strategy, ensuring alignment with business objectives. Create and maintain architectural roadmaps that guide the evolution of IT systems and capabilities. Establish and enforce architectural standards, policies, and governance frameworks. Evaluate emerging technologies and assess their potential impact on the enterprise/domain/solution architecture. Identify and mitigate architectural risks, ensuring that IT systems are scalable, secure, and resilient. Maintain comprehensive documentation of the architecture, including principles, standards, and models. Drive continuous improvement in the architecture by finding opportunities for innovation and efficiency. Work with partners to gather and analyze requirements, ensuring that solutions meet both business and technical needs. Evaluate and recommend technologies and tools that best fit the solution requirements. Ensure seamless integration between systems and platforms, both within the organization and with external partners. Design systems that can scale to meet growing business needs and performance demands. Develop and maintain logical, physical, and conceptual data models to support business needs. Establish and enforce data standards, governance policies, and best practices. Design and manage metadata structures to enhance information retrieval and usability.

Basic Qualifications:
Master's degree with 8 to 10 years of experience in Computer Science, IT, or a related field; OR Bachelor's degree with 10 to 14 years of experience in Computer Science, IT, or a related field; OR Diploma with 14 to 18 years of experience in Computer Science, IT, or a related field. Proficiency in designing scalable, secure, and cost-effective solutions. Expertise in cloud platforms (AWS, Azure, GCP), data lakes, and data warehouses. Experience in evaluating and selecting technology vendors. Ability to create and demonstrate proof-of-concept solutions to validate technical feasibility. Strong knowledge of the Clinical Research and Development domain. Experience working in agile methodology, including Product Teams and Product Development models.

Preferred Qualifications:
Strong solution design and problem-solving skills. Experience in developing differentiated and deliverable solutions. Ability to analyze client requirements and translate them into solutions. Experience with machine learning and artificial intelligence applications in clinical research. Strong programming skills in languages such as Python, R, or Java.
Experience with DevOps, Continuous Integration, and Continuous Delivery methodologies.

Soft Skills:
Excellent critical-thinking and problem-solving skills. Good communication and collaboration skills. Demonstrated ability to function in a team setting. Demonstrated presentation skills.

Posted 2 weeks ago

Apply

10.0 - 15.0 years

30 - 45 Lacs

Mumbai, Navi Mumbai, Gurugram

Work from Office


Role Description
We are looking for a suitable candidate for a Data/Technical Architect opening in Data Management, preferably one who has worked in the Insurance or Banking and Financial Services domain and holds 10+ years of relevant experience. The candidate should be willing to take up the role of Senior Manager/Associate Director in the organization, based on overall experience.

Location: Mumbai and Gurugram
Relevant experience: 10+ years

Key Responsibilities:
Provide technical leadership on data strategy and roadmap exercises, data architecture definition, business intelligence/data warehouse product selection, design, and implementation for the enterprise. Proven track record of successful implementations of Data Lakes, Data Warehouses/Data Marts, and Data Lakehouses on cloud data platforms. Hands-on experience leading large-scale global data warehousing and analytics projects. Demonstrated industry leadership in the fields of databases, data warehousing, or data sciences. Be accountable for creating the end-to-end solution design and development approach on a cloud platform, including sizing and TCO. Should have deep technical expertise in cloud data components, including but not limited to cloud storage (S3/ADLS/GCS), EMR/Databricks, Redshift/Synapse/BigQuery, Glue/Azure Data Factory/Data Fusion/Dataflow, Cloud Functions, EventBridge, etc. NoSQL understanding and use-case application: DynamoDB, Cosmos DB, or any other technology. Should have worked extensively on creating reusable assets for data integration, transformation, auditing, and validation frameworks. Knowledge of any scripting/programming language: Python, Java, Scala, Go. Implementation and tuning experience with data warehousing platforms, including knowledge of data warehouse schema design, query tuning and optimization, and data migration and integration. Experience with requirements for the analytics presentation layer, including dashboards, reporting, and OLAP. Extensive experience in designing data architecture and in the data modeling, design, development, data migration, and data integration aspects of the SDLC. Participate in and/or lead design sessions, demos and prototype sessions, testing, and training workshops with business users and other IT associates. Should have experience designing new or enhancing existing architecture frameworks and implementing them in a cooperative and collaborative setting. Troubleshooting skills, ability to determine impacts, ability to resolve complex issues, and initiative in stressful situations. Contributed significantly to business development activities. Strong oral and written communication and interpersonal skills. Working experience with Agile and Scrum methods. Develop and maintain documentation as needed. Support projects by providing SME knowledge to project teams in the areas of Enterprise Data Management.

Interested candidates, please share your CVs at mudesh.kumar.tpr@pwc.com

Posted 2 weeks ago

Apply

6.0 - 11.0 years

15 - 27 Lacs

Hyderabad

Hybrid


Job Description for Consultant - Data Engineer

About Us:
Chryselys is a Pharma Analytics & Business consulting company that delivers data-driven insights leveraging AI-powered, cloud-native platforms to achieve high-impact transformations. We specialize in digital technologies and advanced data science techniques that provide strategic and operational insights.

Who we are:
People - Our team of industry veterans, advisors, and senior strategists have diverse backgrounds and have worked at top-tier companies.
Quality - Our goal is to deliver the value of a big-five consulting company without the big-five cost.
Technology - Our solutions are business-centric and built on cloud-native technologies.

Key Responsibilities and Core Competencies:
• You will be responsible for managing and delivering multiple Pharma projects.
• Leading a team of at least 8 members, resolving their technical and business-related problems and other queries.
• Responsible for client interaction: requirements gathering, creating required documents, development, and quality assurance of the deliverables.
• Good collaboration with onshore and senior folks.
• Should have a fair understanding of data capabilities (Data Management, Data Quality, Master and Reference Data).
• Exposure to project management methodologies, including Agile and Waterfall.
• Experience working on RFPs would be a plus.

Required Technical Skills:
• Proficient in Python, PySpark, and SQL.
• Extensive hands-on experience in big data processing and cloud technologies like AWS and Azure services, Databricks, etc.
• Strong experience working with cloud data warehouses like Snowflake, Redshift, Azure, etc.
• Good experience in ETL, data modelling, and building ETL pipelines.
• Conceptual knowledge of relational database technologies, Data Lakes, Lakehouses, etc.
• Sound knowledge of data operations, quality, and data governance.

Preferred Qualifications:
• Bachelor's or Master's in Engineering/MCA or an equivalent degree.
• 6-13 years of experience as a Data Engineer, with at least 2 years managing medium- to large-scale programs.
• Minimum 5 years of Pharma and Life Science domain exposure with IQVIA, Veeva, Symphony, IMS, etc.
• High motivation, good work ethic, maturity, self-organization, and personal initiative.
• Ability to work collaboratively and provide support to the team.
• Excellent written and verbal communication skills.
• Strong analytical and problem-solving skills.

Location
• Preferably Hyderabad, India

Posted 2 weeks ago

Apply

5.0 - 10.0 years

14 - 24 Lacs

Bengaluru

Remote


Detailed job description - Skill Set:
• Strong knowledge of Databricks, including creating scalable ETL (Extract, Transform, Load) processes and data lakes
• Strong knowledge of Python and SQL
• Strong experience with AWS cloud platforms is a must
• Good understanding of data modeling principles and data warehousing concepts
• Strong knowledge of optimizing ETL and batch processing jobs to ensure high performance and efficiency
• Implementing data quality checks, monitoring data pipelines, and ensuring data consistency and security
• Hands-on experience with Databricks features like Unity Catalog

Mandatory Skills: Databricks, AWS

Posted 2 weeks ago

Apply

4.0 - 9.0 years

0 - 1 Lacs

Hyderabad

Work from Office


Job Title: Software Engineer - Data Engineer
Position: Software Engineer
Experience: 4-9 years
Category: Software Development/Engineering
Shift Timings: 1:00 pm to 10:00 pm
Main location: Hyderabad
Work Type: Work from office
Notice Period: 0-30 days
Skills: Python, PySpark, Databricks
Employment Type: Full Time
• Bachelor's in Computer Science, Computer Engineering, or a related field

Required qualifications to be successful in this role

Must have Skills:
• 3+ years of development experience with Spark (PySpark), Python, and SQL.
• Extensive knowledge of building data pipelines.
• Hands-on experience with Databricks development.
• Strong experience developing on Linux OS.
• Experience with scheduling and orchestration (e.g., Databricks Workflows, Airflow, Prefect, Control-M).

Good to have skills:
• Solid understanding of distributed systems, data structures, and design principles.
• Agile development methodologies (e.g., SAFe, Kanban, Scrum).
• Comfortable communicating with teams via showcases/demos.
• Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project.
• Actively migrate use cases from our on-premises data lake to Databricks on GCP.
• Collaborate with Product Management and business partners to understand use case requirements and reporting.
• Adhere to internal development best practices/lifecycle (e.g., testing, code reviews, CI/CD, documentation).
• Document and showcase feature designs/workflows.
• Participate in team meetings and discussions around product development.
• Stay up to date on the latest industry trends and design patterns.
• 3+ years of experience with Git.
• 3+ years of experience with CI/CD (e.g., Azure Pipelines).
• Experience with streaming technologies, such as Kafka and Spark.
• Experience building applications on Docker and Kubernetes.
• Cloud experience (e.g., Azure, Google).

Interested candidates can send their resume to: kalyan.v@talent21.in

Posted 2 weeks ago

Apply

4.0 - 8.0 years

0 - 1 Lacs

Hyderabad

Work from Office


Job Title: Software Engineer - Data Engineer
Position: Software Engineer
Experience: 4-6 years (candidates with less experience will be rejected)
Category: Software Development/Engineering
Shift Timings: 1:00 pm to 10:00 pm
Main location: Hyderabad
Work Type: Work from office
Notice Period: 0-30 days
Skills: Python, PySpark, Databricks
Employment Type: Full Time

• Bachelor's in Computer Science, Computer Engineering or a related field

Required qualifications to be successful in this role

Must-have skills:
• 3+ years of development experience with Spark (PySpark), Python and SQL.
• Extensive knowledge of building data pipelines.
• Hands-on experience with Databricks development.
• Strong experience developing on Linux OS.
• Experience with scheduling and orchestration (e.g. Databricks Workflows, Airflow, Prefect, Control-M).
• 3+ years of experience with Git.
• 3+ years of experience with CI/CD (e.g. Azure Pipelines).

Good-to-have skills:
• Solid understanding of distributed systems, data structures and design principles.
• Agile development methodologies (e.g. SAFe, Kanban, Scrum).
• Comfortable communicating with teams via showcases/demos.
• Experience with streaming technologies such as Kafka and Spark; a short streaming sketch follows this listing.
• Experience building applications on Docker and Kubernetes.
• Cloud experience (e.g. Azure, Google).

Responsibilities:
• Play a key role in establishing and implementing migration patterns for the Data Lake Modernization project.
• Actively migrate use cases from the on-premises data lake to Databricks on GCP.
• Collaborate with Product Management and business partners to understand use case requirements and reporting.
• Adhere to internal development best practices/lifecycle (e.g. testing, code reviews, CI/CD, documentation).
• Document and showcase feature designs/workflows.
• Participate in team meetings and discussions around product development.
• Stay up to date on the latest industry trends and design patterns.

Interested candidates can send their resume to: tarun.k@talent21.in
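As referenced in the streaming bullet above, a short Spark Structured Streaming sketch reading from Kafka might look like this. The broker address, topic name and checkpoint path are hypothetical, and the job assumes the spark-sql-kafka connector package is available on the cluster:

    from pyspark.sql import SparkSession
    from pyspark.sql import functions as F

    spark = SparkSession.builder.appName("kafka-stream").getOrCreate()

    # Subscribe to a hypothetical 'orders' topic.
    events = (spark.readStream
              .format("kafka")
              .option("kafka.bootstrap.servers", "broker:9092")
              .option("subscribe", "orders")
              .load())

    # Kafka delivers key/value as binary; cast the value to string for parsing.
    parsed = events.select(F.col("value").cast("string").alias("payload"))

    # Stream to the console for demonstration; a real job would write to Delta.
    query = (parsed.writeStream
             .format("console")
             .option("checkpointLocation", "/tmp/checkpoints/orders")
             .start())
    query.awaitTermination()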

Posted 2 weeks ago

Apply

4.0 - 9.0 years

4 - 8 Lacs

Noida

Work from Office


Req ID: 313916

We are currently seeking an Alation Admin or MSTR Cloud Admin to join our team in NOIDA, Uttar Pradesh (IN-UP), India (IN).

Alation Admin
Also known as an Alation Data Catalog Administrator. Responsible for managing and maintaining the Alation Data Catalog, a platform that helps organizations discover, understand, and govern their data assets.
1. Platform Administration - Installing, configuring, and maintaining the Alation Data Catalog platform to ensure its optimal performance and reliability.
2. User Management - Managing user access, permissions, and roles within Alation, ensuring proper authentication and authorization for data access.
3. Data Governance - Implementing and enforcing data governance policies, including data classification, data lineage, and data stewardship, to maintain data quality and compliance.
4. Data Catalog Management - Curating and organizing metadata and data assets within the catalog, ensuring accurate and up-to-date information is available to users.
5. Integration - Collaborating with other IT teams to integrate Alation with data sources, databases, data lakes, and other data management systems.
6. Metadata Management - Overseeing the extraction and ingestion of metadata from various data sources into Alation, including data dictionaries, business glossaries, and technical metadata (an illustrative API sketch follows this section).
7. Security - Implementing and maintaining security measures, such as encryption, access controls, and auditing, to protect sensitive data and catalog information.
8. Training and Support - Providing training to users on how to effectively use the Alation Data Catalog and offering support for catalog-related inquiries and issues.
9. Data Discovery - Assisting users in discovering and accessing data assets within the catalog, promoting self-service data discovery.
10. Collaboration - Collaborating with data owners, data stewards, and data users to understand their data needs and ensure the catalog meets those requirements.
11. Performance Monitoring - Monitoring the performance of the Alation Data Catalog platform, identifying and resolving issues to ensure optimal functionality.
12. Upgrades and Maintenance - Planning and executing platform upgrades and applying patches to stay up to date with Alation releases.
13. Documentation - Maintaining documentation for catalog configurations, processes, and best practices.
14. Reporting and Analytics - Generating reports and insights from Alation to track data usage, data lineage, and user activity.
15. Data Quality - Monitoring and improving data quality within the catalog and assisting in data quality initiatives.
16. Stay Current - Staying informed about Alation updates, new features, and industry best practices in data catalog administration.

An Alation Admin plays a critical role in enabling organizations to effectively manage their data assets, foster data collaboration, and ensure data governance and compliance across the enterprise.
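As referenced in the metadata management item above, catalog administration often involves Alation's REST API. The sketch below uses Python's requests library; the instance URL, endpoint path and token header shown are assumptions to be verified against the Alation API documentation for your version:

    import requests

    BASE_URL = "https://alation.example.com"       # hypothetical Alation instance
    API_TOKEN = "replace-with-api-access-token"    # obtained via Alation's token flow

    # Assumed endpoint for listing configured data sources; confirm in the docs.
    resp = requests.get(
        f"{BASE_URL}/integration/v1/datasource/",
        headers={"TOKEN": API_TOKEN},              # assumed header name
        timeout=30,
    )
    resp.raise_for_status()
    for ds in resp.json():
        print(ds.get("title"), ds.get("dbtype"))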
---------------------------

MicroStrategy Cloud Admin
Minimum 4+ years of MSTR administration with the following core attributes:
• Hands-on maintenance and administration experience with the MicroStrategy 10.x Business Intelligence product suite and the AWS Cloud platform
• Experience with enterprise portal integration, mobile integration, write-back to source data based on analysis by business users, and alerts via mail or mobile based on pre-defined events
• Ability to define and review complex metrics
• Ability to architect MSTR cubes for solving complex business problems
• Good conceptual knowledge and working experience of metadata creation (framework models, universes, etc.), creating report specifications, integration test planning and testing, unit test planning and testing, UAT and implementation support
• Strong knowledge of quality processes: SDLC, reviews, testing, configuration management, release management, defect prevention
• Knowledge of databases is essential, with the ability to review SQL passes and make decisions based on query timings
• Good experience with MicroStrategy upgrades and configuration on Linux
• Hands-on experience creating MSTR deployment packages and Command Manager scripts; able to set up and maintain proper object and data security
• Working experience configuring, maintaining, and administering multiple environments
• Open to working in different shifts as per project need
• Excellent communication skills (written, verbal, teamwork and issue resolution)

Activities:
• Provide support for MicroStrategy's Business Intelligence product suite, the AWS Cloud platform, and their underlying technologies
• Use your strong communication skills
• Resolve application and infrastructure situations as they arise
• Perform day-to-day management of the MicroStrategy Cloud infrastructure (on AWS), including alert monitoring/remediation, change management, incident troubleshooting and resolution
• Participate in scheduled and emergency infrastructure maintenance activities
• Collaborate and communicate effectively with peers and internal application and software development teams
• Maintain high-quality documentation for all related tasks
• Work in a strong team environment
• Independently manage the production MSTR environment (and associated lower environments such as Dev and UAT)
• Manage upgrades and vulnerabilities

Posted 2 weeks ago

Apply

6.0 - 10.0 years

25 - 30 Lacs

Pune, Bengaluru, Mumbai (All Areas)

Hybrid


Minimum of 6 years of data engineering experience.
Must be an expert in SQL, Data Lake, Azure Data Factory, Azure Synapse, ETL and Databricks.
Must be an expert in data modeling and writing complex queries in SQL.
Ability to convert SQL code to PySpark (see the sketch after this listing).

Required Candidate profile:
Experience with SQL, Python, data modelling, data warehousing and dimensional modelling concepts.
Familiarity with data governance, data security and production deployments using Azure DevOps CI/CD pipelines.
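As referenced above, here is a hedged example of converting a SQL aggregate to its PySpark equivalent; the orders table and its columns are hypothetical:

    # SQL version:
    #   SELECT region, SUM(amount) AS total_amount
    #   FROM orders
    #   WHERE status = 'SHIPPED'
    #   GROUP BY region;

    from pyspark.sql import functions as F

    orders = spark.table("orders")  # assumes an active Spark session named spark

    result = (orders
              .filter(F.col("status") == "SHIPPED")
              .groupBy("region")
              .agg(F.sum("amount").alias("total_amount")))
    result.show()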

Posted 2 weeks ago

Apply