
99 Data Lakes Jobs - Page 2

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 12.0 years

0 Lacs

Karnataka

On-site

At PwC, our data and analytics team applies data to drive insights and guide strategic business decisions. Using advanced analytics techniques, we help clients optimize operations and achieve their goals. As a member of our data analysis team, you will specialize in leveraging sophisticated analytical methods to extract valuable insights from large datasets, enabling data-driven decision-making. Your role will involve applying skills in data manipulation, visualization, and statistical modeling to help clients resolve complex business challenges.

We are seeking a visionary Generative AI Architect at the Manager level to join PwC US - Acceleration Center. In this leadership position, you will design and implement cutting-edge Generative AI solutions using technologies such as Azure OpenAI Service, GPT models, and multi-agent frameworks. You will drive innovation through scalable cloud architectures, optimize AI infrastructure, and lead cross-functional teams in deploying transformative AI solutions. The ideal candidate has deep expertise in Generative AI technologies, data engineering, Agentic AI, and cloud platforms such as Microsoft Azure, with a strong emphasis on operational excellence and ethical AI practices.

Responsibilities:
- **Architecture Design:** Design and implement scalable, secure, and high-performance architectures for Generative AI applications. Integrate Generative AI models into existing platforms and lead the development of AI agents capable of orchestrating multi-step tasks (see the sketch below).
- **Model Development and Deployment:** Fine-tune pre-trained generative models, develop data collection and preparation strategies, and deploy appropriate Generative AI frameworks.
- **Innovation and Strategy:** Stay current on the latest Generative AI advancements, recommend innovative applications, and define and execute AI strategy roadmaps.
- **Collaboration and Leadership:** Collaborate with cross-functional teams, mentor team members, and lead a team of data scientists, GenAI engineers, DevOps engineers, and software developers.
- **Performance Optimization:** Monitor and optimize the performance of AI models, agents, and systems to ensure robustness and accuracy, and optimize computational costs and infrastructure utilization.
- **Ethical and Responsible AI:** Ensure compliance with ethical AI practices, data privacy regulations, and governance frameworks, and implement safeguards against bias and misuse.

Requirements:
- Bachelor's or Master's degree in Computer Science, Data Science, or a related field.
- 8+ years of relevant technical/technology experience, with expertise in GenAI projects.
- Advanced programming skills in Python and fluency in data processing frameworks like Apache Spark.
- Experience with GenAI foundational models and open-source models.
- Proficiency in system design for agentic architectures and real-time data processing systems.
- Familiarity with cloud computing platforms and containerization technologies.
- Strong leadership, problem-solving, and analytical abilities.
- Excellent communication and collaboration skills.

Nice-to-Have Skills:
- Experience with technologies like Datadog and Splunk.
- Familiarity with emerging Model Context Protocols and dynamic tool integration.
- Relevant solution architecture certificates and continuous professional development in data engineering and GenAI.

Professional and Educational Background: BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA / Any Degree.
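A minimal sketch of the kind of multi-step agent loop described above, assuming the openai>=1.x Python SDK against Azure OpenAI Service; the deployment name, API version string, and "DONE" stopping convention are illustrative assumptions, not the actual stack.

```python
# Hedged sketch of a multi-step agent loop on Azure OpenAI Service.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",  # assumed version string
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],
)

def run_agent(task: str, max_steps: int = 3) -> str:
    """Ask the model to plan a task, then iterate until it declares DONE."""
    messages = [
        {"role": "system",
         "content": "Break the task into steps and solve them. Say DONE when finished."},
        {"role": "user", "content": task},
    ]
    reply = ""
    for _ in range(max_steps):
        response = client.chat.completions.create(
            model="gpt-4o",  # hypothetical deployment name
            messages=messages,
        )
        reply = response.choices[0].message.content or ""
        messages.append({"role": "assistant", "content": reply})
        if "DONE" in reply:
            break
        messages.append({"role": "user", "content": "Continue with the next step."})
    return reply
```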

Posted 1 week ago


12.0 - 22.0 years

25 - 32 Lacs

Chennai, Bengaluru

Work from Office

Technical Manager, Data Engineering
Location: Chennai/Bangalore | Experience: 15+ years | Employment Type: Full-time

Role Description: We are looking for a seasoned Technical Manager to lead our Data Engineering function. This role demands a deep understanding of data architecture, pipeline development, and data infrastructure. The ideal candidate will be a thought leader in the data engineering space, capable of guiding and mentoring a team, collaborating effectively with various business units, and driving the adoption of cutting-edge tools and technologies to build robust, scalable, and efficient data solutions.

Responsibilities:
- Define and champion the strategic direction for data engineering, staying abreast of industry trends and emerging technologies.
- Lead, mentor, and develop a high-performing team of data engineers, fostering a culture of technical excellence, innovation, and continuous learning.
- Design, implement, and maintain scalable, reliable, and secure data pipelines and infrastructure; ensure data quality, integrity, and accessibility.
- Oversee the end-to-end delivery of data engineering projects, ensuring timely completion, adherence to best practices, and alignment with business objectives.
- Partner closely with pre-sales, sales, marketing, Business Intelligence, Data Science, and other departments to understand data needs, propose solutions, and support resource deployment for active data projects.
- Evaluate, recommend, and implement new data engineering tools, platforms, and methodologies to enhance capabilities and efficiency.
- Identify and address performance bottlenecks in data systems, ensuring optimal data processing and storage.

Tools & Technologies:
- Cloud Platforms: AWS (S3, Glue, EMR, Redshift, Athena, Lambda, Kinesis); Azure (Data Lake Storage, Data Factory, Databricks, Synapse Analytics); Google Cloud Platform (Cloud Storage, Dataflow, Dataproc, BigQuery).
- Big Data Frameworks: Apache Spark, Apache Flink, Apache Kafka, HDFS.
- Data Warehousing/Lakes: Snowflake, Databricks Lakehouse, Google BigQuery, Amazon Redshift, Azure Synapse Analytics.
- ETL/ELT Tools: Apache Airflow, Talend, Informatica, dbt, Fivetran, Stitch.
- Data Modeling: Star Schema, Snowflake Schema, Data Vault.
- Databases: PostgreSQL, MySQL, MongoDB, Cassandra, DynamoDB.
- Programming Languages: Python (Pandas, PySpark), Scala, Java.
- Containerization/Orchestration: Docker, Kubernetes.
- Version Control: Git.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

This is a full-time Data Engineer position with D Square Consulting Services Pvt Ltd, based pan-India with a hybrid work model. You should have at least 5 years of experience and be able to join immediately. As a Data Engineer, you will design, build, and scale data pipelines and backend services supporting analytics and business intelligence platforms. A strong technical foundation, Python expertise, API development experience, and familiarity with containerized, CI/CD-driven workflows are essential for this role.

Key responsibilities:
- Design, implement, and optimize data pipelines and ETL workflows using Python tools.
- Build RESTful and/or GraphQL APIs and collaborate with cross-functional teams (see the sketch below).
- Containerize data services with Docker and manage deployments with Kubernetes.
- Develop CI/CD pipelines using GitHub Actions, ensure code quality, and optimize data access and transformation.

Required skills and qualifications:
- Bachelor's or Master's degree in Computer Science or a related field.
- 5+ years of hands-on experience in data engineering or backend development.
- Expert-level Python skills and experience building APIs with frameworks like FastAPI, Graphene, or Strawberry.
- Proficiency in Docker, Kubernetes, SQL, and data modeling.
- Good communication skills and familiarity with data orchestration tools.
- Experience with streaming data platforms like Kafka or Spark.
- Knowledge of data governance, security, and observability best practices, and exposure to cloud platforms like AWS, GCP, or Azure.

If you are proactive, self-driven, and possess the required technical skills, this Data Engineer position is an exciting opportunity to contribute to cutting-edge data solutions at D Square Consulting Services Pvt Ltd.
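A minimal sketch of the kind of FastAPI data-service endpoint the posting describes, assuming the fastapi and pydantic packages; the metric store and endpoint shape are illustrative placeholders.

```python
# Minimal FastAPI data-service sketch.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="metrics-service")

class Metric(BaseModel):
    name: str
    value: float

# Stand-in for a real warehouse/lake query layer.
FAKE_STORE = {"daily_active_users": 1532.0, "error_rate": 0.02}

@app.get("/metrics/{name}", response_model=Metric)
def read_metric(name: str) -> Metric:
    """Return a single metric, or 404 if unknown."""
    if name not in FAKE_STORE:
        raise HTTPException(status_code=404, detail="metric not found")
    return Metric(name=name, value=FAKE_STORE[name])

# Run locally with: uvicorn app:app --reload
```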

Posted 1 week ago


3.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As an Enterprise Architect & AI Expert, you will define and maintain the enterprise architecture framework, standards, and governance. You will align IT strategy with business goals to ensure architectural integrity across systems and platforms. Leading the development of roadmaps for cloud, data, application, and infrastructure architectures will be a key responsibility, as will evaluating and selecting technologies, platforms, and tools that align with enterprise goals.

You will design and implement AI/ML solutions to solve complex business problems and lead AI initiatives such as NLP, computer vision, predictive analytics, and generative AI. Collaborating with data scientists, engineers, and business stakeholders to deploy AI models at scale will be essential, as will ensuring ethical AI practices, data governance, and compliance with regulatory standards.

In terms of leadership and collaboration, you will act as a strategic advisor to senior leadership on technology trends and innovation, mentor cross-functional teams, promote architectural best practices, and facilitate enterprise-wide workshops and architecture review boards.

To qualify, you should have a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, and 14+ years of experience in enterprise architecture, with at least 3 years in AI/ML domains. Proven experience with cloud platforms such as AWS, Azure, and GCP, microservices, and API management is required, along with strong knowledge of AI/ML frameworks like TensorFlow, PyTorch, and Scikit-learn, and MLOps practices. Familiarity with data architecture, data lakes, and real-time analytics platforms is also expected, together with excellent communication, leadership, and stakeholder management skills.

Mandatory skills include experience designing GenAI and RAG architectures (see the sketch below) and familiarity with AWS, a vector database (Milvus preferred), OpenAI or Claude, LangChain, and LlamaIndex.

Thank you for considering this opportunity. - Siva
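A framework-agnostic sketch of the RAG pattern the posting names: embed documents, retrieve by similarity, and assemble a grounded prompt. The embed() stub stands in for a real embedding model, and the in-memory index stands in for a vector database such as Milvus; corpus and query are illustrative only.

```python
# Framework-agnostic RAG sketch (pure numpy).
import numpy as np

def embed(text: str) -> np.ndarray:
    """Placeholder embedding: hash bytes into a fixed-size unit vector."""
    vec = np.zeros(64)
    for i, ch in enumerate(text.encode()):
        vec[i % 64] += ch
    return vec / (np.linalg.norm(vec) or 1.0)

CORPUS = [
    "Milvus is a vector database for similarity search.",
    "RAG grounds LLM answers in retrieved documents.",
    "LangChain and LlamaIndex orchestrate retrieval pipelines.",
]
INDEX = np.stack([embed(d) for d in CORPUS])  # in practice: a vector DB

def retrieve(query: str, k: int = 2) -> list[str]:
    scores = INDEX @ embed(query)  # cosine similarity (unit-norm vectors)
    return [CORPUS[i] for i in np.argsort(scores)[::-1][:k]]

def build_prompt(query: str) -> str:
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("What does RAG do?"))
```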

Posted 1 week ago


3.0 - 7.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You are a skilled and motivated AI Developer with 3+ years of hands-on experience in building, deploying, and optimizing AI/ML models. Strong proficiency in Python, Scikit-learn, and machine learning algorithms is required, and practical experience with Azure AI services, Azure AI Foundry, Copilot Studio, and Dataverse is mandatory. You will design intelligent solutions using modern deep learning and neural network architectures, integrated into scalable cloud-based environments.

Your key responsibilities will include using Azure AI Foundry and Copilot Studio to build AI-driven solutions that can be embedded within enterprise workflows. You will design, develop, and implement AI/ML models using Python, Scikit-learn, and modern deep learning frameworks (see the sketch below), and build and optimize predictive models using structured and unstructured data from data lakes and other enterprise sources. Collaborating with data engineers to process and transform data pipelines across Azure-based environments, you will develop and integrate applications with Microsoft Dataverse for intelligent business process automation. Applying best practices in data structures and algorithm design, you will ensure high performance and scalability of AI applications. Your role will involve training, testing, and deploying machine learning, deep learning, and neural network models in production environments. You will also be responsible for model governance, performance monitoring, and continuous learning using Azure MLOps pipelines, and will collaborate cross-functionally with data scientists, product teams, and cloud architects to drive AI innovation within the organization.

As a qualified candidate, you hold a Bachelor's or Master's degree in Computer Science, Data Science, AI/ML, or a related field. With 3+ years of hands-on experience in AI/ML development, you possess practical experience with Copilot Studio and Microsoft Dataverse integrations. Expertise in Microsoft Azure is essential, particularly with services such as Azure Machine Learning, Azure Data Lake, and Azure AI Foundry. Proficiency in Python and machine learning libraries like Scikit-learn, Pandas, and NumPy is required, along with a solid understanding of data structures, algorithms, and object-oriented programming, and experience with data lakes, data pipelines, and large-scale data processing. A deep understanding of neural networks, deep learning frameworks (e.g., TensorFlow, PyTorch), and model tuning will be valuable, and familiarity with MLOps practices and lifecycle management on cloud platforms is beneficial. Strong problem-solving abilities, communication skills, and team collaboration round out the profile.

Preferred qualifications include an Azure AI or Data Engineering certification, experience deploying AI-powered applications in enterprise or SaaS environments, knowledge of generative AI and large language models (LLMs), and exposure to REST APIs, CI/CD pipelines, and version control systems like Git.
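A minimal Scikit-learn sketch of the train/evaluate/persist loop such a role involves; the synthetic dataset, model choice, and file name are illustrative assumptions.

```python
# Train, evaluate, and persist a classifier with Scikit-learn.
import joblib
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(f"holdout accuracy: {accuracy_score(y_test, model.predict(X_test)):.3f}")

# Persist for deployment (e.g., behind an Azure ML endpoint).
joblib.dump(model, "model.joblib")
```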

Posted 1 week ago


14.0 - 18.0 years

0 Lacs

Karnataka

On-site

The AVP Databricks Squad Delivery Lead position is open to candidates with 14+ years of experience in Bangalore, Hyderabad, NCR, Kolkata, Mumbai, or Pune. As the Databricks Squad Delivery Lead, you will oversee project delivery, team leadership, architecture reviews, and client engagement. Your role will involve optimizing Databricks implementations across cloud platforms such as AWS, Azure, and GCP while leading cross-functional teams.

You will lead and manage end-to-end delivery of Databricks-based solutions, serving as a subject matter expert in Databricks architecture, implementation, and optimization. Collaboration with architects and engineers to design scalable data pipelines and analytics platforms will be a key aspect of your responsibilities. Additionally, you will oversee Databricks workspace setup, performance tuning, and cost optimization, while acting as the primary point of contact for client stakeholders. Driving innovation through best practices, tools, and technologies, and ensuring alignment between business goals and technical solutions, will also be part of your duties.

The ideal candidate holds a Bachelor's degree in Computer Science, Engineering, or equivalent (Master's or MBA preferred), along with hands-on experience delivering data engineering/analytics projects using Databricks. Experience managing cloud-based data pipelines on AWS, Azure, or GCP, strong leadership skills, and effective client-facing communication are essential. Preferred skills include proficiency with Spark, Delta Lake, MLflow, and distributed computing; expertise in data engineering concepts such as ETL, data lakes, and data warehousing; and certifications in Databricks or cloud platforms (AWS/Azure/GCP). An Agile/Scrum or PMP certification is an added advantage.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Kolkata, West Bengal

On-site

Genpact is a global professional services and solutions firm dedicated to delivering outcomes that shape the future. With over 125,000 employees across 30+ countries, we are motivated by curiosity, agility, and the desire to create enduring value for our clients. We are driven by our purpose: the relentless pursuit of a world that works better for people. We serve and transform leading enterprises, including the Fortune Global 500, leveraging our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently inviting applications for the position of Assistant Vice President, Databricks Squad Delivery Lead. As the Databricks Delivery Lead, you will oversee the complete delivery of Databricks-based solutions for our clients, ensuring the successful implementation, optimization, and scaling of big data and analytics solutions. You will play a crucial role in promoting the adoption of Databricks as the preferred platform for data engineering and analytics while managing a diverse team of data engineers and developers.

Your key responsibilities will include:
- Leading and managing Databricks-based project delivery, ensuring that all solutions adhere to client requirements, best practices, and industry standards.
- Serving as the subject matter expert (SME) on Databricks, offering guidance to teams on architecture, implementation, and optimization.
- Collaborating with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads.
- Acting as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery.
- Maintaining effective communication with stakeholders, providing regular updates on project status, risks, and achievements.
- Overseeing the setup, deployment, and optimization of Databricks workspaces, clusters, and pipelines.
- Ensuring that Databricks solutions are optimized for cost and performance, using best practices for data storage, processing, and querying.
- Continuously evaluating the effectiveness of the Databricks platform and processes, and proposing improvements or new features to enhance delivery efficiency and effectiveness.
- Driving innovation within the team by introducing new tools, technologies, and best practices to improve delivery quality.

Minimum qualifications/skills:
- Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred).
- Relevant years of experience in IT services with a specific focus on Databricks and cloud-based data engineering.

Preferred qualifications/skills:
- Demonstrated experience leading end-to-end delivery of data engineering or analytics solutions on Databricks.
- Strong expertise in cloud technologies (AWS, Azure, GCP), data pipelines, and big data tools.
- Hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies.
- Proficiency in data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing.

Preferred certifications:
- Databricks Certified Associate or Professional.
- Cloud certifications (AWS Certified Solutions Architect, Azure Data Engineer, or equivalent).
- Certifications in data engineering, big data technologies, or project management (e.g., PMP, Scrum Master).

If you are passionate about driving innovation, leading a high-performing team, and shaping the future of data engineering and analytics, we welcome you to apply for this exciting opportunity as Assistant Vice President, Databricks Squad Delivery Lead at Genpact.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Lead Software Engineer at JPMorgan Chase within the Consumer & Community Banking team, you play a crucial role in an agile team dedicated to enhancing, building, and delivering trusted, market-leading technology products with a focus on security, stability, and scalability. Your responsibilities include executing creative software solutions and designing, developing, and troubleshooting technical issues with an innovative mindset. You are expected to develop high-quality, secure production code, review and debug code written by team members, and identify opportunities to automate remediation processes for enhanced operational stability.

In this role, you will lead evaluation sessions with external vendors, startups, and internal teams to assess architectural designs and technical applicability within existing systems. Additionally, you will drive awareness and adoption of new technologies within software engineering communities, contributing to a diverse, inclusive, and respectful team culture.

To excel in this position, you should possess formal training or certification in software engineering concepts along with at least 5 years of practical experience. Strong proficiency in database systems, including SQL and NoSQL, and programming languages like Python, Java, or Scala is essential. Experience in data architecture, data modeling, data warehousing, and data lakes, as well as implementing complex ETL transformations on big data platforms, will be beneficial. Proficiency in the software development life cycle and agile practices such as CI/CD, application resiliency, and security is required.

An ideal candidate will have hands-on experience with software applications and technical processes within a specific discipline (e.g., cloud, artificial intelligence, machine learning) and a background in the financial services industry. Practical experience with cloud-native technologies is highly desirable, and additional Java and data programming experience is considered a plus.

Posted 1 week ago


5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a PySpark Data Engineer, you will play a crucial role in developing robust data processing and transformation solutions within our data platform. Your responsibilities will include designing, implementing, and maintaining PySpark-based applications to handle complex data processing tasks, ensuring data quality, and integrating with diverse data sources. To excel in this role, you should possess strong PySpark development skills, experience with big data technologies, and the ability to thrive in a fast-paced, data-driven environment.

Your primary responsibilities will involve designing, developing, and testing PySpark-based applications to process, transform, and analyze large-scale datasets from sources such as relational databases, NoSQL databases, batch files, and real-time data streams. You will implement efficient data transformation and aggregation techniques using PySpark and relevant big data frameworks, and develop robust error handling and exception management mechanisms to maintain data integrity and system resilience within Spark jobs. Optimizing PySpark jobs for performance through techniques such as partitioning, caching, and tuning of Spark configurations is also essential (see the sketch below).

Collaboration will be key in this role: you will work closely with data analysts, data scientists, and data architects to understand data processing requirements and deliver high-quality data solutions. By analyzing and interpreting data structures, formats, and relationships, you will implement effective data transformations using PySpark and work with distributed datasets in Spark to ensure optimal performance for large-scale data processing and analytics.

In terms of data integration and ETL processes, you will design and implement ETL (Extract, Transform, Load) processes to ingest and integrate data from various sources, ensuring consistency, accuracy, and performance, and you will integrate PySpark applications with data sources such as SQL databases, NoSQL databases, data lakes, and streaming platforms.

To qualify, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 5+ years of hands-on experience in big data development, preferably with exposure to data-intensive applications. A strong understanding of data processing principles, techniques, and best practices in a big data environment is essential, as is proficiency in PySpark, Apache Spark, and related big data technologies for data processing, analysis, and integration. Experience with ETL development and data pipeline orchestration tools such as Apache Airflow and Luigi is advantageous. Strong analytical and problem-solving skills, along with excellent communication and collaboration abilities, are also critical for success.
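A minimal PySpark sketch of the optimization techniques named above: partitioning, caching, and tuning Spark configurations. Paths, column names, and the shuffle-partition setting are illustrative assumptions.

```python
# PySpark optimization sketch: early pruning/filtering, caching, partitioning.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder.appName("etl-sketch")
    .config("spark.sql.shuffle.partitions", "200")  # tune for cluster size
    .getOrCreate()
)

# Select and filter early: for Parquet, column pruning and predicate
# pushdown then skip work at the source.
events = (
    spark.read.parquet("s3://bucket/raw/events/")  # hypothetical path
    .select("user_id", "event_type", "event_date", "amount")
    .filter(F.col("event_date") >= "2024-01-01")
)

# Cache a dataset reused by several aggregations.
events.cache()

daily = events.groupBy("event_date").agg(F.sum("amount").alias("total"))
by_type = events.groupBy("event_type").count()

# Partition output by the common query key so downstream reads prune files.
daily.repartition("event_date").write.mode("overwrite").partitionBy(
    "event_date"
).parquet("s3://bucket/curated/daily/")
```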

Posted 1 week ago


10.0 - 14.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

We are looking for a highly motivated and experienced Data and Analytics Senior Architect to lead our Master Data Management (MDM) and Data Analytics team. As the Data and Analytics Architect Lead, you will define and implement the overall data architecture strategy to ensure alignment with business goals and support data-driven decision-making. Your role will involve designing scalable, secure, and efficient data systems, including databases, data lakes, and data warehouses. You will evaluate and recommend tools and technologies for data integration, processing, storage, and analytics while staying updated on industry trends.

You will lead a high-performing team, fostering a collaborative and innovative culture, and ensuring data integrity, consistency, and availability across the organization. You will manage the existing MDM solution and data platform, built on Microsoft Data Lake Gen 2, Snowflake as the data warehouse, and Power BI, managing data from core applications. Additionally, you will drive further development to handle additional data and capabilities to support our AI journey. The ideal candidate possesses strong leadership skills, a deep understanding of data management and technology principles, and the ability to collaborate effectively across departments and functions.

**Principal Duties and Responsibilities:**

**Team Leadership:**
- Lead, mentor, and develop a high-performing team of data analysts and MDM specialists.
- Foster a collaborative and innovative team culture that encourages continuous improvement and efficiency.
- Provide technical leadership and guidance to the development teams and oversee the implementation of IT solutions.

**Architecture:**
- Define the overall data architecture strategy, aligning it with business goals and ensuring it supports data-driven decision-making.
- Identify, evaluate, and establish shared enabling technical capabilities for the division in collaboration with IT to ensure consistency, quality, and business value.
- Design and oversee the implementation of data systems, including databases, data lakes, and data warehouses, ensuring they are scalable, secure, efficient, and cost-effective.
- Evaluate and recommend tools and technologies for data integration, processing, storage, and analytics, staying updated on industry trends.

**Strategic Planning:**
- Develop and implement the MDM and analytics strategy aligned with the overall team and organizational goals.
- Work with the enterprise architect to align on the overall strategy and application landscape, ensuring MDM and data analytics fit into the ecosystem.
- Identify opportunities to enhance data quality, governance, and analytics capabilities.

**Project Management:**
- Oversee project planning, execution, and delivery to ensure timely and successful completion of initiatives.
- Monitor project progress and cost, identify risks, and implement mitigation strategies.

**Stakeholder Engagement:**
- Collaborate with cross-functional teams to understand data needs and deliver solutions that support business objectives.
- Serve as a key point of contact for data-related inquiries and support requests.
- Develop business cases and proposals for IT investments and present them to senior management and stakeholders.

**Data/Information Governance:**
- Establish and enforce data/information governance policies and standards to ensure compliance and data integrity.
- Champion best practices in data management and analytics across the organization.

**Reporting and Analysis:**
- Use data analytics to derive insights and support decision-making processes.
- Document and present findings and recommendations to senior management.

**Knowledge, Skills and Abilities Required:**
- Bachelor's degree in Computer Science, Data Science, Information Management, or a related field; Master's degree preferred.
- 10+ years of experience in data management, analytics, or a related field, with at least 2 years in a leadership role.
- Strong knowledge of master data management concepts, data governance, data technology, and analytics tools.
- Proficiency in data modeling, ETL processes, database management, big data technologies, and data integration techniques.
- Excellent project management skills with a proven track record of delivering complex projects on time and within budget.
- Strong analytical, problem-solving, and decision-making abilities.
- Exceptional communication and interpersonal skills.
- Team player, result-oriented, structured, with attention to detail and a strong work ethic.

**Special Competencies Required:**
- Proven leader with excellent structural skills, good at documenting and presenting.
- Strong executional skills to make things happen, not just generate ideas.
- Experience working with analytics tools and data ingestion platforms.
- Experience working with MDM solutions, preferably TIBCO EBX.
- Experience working with Jira/Confluence.

**Additional Information:**
- Office, remote, or hybrid working.
- Ability to function within variable time zones.
- International travel may be required.

Posted 1 week ago


10.0 - 14.0 years

0 Lacs

Kolkata, West Bengal

On-site

You are a highly skilled and strategic Data Architect with deep expertise in the Azure data ecosystem. Your role will involve defining and driving the overall Azure-based data architecture strategy, aligned with enterprise goals. You will architect and implement scalable data pipelines, data lakes, and data warehouses using Azure Data Lake, Azure Data Factory (ADF), and Azure SQL/Synapse, and provide technical leadership on Azure Databricks for large-scale data processing and advanced analytics use cases. Integrating AI/ML models into data pipelines and supporting the end-to-end ML lifecycle, including training, deployment, and monitoring, will be part of your day-to-day tasks.

Collaboration with cross-functional teams such as data scientists, DevOps engineers, and business analysts is essential. You will evaluate and recommend tools, platforms, and design patterns for data and ML infrastructure while mentoring data engineers and junior architects on best practices and architectural standards. The role requires a strong background in data modeling, ETL/ELT frameworks, and data warehousing concepts; proficiency in SQL, Python, and PySpark; a solid understanding of AI/ML workflows and tools; exposure to Azure DevOps; and excellent communication and stakeholder management skills.

As a Data Architect at Lexmark, you will play a vital role in designing and overseeing robust, scalable, and secure data architectures to support advanced analytics and machine learning workloads. If you are an innovator looking to make your mark with a global technology leader, apply now to join our team in Kolkata, India.

Posted 2 weeks ago


14.0 - 18.0 years

0 Lacs

Pune, Maharashtra

On-site

We are hiring for the position of AVP - Databricks, requiring a minimum of 14 years of experience. The role is based in Bangalore, Hyderabad, NCR, Kolkata, Mumbai, or Pune.

As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure solutions are designed, developed, and implemented according to client requirements and industry standards. You will act as the subject matter expert on Databricks, providing guidance on architecture, implementation, and optimization to teams. Collaboration with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads is also a key aspect of the role, and you will serve as the primary point of contact for clients to ensure alignment between business requirements and technical delivery.

We seek a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred), along with relevant years of experience in IT services with a specific focus on Databricks and cloud-based data engineering. Preferred qualifications include proven experience leading end-to-end delivery, solutioning, and architecture of data engineering or analytics solutions on Databricks; strong experience with cloud technologies such as AWS, Azure, and GCP, data pipelines, and big data tools; hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies; and expertise in data engineering concepts including ETL, data lakes, data warehousing, and distributed computing.

Posted 2 weeks ago


5.0 - 9.0 years

0 Lacs

Thane, Maharashtra

On-site

As the BI/BW Lead at DMart, you will lead and manage a dedicated SAP BW team to ensure the timely delivery of reports, dashboards, and analytics solutions. Your role will involve managing the team effectively and overseeing all SAP BW operational support tasks and development projects with a focus on high quality and efficiency. You will be responsible for maintaining the stability and performance of the SAP BW environment, managing daily support activities, and ensuring seamless data flow and reporting across the organization. Acting as the bridge between business stakeholders and your technical team, you will play a crucial role in enhancing DMart's data ecosystem.

You should possess a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. SAP BW certifications are preferred but not mandatory.

Key Responsibilities:
- Lead and manage the SAP BW & BOBJ team, ensuring efficient workload distribution and timely task completion.
- Oversee the daily operational support of the SAP BW & BOBJ environment to maintain stability and performance.
- Provide direction and guidance to the team for issue resolution, data loads, and reporting accuracy.
- Serve as the primary point of contact for business users and internal teams regarding SAP BW support and enhancements.
- Ensure the team follows best practices in monitoring, error handling, and performance optimization.
- Drive continuous improvement of support processes, tools, and methodologies.
- Proactively identify risks and bottlenecks in data flows and take corrective actions.
- Ensure timely delivery of data extracts, reports, and dashboards for critical business decisions.
- Provide leadership in system upgrades, patching, and data model improvements.
- Facilitate knowledge sharing and skill development within the SAP BW team.
- Maintain high standards of data integrity and security in the BW environment.

Professional Skills:
- Strong functional and technical understanding of SAP BW / BW on HANA & BOBJ.
- At least 5 years of working experience with SAP Analytics.
- Solid knowledge of ETL processes and data extraction.
- Experience with data lakes such as Snowflake, BigQuery, and Databricks, and dashboard tools like Power BI and Tableau, is advantageous.
- Experience in Retail, CPG, or SCM is a plus.
- Experience managing SAP BW support activities and coordinating issue resolution.
- Strong stakeholder management skills with the ability to translate business needs into technical actions.
- Excellent problem-solving and decision-making abilities under pressure.

Posted 2 weeks ago


7.0 - 14.0 years

2 - 6 Lacs

Remote, India

On-site

Job Description

Role and Responsibilities:
- Handle day-to-day administration tasks such as configuring clusters and workspaces; monitor platform health, troubleshoot issues, and perform routine maintenance and upgrades.
- Evaluate new features and enhancements introduced by Databricks from a security, compliance, and manageability perspective.
- Implement and maintain security controls to protect the Databricks platform and the data within it; collaborate with the security team to ensure compliance with data privacy and regulatory requirements.
- Develop and enforce governance policies and practices, including access management, data retention, and data classification.
- Optimize the platform's performance by monitoring resource utilization, identifying and resolving bottlenecks, and fine-tuning configurations.
- Collaborate with infrastructure and engineering teams to ensure the platform meets scalability and availability requirements.
- Work closely with data analysts, data scientists, and other users to understand their requirements and provide technical support.
- Automate platform deployment, configuration, and monitoring processes using scripting languages and automation tools (see the sketch below).
- Collaborate with the DevOps team to integrate the Databricks platform into the overall infrastructure and CI/CD pipelines.

What we look for:
- 7+ years of experience with big data technologies such as Apache Spark, cloud-native data lakes, and data mesh platforms, in a technical architecture or consulting role.
- Strong experience administering and managing Databricks or other big data platforms on AWS.
- Python programming skills in technical areas that support deployment and integration of Databricks-based solutions.
- Understanding of the latest services offered by Databricks, the ability to evaluate those services, and an understanding of how they fit into the platform.
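A hedged sketch of automating one routine health check through the Databricks REST API, assuming the documented GET /api/2.0/clusters/list endpoint and a personal access token; host, token, and the state convention are illustrative assumptions.

```python
# Scripted Databricks platform health check via the REST API.
import os
import requests

HOST = os.environ["DATABRICKS_HOST"]    # e.g. https://<workspace>.cloud.databricks.com
TOKEN = os.environ["DATABRICKS_TOKEN"]  # personal access token

def list_unhealthy_clusters() -> list[str]:
    """Return names of clusters not in a RUNNING or TERMINATED state."""
    resp = requests.get(
        f"{HOST}/api/2.0/clusters/list",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()
    clusters = resp.json().get("clusters", [])
    return [
        c["cluster_name"]
        for c in clusters
        if c.get("state") not in ("RUNNING", "TERMINATED")
    ]

if __name__ == "__main__":
    for name in list_unhealthy_clusters():
        print(f"attention needed: {name}")
```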

Posted 2 weeks ago


6.0 - 10.0 years

0 Lacs

Karnataka

On-site

At PwC, our managed services team specializes in providing outsourced solutions and supporting clients across various functions. We help organizations enhance their operations, reduce costs, and boost efficiency by managing key processes and functions on their behalf. Our expertise lies in project management, technology, and process optimization, allowing us to deliver high-quality services to our clients. In managed service management and strategy at PwC, the focus is on transitioning and running services; managing delivery teams, programs, commercials, performance, and delivery risk; and continuously improving and optimizing managed services processes, tools, and services.

As a Managed Services - Data Engineer Senior Associate at PwC, you will be part of a team of problem solvers addressing complex business issues from strategy to execution using Data, Analytics & Insights skills. Your responsibilities will include using feedback and reflection to develop self-awareness and personal strengths, acting as a subject matter expert in your chosen domain, mentoring junior resources, and conducting knowledge-sharing sessions. You will be expected to demonstrate critical thinking, ensure the quality of deliverables, adhere to SLAs, and participate in incident, change, and problem management. Additionally, you will review your work and that of others for quality, accuracy, and relevance, and demonstrate leadership by working directly with clients and leading engagements.

The primary skills required for this role are ETL/ELT, SQL, SSIS, SSMS, Informatica, and Python, with secondary skills in Azure/AWS/GCP, Power BI, Advanced Excel, and Excel macros. As a Data Ingestion Senior Associate, you should have extensive experience in developing scalable, repeatable, and secure data structures and pipelines; designing and implementing ETL processes; monitoring and troubleshooting data pipelines; implementing data security measures; and creating visually impactful dashboards for data reporting. You should also be adept at writing and analyzing complex SQL queries, be proficient in Excel, and possess strong communication, problem-solving, quantitative, and analytical abilities. A minimal sketch of this kind of ETL step follows below.

In our Managed Services platform, we focus on leveraging technology and human expertise to deliver simple yet powerful solutions to our clients. Our team of skilled professionals, combined with advanced technology and processes, enables us to provide effective outcomes and add greater value to our clients' enterprises. We aim to empower our clients to focus on their business priorities by providing flexible access to world-class business and technology capabilities that align with today's dynamic business environment.

If you thrive in a high-paced work environment and can handle critical Application Evolution Service offerings, engagement support, and strategic advisory work, we are looking for you to join our Data, Analytics & Insights Managed Service team at PwC. Your role will involve a mix of help desk support, enhancement and optimization projects, and strategic roadmap initiatives, while also contributing to customer engagements from both a technical and relationship perspective.
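A minimal extract-transform-load sketch using pandas and SQLAlchemy; the source file, transformation rules, and target table are illustrative assumptions, not PwC tooling.

```python
# Minimal ETL sketch: extract a CSV, transform it, load it into a SQL table.
import pandas as pd
from sqlalchemy import create_engine

engine = create_engine("sqlite:///warehouse.db")  # stand-in for a real DW

# Extract
raw = pd.read_csv("sales_raw.csv")  # hypothetical source extract

# Transform: normalize columns, drop bad rows, derive a metric
raw.columns = [c.strip().lower().replace(" ", "_") for c in raw.columns]
clean = raw.dropna(subset=["order_id", "amount"]).copy()
clean["amount"] = clean["amount"].astype(float)
clean["is_large_order"] = clean["amount"] > 1_000

# Load: idempotent full refresh of the target table
clean.to_sql("fact_sales", engine, if_exists="replace", index=False)
print(f"loaded {len(clean)} rows into fact_sales")
```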

Posted 2 weeks ago


8.0 - 13.0 years

18 - 33 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Data Modular: JD details. Work location: Bangalore/Chennai/Hyderabad/Gurgaon/Pune. Mode of work: Hybrid. Experience level: 7+ years. Looking only for immediate joiners.

- Experience with RDBMS and NoSQL stores (columnar formats/engines such as Apache Parquet or Apache Kylin), including query optimization, performance tuning, caching, and filtering strategies (see the sketch below).
- Experience with data lakes and techniques for faster data retrieval.
- Dynamic data modelling: enabling updated data models based on the underlying data.
- Caching and filtering techniques on data.
- Experience with Apache Spark or similar big data technologies.
- Knowledge of AWS and Infrastructure-as-Code (IaC) implementation.
- SQL transpilers and predicate pushdown.
- GraphQL as the top layer is good to have.
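A sketch of predicate pushdown and column pruning when reading Parquet with pyarrow; the dataset path, schema, and filter values are illustrative assumptions.

```python
# Predicate pushdown on a Parquet dataset with pyarrow.
import pyarrow.parquet as pq

# Only the listed columns are read, and row groups whose statistics rule
# out the filter are skipped entirely -- the pushdown the JD refers to.
table = pq.read_table(
    "data/events/",                      # hypothetical Parquet dataset
    columns=["user_id", "amount"],       # column pruning
    filters=[("event_date", ">=", "2024-01-01"), ("amount", ">", 100)],
)
print(table.num_rows)
```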

Posted 2 weeks ago


8.0 - 12.0 years

0 Lacs

Karnataka

On-site

A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organisations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

As part of our Analytics and Insights Consumption team, you'll analyze data to drive useful insights for clients to address core business issues or to drive strategic outcomes. You'll use visualization, statistical and analytics models, AI/ML techniques, ModelOps, and other techniques to develop these insights. Candidates with 8+ years of hands-on experience are invited to join our team as we embark on a journey to drive innovation and change through data-driven solutions.

Responsibilities:
- Lead and manage a team of software engineers in developing, implementing, and maintaining advanced software solutions for GenAI projects.
- Engage with senior leadership and cross-functional teams to gather business requirements, identify opportunities for technological enhancements, and ensure alignment with organizational goals.
- Design and implement sophisticated event-driven architectures to support real-time data processing and analysis (see the sketch below).
- Oversee the use of containerization technologies such as Kubernetes to promote efficient deployment and scalability of software applications.
- Supervise the development and management of extensive data lakes, ensuring effective storage and handling of large volumes of structured and unstructured data.
- Champion the use of Python as the primary programming language, setting high standards for software development within the team.
- Facilitate close collaboration between software engineers, data scientists, data engineers, and DevOps teams to ensure seamless integration and deployment of GenAI models.
- Maintain a cutting-edge knowledge base in GenAI technologies to drive innovation and continually enhance software engineering processes.
- Translate complex business needs into robust technical solutions, contributing to strategic decision-making processes.
- Establish and document software engineering processes, methodologies, and best practices, promoting a culture of excellence.
- Ensure continuous professional development of the team by maintaining and acquiring new solution architecture certificates and adhering to industry best practices.
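A hedged sketch of the consumer side of an event-driven design, assuming the kafka-python package and a local broker; the topic name and message shape are illustrative assumptions.

```python
# Event-driven consumer sketch: decoupled producers/consumers over Kafka.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "events",                               # hypothetical topic
    bootstrap_servers="localhost:9092",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
    auto_offset_reset="earliest",
    group_id="analytics-workers",
)

for message in consumer:
    event = message.value
    # Route each event type to its handler; producers never need to know
    # who consumes, which is the point of the event-driven style.
    if event.get("type") == "purchase":
        print(f"update revenue aggregate: +{event.get('amount', 0)}")
    else:
        print(f"unhandled event type: {event.get('type')}")
```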

Posted 2 weeks ago


14.0 - 18.0 years

0 Lacs

Karnataka

On-site

As the AVP Databricks Squad Delivery Lead, you will play a crucial role in overseeing project delivery, team leadership, architecture reviews, and client engagement. Your primary responsibility will be to optimize Databricks implementations across cloud platforms such as AWS, Azure, and GCP while leading cross-functional teams.

You will lead and manage the end-to-end delivery of Databricks-based solutions, and your expertise as a subject matter expert (SME) in Databricks architecture, implementation, and optimization will be essential. Collaborating with architects and engineers, you will design scalable data pipelines and analytics platforms, and you will oversee Databricks workspace setup, performance tuning, and cost optimization. Acting as the primary point of contact for client stakeholders, you will ensure effective communication and alignment between business goals and technical solutions. Driving innovation within the team, you will implement best practices, tools, and technologies to enhance project delivery.

The ideal candidate holds a Bachelor's degree in Computer Science, Engineering, or equivalent (Master's or MBA preferred), with hands-on experience delivering data engineering/analytics projects using Databricks and managing cloud-based data pipelines on AWS, Azure, or GCP. Strong leadership skills and excellent client-facing communication are essential. Preferred skills include proficiency with Spark, Delta Lake, MLflow, and distributed computing, and expertise in data engineering concepts such as ETL, data lakes, and data warehousing. Certifications in Databricks or cloud platforms (AWS/Azure/GCP) and an Agile/Scrum or PMP certification are considered advantageous.

Posted 2 weeks ago


7.0 - 11.0 years

0 Lacs

Haryana

On-site

As an Assistant Manager - MIS Reporting at Axis Max Life Insurance, you will play a crucial role in driving the business intelligence team towards a data-driven culture and leading the transformation towards automation and real-time insights. You will be responsible for the accurate and timely delivery of reports and dashboards while coaching and mentoring a team of professionals to enhance their skills and capabilities.

Your key responsibilities will include handling distribution reporting requirements, supporting CXO reports and dashboards, driving data democratization, collaborating to design data products, and partnering with the data team to build the necessary data infrastructure. You will lead a team of 10+ professionals, including partners, and work closely with distribution leaders to understand key metrics and information needs, developing business intelligence products that cater to those needs. Additionally, you will define the vision and roadmap for the business intelligence team, championing a data culture within Max Life and accelerating the journey towards becoming a data-driven organization.

To excel in this role, you should have a Master's degree in a quantitative field and at least 7-8 years of relevant experience working with business reporting teams in the financial services sector. Proficiency in tools like Python and Power BI is essential, as is demonstrated experience working with senior leadership, standardizing and automating business reporting (a small automation sketch follows below), and technical proficiency in the BI tech stack. Strong interpersonal skills, excellent verbal and written communication abilities, and a deep understanding of data architecture, data warehousing, and data lakes are also required. If you are passionate about leading change, driving efficiency, and rationalizing information overload, we are looking for you to join our team at Axis Max Life Insurance.
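A small sketch of automating a recurring business report with pandas; the input file, column names, and output file are illustrative assumptions.

```python
# Automate a daily MIS summary instead of rebuilding it by hand in Excel.
from datetime import date
import pandas as pd

sales = pd.read_csv("distribution_sales.csv")  # hypothetical daily extract

# Standardized summary: same grouping and metrics every run.
summary = (
    sales.groupby("channel", as_index=False)
    .agg(policies=("policy_id", "count"), premium=("premium", "sum"))
    .sort_values("premium", ascending=False)
)

outfile = f"mis_report_{date.today():%Y%m%d}.xlsx"
summary.to_excel(outfile, index=False)  # requires the openpyxl package
print(f"wrote {outfile}")
```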

Posted 2 weeks ago


14.0 - 18.0 years

0 Lacs

Karnataka

On-site

We are hiring for the role of AVP - Databricks, requiring a minimum of 14+ years of experience. The job location can be Bangalore, Hyderabad, NCR, Kolkata, Mumbai, or Pune.

As an AVP - Databricks, your responsibilities will include leading and managing Databricks-based project delivery to ensure that all solutions meet client requirements, best practices, and industry standards. You will serve as a subject matter expert (SME) on Databricks, providing guidance to teams on architecture, implementation, and optimization, and will collaborate with architects and engineers to design optimal solutions for data processing, analytics, and machine learning workloads. Additionally, you will act as the primary point of contact for clients, ensuring alignment between business requirements and technical delivery.

We are looking for a candidate with a Bachelor's degree in Computer Science, Engineering, or a related field (Master's or MBA preferred) and relevant years of experience in IT services, specifically in Databricks and cloud-based data engineering. Proven experience leading end-to-end delivery and solution architecting of data engineering or analytics solutions on Databricks is a plus. Strong expertise in cloud technologies such as AWS, Azure, and GCP, data pipelines, and big data tools is desired, and hands-on experience with Databricks, Spark, Delta Lake, MLflow, and related technologies is required. An in-depth understanding of data engineering concepts, including ETL, data lakes, data warehousing, and distributed computing, will be beneficial.

Posted 2 weeks ago


5.0 - 9.0 years

0 Lacs

Thane, Maharashtra

On-site

As the BI/BW Lead at DMart, you will be responsible for leading and managing a dedicated SAP BW team to ensure timely delivery of reports, dashboards, and analytics solutions. Your role will focus on managing the team effectively, ensuring that all SAP BW operational support tasks and development projects are completed with high quality and efficiency. You will also be responsible for the stability and performance of the SAP BW environment, overseeing daily support activities and ensuring seamless data flow and reporting across the organization. Acting as the bridge between business stakeholders and your technical team, you will play a pivotal role in maintaining and enhancing DMart's data ecosystem.

You should hold a Bachelor's or Master's degree in Computer Science, Information Systems, Engineering, or a related field. SAP BW certifications are preferred but not mandatory.

Key Responsibilities:
- Lead and manage the SAP BW & BOBJ team to ensure efficient workload distribution and timely completion of tasks.
- Oversee the daily operational support of the SAP BW & BOBJ environment, ensuring stability and performance.
- Provide direction and guidance to the team for issue resolution, data loads, and reporting accuracy.
- Act as the primary point of contact for business users and internal teams regarding SAP BW support and enhancements.
- Ensure the team follows best practices in monitoring, error handling, and performance optimization.
- Drive the continuous improvement of support processes, tools, and methodologies.
- Proactively identify potential risks and bottlenecks in data flows and take corrective actions.
- Ensure timely delivery of data extracts, reports, and dashboards for business-critical decisions.
- Provide leadership in system upgrades, patching, and data model improvements.
- Facilitate knowledge sharing and skill development within the SAP BW team.
- Maintain high standards of data integrity and security in the BW environment.

Professional Skills:
- Strong functional and technical understanding of SAP BW/BW on HANA & BOBJ.
- At least 5 years of working experience in SAP Analytics.
- Solid understanding of ETL processes and data extraction.
- Experience working with data lakes like Snowflake, BigQuery, and Databricks, and dashboard tools like Power BI and Tableau, would be an added advantage.
- Experience working in Retail, CPG, or SCM would be an added advantage.
- Experience managing SAP BW support activities and coordinating issue resolution.
- Strong stakeholder management skills with the ability to translate business needs into technical actions.
- Excellent problem-solving and decision-making abilities under pressure.

Posted 2 weeks ago


7.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Business Analyst with 7-14 years of experience, you will be responsible for creating Business Requirement Documents (BRDs) and Functional Requirement Documents (FRDs), stakeholder management, User Acceptance Testing (UAT), applying data warehouse concepts, writing SQL queries and subqueries, and using data visualization tools such as Power BI or MicroStrategy. A deep understanding of the investment domain is essential, specifically in areas like capital markets, asset management, and wealth management.

Your primary responsibilities will involve working closely with stakeholders to gather requirements, analyzing data, and testing systems to ensure they meet business needs. You should have a strong background in investment management or financial services, with experience in areas like asset management, investment operations, and insurance. Familiarity with concepts like Critical Data Elements (CDEs), data traps, and reconciliation workflows will be beneficial in this role.

Technical expertise in BI and analytics tools like Power BI, Tableau, and MicroStrategy is required, along with proficiency in SQL. You should also possess excellent communication skills, analytical thinking capabilities, and the ability to engage effectively with stakeholders. Experience working in Agile/Scrum environments with cross-functional teams is highly valued.

In terms of technical skills, you should demonstrate proven analytical problem-solving abilities, with deep knowledge of investment data platforms such as GoldenSource, NeoXam, RIMES, and JPM Fusion. Expertise in cloud data technologies like Snowflake, Databricks, and AWS/GCP/Azure data services is essential, as is an understanding of data governance frameworks, metadata management, data lineage, and compliance standards in the investment management industry. Hands-on experience with Investment Books of Record (IBORs) like BlackRock Aladdin, CRD, Eagle STAR (ABOR), Eagle PACE, and Eagle DataMart is preferred. Familiarity with investment data platforms including GoldenSource, FINBOURNE, NeoXam, RIMES, and JPM Fusion, as well as cloud data platforms like Snowflake and Databricks, will be advantageous. Your background in data governance, metadata management, and data lineage frameworks will be essential in ensuring data accuracy and compliance within the organization.

Posted 2 weeks ago


2.0 - 15.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

You are a highly skilled and experienced professional tasked with leading and supporting data warehousing and data center architecture initiatives. Your expertise in data warehousing, data lakes, data integration, and data governance, along with hands-on experience with ETL tools and cloud platforms such as AWS, Azure, GCP, and Snowflake, will be crucial for this role. You are expected to bring strong presales experience, technical leadership capabilities, and the ability to manage complex enterprise deals across various geographies.

Your main responsibilities will include architecting and designing scalable data warehousing and data lake solutions, leading presales engagements, creating and presenting proposals and solution designs to clients, collaborating with cross-functional teams, estimating efforts and resources for customer requirements, driving Managed Services opportunities and enterprise deal closures, engaging with clients globally, ensuring alignment of solutions with business goals and technical requirements, and maintaining high standards of documentation and presentation for client-facing materials.

To excel in this role, you must possess a Bachelor's or Master's degree in Computer Science, Information Technology, or a related field; certifications in AWS, Azure, GCP, or Snowflake are advantageous. You should have experience working in consulting or system integrator environments, a strong understanding of data warehousing, data lakes, data integration, and data governance, hands-on experience with ETL tools, exposure to cloud environments, a minimum of 2 years of presales experience, experience with enterprise-level deals and Managed Services, the ability to handle multi-geo engagements, excellent presentation and communication skills, and a solid grasp of effort estimation techniques for customer requirements.

Posted 2 weeks ago

Apply

5.0 - 15.0 years

0 Lacs

noida, uttar pradesh

On-site

HCLTech is seeking a Data and AI Principal / Senior Manager (Generative AI) for its Noida location. A global technology company with a workforce of over 218,000 employees in 59 countries, HCLTech specializes in digital, engineering, cloud, and AI solutions, collaborating with clients across industries such as Financial Services, Manufacturing, Life Sciences, Healthcare, Technology, Telecom, Retail, and Public Services. With consolidated revenues of $13.7 billion for the 12 months ending September 2024, HCLTech aims to drive progress and transformation for its clients globally.

Key Responsibilities:
- Provide hands-on technical leadership and oversight: lead the design of AI and GenAI solutions, machine learning pipelines, and data architectures, and contribute directly to coding, solution design, and troubleshooting of critical components.
- Collaborate with Account Teams, Client Partners, and Domain SMEs to ensure technical solutions align with business needs, and mentor engineers across functions to foster a collaborative, high-performance team environment.
- Design and implement system and API architectures, integrating microservices, RESTful APIs, cloud-based services, and machine learning models seamlessly into GenAI and data platforms (see the API sketch after this posting).
- Lead the integration of AI, GenAI, and Agentic applications, NLP models, and large language models into scalable production systems.
- Architect ETL pipelines, data lakes, and data warehouses using tools such as Apache Spark, Airflow, and Google BigQuery, and drive deployment on cloud platforms such as AWS, Azure, and GCP.
- Lead the design and deployment of machine learning models using frameworks such as PyTorch, TensorFlow, and scikit-learn, ensuring accurate and reliable outputs; develop prompt engineering techniques for GenAI models and implement best practices for ML model performance monitoring and continuous training.
- Apply expertise in CI/CD pipelines, Infrastructure-as-Code, cloud management, stakeholder communication, agile development, performance optimization, and scalability strategies.

Required Qualifications:
- 15+ years of hands-on technical experience in software engineering, with at least 5+ years in a leadership role managing cross-functional teams across AI, GenAI, machine learning, data engineering, and cloud infrastructure.
- Proficiency in Python and experience with Flask, Django, or FastAPI for API development.
- Extensive experience building and deploying ML models using TensorFlow, PyTorch, scikit-learn, and spaCy, and integrating them into AI frameworks.
- Familiarity with ETL pipelines, data lakes, data warehouses, and data processing tools such as Apache Spark, Airflow, and Kafka.
- Strong expertise in CI/CD pipelines, containerization, Infrastructure-as-Code, and API security for high-traffic systems.

If you are interested in this position, please share your profile, including overall experience, skills, current and preferred location, current and expected CTC, and notice period, to paridhnya_dhawankar@hcltech.com.
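
As a hedged sketch of the Python API development this posting mentions, the snippet below exposes a scikit-learn model through a FastAPI endpoint; the model, route, and field names are assumptions for illustration, not a description of HCLTech's actual stack.

```python
# Minimal sketch: serving a scikit-learn model behind a FastAPI endpoint.
# Model, route, and field names are illustrative assumptions only.
import numpy as np
from fastapi import FastAPI
from pydantic import BaseModel
from sklearn.linear_model import LogisticRegression

app = FastAPI()

# Toy model fitted at startup; a real system would load a versioned artifact.
model = LogisticRegression().fit(
    np.array([[0.0], [1.0], [2.0], [3.0]]), [0, 0, 1, 1]
)

class PredictRequest(BaseModel):
    feature: float

@app.post("/predict")
def predict(req: PredictRequest) -> dict:
    # predict_proba returns shape (1, 2); take the positive-class probability.
    proba = model.predict_proba([[req.feature]])[0, 1]
    return {"probability": float(proba)}

# Run with: uvicorn app:app --reload  (assuming this file is saved as app.py)
```

In production, an endpoint like this would load a versioned model artifact, add authentication, and sit behind the CI/CD and monitoring practices the posting lists.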

Posted 2 weeks ago

Apply

3.0 - 8.0 years

0 Lacs

chennai, tamil nadu

On-site

We are seeking a highly skilled and experienced Snowflake Architect to lead the design, development, and deployment of enterprise-grade cloud data solutions. The ideal candidate has a solid background in data architecture, cloud data platforms, and Snowflake implementation, along with practical experience in end-to-end data pipeline and data warehouse design.

Responsibilities:
- Lead the architecture, design, and implementation of scalable Snowflake-based data warehousing solutions.
- Define data modeling standards, best practices, and governance frameworks.
- Collaborate with stakeholders to understand data requirements and translate them into robust architectural solutions.
- Design and optimize ETL/ELT pipelines using tools such as Snowpipe, Azure Data Factory, Informatica, or DBT.
- Implement data security, privacy, and role-based access controls within Snowflake (see the sketch after this posting).
- Guide development teams on performance tuning, query optimization, and cost management within Snowflake.
- Ensure high availability, fault tolerance, and compliance across data platforms.
- Mentor developers and junior architects on Snowflake capabilities.

Qualifications and Experience:
- 8+ years of overall experience in data engineering, BI, or data architecture, with a minimum of 3+ years of hands-on Snowflake experience.
- Expertise in Snowflake architecture, data sharing, virtual warehouses, clustering, and performance optimization.
- Strong proficiency in SQL, Python, and cloud data services (e.g., AWS, Azure, or GCP).
- Hands-on experience with ETL/ELT tools such as ADF, Informatica, Talend, DBT, or Matillion.
- Good understanding of data lakes, data mesh, and modern data stack principles.
- Experience with CI/CD for data pipelines, DevOps, and data quality frameworks.
- Solid knowledge of data governance, metadata management, and cataloging.

Desired Skills:
- Snowflake certification (e.g., SnowPro Core/Advanced Architect).
- Familiarity with Apache Airflow, Kafka, or event-driven data ingestion.
- Knowledge of data visualization tools such as Power BI, Tableau, or Looker.
- Experience in healthcare, BFSI, or retail domain projects.

Please note that this job description is sourced from hirist.tech.
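
As a hedged sketch of the role-based access control work described above, the snippet below issues standard Snowflake RBAC statements through the snowflake-connector-python package; the role, warehouse, database, and user names are invented, and the connection values are placeholders.

```python
# Minimal sketch of Snowflake role-based access control, issued through
# the snowflake-connector-python package. Role, warehouse, database, and
# connection values below are placeholders, not a real environment.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<user>",
    password="<password>",
    role="SECURITYADMIN",  # role creation and grants typically need elevated rights
)

rbac_statements = [
    "CREATE ROLE IF NOT EXISTS ANALYST_READ",
    "GRANT USAGE ON WAREHOUSE REPORTING_WH TO ROLE ANALYST_READ",
    "GRANT USAGE ON DATABASE SALES_DB TO ROLE ANALYST_READ",
    "GRANT USAGE ON SCHEMA SALES_DB.PUBLIC TO ROLE ANALYST_READ",
    "GRANT SELECT ON ALL TABLES IN SCHEMA SALES_DB.PUBLIC TO ROLE ANALYST_READ",
    "GRANT ROLE ANALYST_READ TO USER REPORT_USER",
]

cur = conn.cursor()
try:
    for stmt in rbac_statements:
        cur.execute(stmt)  # each grant is a separate statement in Snowflake
finally:
    cur.close()
    conn.close()
```

Granting privileges to a role rather than directly to users keeps access auditable and is the usual starting point for the security and governance duties this posting describes.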

Posted 2 weeks ago

Apply

Featured Companies