
5267 PySpark Jobs - Page 5

Filters: Experience 0 to 25 years · Salary ₹0 to ₹10,000,000
Set up a Job Alert
JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

6.0 - 7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Designation: Sr. Consultant
Experience: 6 to 7 years
Location: Bengaluru
Skills Required: Python, SQL, Databricks, ADF; within Databricks: DLT, PySpark, Structured Streaming, performance and cost optimization.
Roles and Responsibilities: Capture business problems, value drivers, and functional/non-functional requirements and translate them into functionality. Assess the risks, feasibility, opportunities, and business impact. Assess and model processes, data flows, and technology to understand the current value and issues, and identify opportunities for improvement. Create/update clear documentation of requirements to align with the solution over the project lifecycle. Ensure traceability of requirements from business needs through testing and scope changes to the final solution. Interact with software suppliers, designers and developers to understand software limitations, deliver elements of system and database design, and ensure that business requirements and use cases are handled. Configure and document software and processes, using agreed standards and tools. Create acceptance criteria and validate that solutions meet business needs through defining and coordinating testing. Create and present compelling business cases to justify solution value and establish approval, funding and prioritization. Initiate, plan, execute, monitor, and control Business Analysis activities on projects within agreed parameters of cost, time and quality. Lead stakeholder management activities and large design sessions. Lead teams to complete business analysis on projects. Configure and document software and processes. Define and coordinate testing.
Mandatory skills: Agile project experience. Understanding of Agile frameworks and tools. Worked in Agile. Educated stakeholders, including Product Owners and business partners, in Agile ways of working. Understanding of systems engineering concepts, data/process analysis and modeling, products and solutions. Degree. 4-7 yrs IT.
Optional skills: Agile certifications/trainings preferred. CBAP (Certified Business Analysis Professional) or PMI-PBA certification preferred. Lean Practitioner training and experience are an asset.
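For context on the Structured Streaming and DLT skills listed above, here is a minimal, illustrative PySpark Structured Streaming sketch; the paths, schema, and checkpoint location are hypothetical, and a Databricks DLT pipeline would typically wrap similar logic in decorated table functions rather than a standalone job.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-stream").getOrCreate()

# Read a stream of JSON order events from cloud storage (hypothetical path and schema).
orders = (
    spark.readStream
    .format("json")
    .schema("order_id STRING, amount DOUBLE, event_time TIMESTAMP")
    .load("/mnt/raw/orders/")
)

# Simple transformation: hourly revenue per window, with late-data watermarking.
hourly_revenue = (
    orders
    .withWatermark("event_time", "30 minutes")
    .groupBy(F.window("event_time", "1 hour"))
    .agg(F.sum("amount").alias("revenue"))
)

# Write the aggregate to a Delta table; the checkpoint enables recovery on restart.
query = (
    hourly_revenue.writeStream
    .format("delta")
    .outputMode("complete")
    .option("checkpointLocation", "/mnt/checkpoints/hourly_revenue")
    .start("/mnt/curated/hourly_revenue")
)
```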

Posted 19 hours ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


We are looking for a highly skilled and motivated Data Scientist with deep experience in building recommendation systems to join our team. This role demands expertise in deep learning, embedding-based retrieval, and the Google Cloud Platform (GCP). You will play a critical role in developing intelligent systems that enhance user experiences through personalized content discovery.
Key Responsibilities: Develop, train, and deploy recommendation models using two-tower, multi-tower, and cross-encoder architectures. Generate and utilize text/image embeddings (e.g., CLIP, BERT, Sentence Transformers) for content-based recommendations. Design semantic similarity search pipelines using vector databases (FAISS, ScaNN, Qdrant, Matching Engine). Create and manage scalable ML pipelines using Vertex AI, Kubeflow Pipelines, and GKE. Handle large-scale data preparation and feature engineering using Dataproc (PySpark) and Dataflow. Implement cold-start strategies leveraging metadata and multimodal embeddings. Work on user modeling, temporal personalization, and re-ranking strategies. Run A/B tests and interpret results to measure real-world impact. Collaborate with cross-functional teams (Engineering, Product, DevOps) for model deployment and monitoring.
Must-Have Skills: Strong command of Python and ML libraries: pandas, polars, numpy, scikit-learn, matplotlib, tensorflow, torch, transformers. Deep understanding of modern recommender systems and embedding-based retrieval. Experience with TensorFlow, Keras, or PyTorch for building deep learning models. Hands-on with semantic search, ANN search, and real-time vector matching. Proven experience with Vertex AI, Kubeflow on GKE, and ML pipeline orchestration. Familiarity with vector DBs such as Qdrant, FAISS, ScaNN, or Matching Engine on GCP. Experience in deploying models via Vertex AI Online Prediction, TF Serving, or Cloud Run. Knowledge of feature stores, embedding versioning, and MLOps practices (CI/CD, monitoring).
Preferred / Good to Have: Experience with ranking models (e.g., XGBoost, LightGBM, DLRM) for candidate scoring. Exposure to LLM-powered personalization or hybrid retrieval systems. Familiarity with streaming pipelines using Pub/Sub, Dataflow, Cloud Functions. Hands-on with multi-modal retrieval (text + image + tabular data). Strong grasp of cold-start problem solving using enriched metadata and embeddings.
GCP Stack You'll Work With: ML & Pipelines: Vertex AI, Vertex Pipelines, Kubeflow on GKE. Embedding & Retrieval: Matching Engine, Qdrant, FAISS, ScaNN, Milvus. Processing: Dataproc (PySpark), Dataflow. Ingestion & Serving: Pub/Sub, Cloud Functions, Cloud Run, TF Serving. CI/CD & Automation: GitHub Actions, GitLab CI, Terraform.
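As an illustration of the embedding-based retrieval this role describes, below is a minimal semantic-search sketch using Sentence Transformers and FAISS; the model name and catalog items are placeholders rather than details from the posting.

```python
import faiss
from sentence_transformers import SentenceTransformer

# Encode a small content catalog into normalized embeddings (hypothetical items).
model = SentenceTransformer("all-MiniLM-L6-v2")
catalog = ["wireless noise-cancelling headphones", "trail running shoes", "espresso machine"]
emb = model.encode(catalog, normalize_embeddings=True).astype("float32")

# Build an inner-product index; with normalized vectors this is cosine similarity.
index = faiss.IndexFlatIP(emb.shape[1])
index.add(emb)

# Retrieve the top-k most similar items for a user query.
query = model.encode(["good headphones for travel"], normalize_embeddings=True).astype("float32")
scores, ids = index.search(query, 2)
for score, idx in zip(scores[0], ids[0]):
    print(f"{catalog[idx]}  (score={score:.3f})")
```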

Posted 19 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description: Some careers shine brighter than others. If you're looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further. HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions. We are currently seeking an experienced professional to join our team in the role of Senior Software Engineer.
In this role, you will: Provide the technical expertise for the Risk Data Platform and the various software components that supplement it (on-prem and cloud). Implement standards around development and DevSecOps, and review code and pull requests. Act as a technical expert on the design and implementation of the technology solutions to meet the needs of the Data & Enterprise reporting function on a tactical and strategic basis. Be accountable for ensuring compliance of the products and services with mandatory and regulatory requirements, control objectives in the risk and control framework, and technical currency (in line with published standards and guidelines) and, with the architecture function, implementation of the business imperatives. The role holder must work with the IT communities of practice to maximize automation, increase efficiency and ensure that best practice and the latest tools, techniques and processes have been adopted.
Requirements: To be successful in this role, you should meet the following requirements: Must have experience in CI/CD - Ansible/Jenkins. Experience with UNIX, Spark UI and batch frameworks. Proficient understanding of code versioning tools (Git). Strong unit testing and debugging skills. Proficient knowledge of integrating the Spark framework with Delta Lake. Experience in using CI/CD automation tools (Git, Jenkins) and configuration deployment tools (Puppet/Chef/Ansible). Expertise in Python/PySpark coding.
Soft skills: Good communication and coordination skills. Self-motivated team player with demonstrated problem-solving skills. Lead the team in navigating customer requirements and designing solutions. Risk management skills. Collaborative working style. Business communication. Constructive conflict resolution.
HSBC is committed to building a culture where all employees are valued, respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website. Issued by – HSBC Software Development India

Posted 19 hours ago

Apply

5.0 years

0 Lacs

India

On-site


Orion Innovation is a premier, award-winning, global business and technology services firm. Orion delivers game-changing business transformation and product development rooted in digital strategy, experience design, and engineering, with a unique combination of agility, scale, and maturity. We work with a wide range of clients across many industries including financial services, professional services, telecommunications and media, consumer products, automotive, industrial automation, professional sports and entertainment, life sciences, ecommerce, and education.
Responsibilities: Build and maintain the infrastructure for data generation, collection, storage, and processing. Design, build, and maintain scalable data pipelines to support data flows from various sources to data warehouses and analytics platforms. Develop and manage ETL (Extract, Transform, Load) processes to ensure data is accurately transformed and loaded into the target systems. Design and optimize databases, ensuring performance, security, and scalability of data storage solutions. Integrate data from various internal and external sources into unified data systems for analysis. Work with big data technologies (e.g., Hadoop, Spark) to process and manage large volumes of structured and unstructured data. Implement and manage cloud-based data solutions using Azure and Fabric platforms. Ensure data quality by developing validation processes and monitoring for anomalies and inconsistencies. Work closely with data scientists, analysts, and other stakeholders to meet their data needs and ensure smooth data operations. Automate repetitive data processes and workflows to improve efficiency and reduce manual effort. Implement and enforce data security protocols, ensuring compliance with industry standards and regulations. Optimize data queries and system performance to handle large data sets efficiently. Create and maintain clear documentation of data pipelines, infrastructure, and processes for transparency and training. Set up monitoring tools to ensure data systems are functioning smoothly and troubleshoot any issues that arise. Stay updated with emerging trends and tools in data engineering and continuously improve data infrastructure.
Qualifications: Azure Solution Architect certification preferred. Microsoft Fabric Analytics Engineer Associate certification preferred. 5+ years of architecture experience in technology operations/development using Azure technologies. Strong experience in Python and PySpark required. Strong understanding and experience in building lakehouses, data lakes and data warehouses. Strong experience in Microsoft Fabric technologies. Good understanding of the Scrum Agile methodology. Strong experience with Azure Cloud technologies. Solid knowledge of SQL and non-relational (NoSQL) databases. Solid knowledge of networking, firewalls, load balancers, etc. Exceptional communication skills and the ability to communicate appropriately with technical teams. Familiarity with at least one of the following code build/deploy tools: Azure DevOps or GitHub Actions.
Orion is an equal opportunity employer, and all qualified applicants will receive consideration for employment without regard to race, color, creed, religion, sex, sexual orientation, gender identity or expression, pregnancy, age, national origin, citizenship status, disability status, genetic information, protected veteran status, or any other characteristic protected by law.
Candidate Privacy Policy: Orion Systems Integrators, LLC and its subsidiaries and its affiliates (collectively, "Orion," "we" or "us") are committed to protecting your privacy. This Candidate Privacy Policy (orioninc.com) ("Notice") explains what information we collect during our application and recruitment process and why we collect it; how we handle that information; and how to access and update that information. Your use of Orion services is governed by any applicable terms in this notice and our general Privacy Policy.

Posted 19 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Data Engineer
Job Type: Full-Time
Location: On-site, Hyderabad, Telangana, India
Job Summary: We are seeking an accomplished Data Engineer to join one of our top customer's dynamic team in Hyderabad. You will be instrumental in designing, implementing, and optimizing data pipelines that drive our business insights and analytics. If you are passionate about harnessing the power of big data, possess a strong technical skill set, and thrive in a collaborative environment, we would love to hear from you.
Key Responsibilities: Develop and maintain scalable data pipelines using Python, PySpark, and SQL. Implement robust data warehousing and data lake architectures. Leverage the Databricks platform to enhance data processing and analytics capabilities. Model, design, and optimize complex database schemas. Collaborate with cross-functional teams to understand data requirements and deliver actionable insights. Lead and mentor junior data engineers and establish best practices. Troubleshoot and resolve data processing issues promptly.
Required Skills and Qualifications: Strong proficiency in Python and PySpark. Extensive experience with the Databricks platform. Advanced SQL and data modeling skills. Demonstrated experience in data warehousing and data lake architectures. Exceptional problem-solving and analytical skills. Strong written and verbal communication skills.
Preferred Qualifications: Experience with graph databases, particularly MarkLogic. Proven track record of leading data engineering teams. Understanding of data governance and best practices in data management.

Posted 20 hours ago

Apply

5.0 years

0 Lacs

India

On-site


THIS IS A LONG-TERM CONTRACT POSITION WITH ONE OF THE LARGEST GLOBAL TECHNOLOGY LEADERS. Our large Fortune client is ranked as one of the best companies to work with in the world. The client fosters a progressive culture, creativity, and a flexible work environment. They use cutting-edge technologies to keep themselves ahead of the curve. Diversity in all aspects is respected. Integrity, experience, honesty, people, humanity, and passion for excellence are some other adjectives that define this global technology leader.
Key Responsibilities: Design and maintain robust and scalable data pipeline architecture. Assemble complex data sets that meet both functional and non-functional requirements. Implement internal process improvements, such as automation of manual tasks, optimization of data delivery, and re-designing infrastructure for scale. Build infrastructure to support efficient data extraction, transformation, and loading (ETL) using SQL, dbt, and AWS Big Data technologies. Develop analytics tools to provide actionable insights on employee experience, operational efficiency, and other business performance metrics. Collaborate with stakeholders to resolve data-related issues and support infrastructure needs. Create and manage processes for data transformation, metadata management, and workload orchestration. Stay up to date with emerging cloud technologies (AWS/Azure) and propose opportunities for integration. Partner with Data Scientists and Analysts to enhance data systems and ensure maximum usability.
Minimum Qualifications: Bachelor's or graduate degree in Computer Science, Information Systems, Statistics, or a related quantitative field. 5+ years of experience in a Data Engineering role. Extensive hands-on experience with Snowflake and dbt (including advanced concepts like macros and Jinja templating). Proficient in SQL and familiar with various relational databases. Experience in Python and big data frameworks like PySpark. Hands-on experience with AWS services such as S3, EC2, Glue, Lambda, RDS, and Redshift. Experience working with APIs for data ingestion and integration. Proven track record in optimizing data pipelines and solving performance issues. Strong analytical and problem-solving skills, with experience conducting root cause analysis.
Preferred Qualifications: Experience with AWS CloudFormation templates. Familiarity with Agile/SCRUM methodologies. Exposure to Power BI for dashboard development. Experience working with unstructured datasets and deriving business value from them. Previous experience with globally distributed teams in agile environments.
Primary Skills (Must-have): Data Engineering, Data Quality, Data Analysis
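Since the posting highlights data quality alongside PySpark, here is a small, illustrative set of PySpark data-quality checks; the S3 path and column names are assumptions made for the sketch, not details from the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical curated dataset; the path and columns are placeholders.
df = spark.read.parquet("s3://example-bucket/curated/orders/")

total = df.count()
null_keys = df.filter(F.col("order_id").isNull()).count()
duplicate_keys = total - df.dropDuplicates(["order_id"]).count()
negative_amounts = df.filter(F.col("amount") < 0).count()

# Fail fast so an orchestrator (e.g. Step Functions or Airflow) can flag the run.
issues = {
    "null_keys": null_keys,
    "duplicate_keys": duplicate_keys,
    "negative_amounts": negative_amounts,
}
failed = {name: count for name, count in issues.items() if count > 0}
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
print(f"All checks passed on {total} rows")
```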

Posted 20 hours ago

Apply

0 years

0 Lacs

India

Remote


Years of Experience: 8+
Mode of Work: Remote
Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include: Develop, support/maintain and deploy software to support a variety of business needs. Provide technical leadership in the design, development, testing, deployment and maintenance of software solutions. Design and implement platform and application security for applications. Perform advanced query analysis and performance troubleshooting. Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues. Re-design software applications to improve maintenance cost, testing functionality, platform independence and performance. Manage user stories and project commitments in an agile framework to rapidly deliver value to customers. Deploy and operate software solutions using a DevOps model.
Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)

Posted 20 hours ago

Apply

7.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Greetings from TCS! TCS is hiring for Python/Data Engineer.
Desired Experience Range: 7+ years
Job Location: PAN India
Below are the responsibilities a Data Engineer is expected to assume: Data engineers should be comfortable with Python/PySpark for scripting, data manipulation and automation. Understand the data needs of the company or client. Collaborate with the development team to design and build the database model. Engage the development team to implement the database. A deep understanding of relational databases (e.g., MySQL, PostgreSQL) and NoSQL is required; knowledge of NoSQL databases is essential, as they are often used to handle unstructured or semi-structured data. Proficiency in cloud platforms like AWS and Azure is necessary for data engineers. Determine the business needs for data reporting requirements. Adjust access to the data and the reports as needed. Work closely with the development team to implement data warehousing and reporting. Understand the company's data migration needs. Work with the development team to implement the migration. Work with data scientists to determine metadata querying requirements. Help to implement the querying. Help determine and manage data cleaning requirements. Help determine data security needs and implement security solutions.
Thanks, Anshika

Posted 20 hours ago

Apply

5.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The role involves building and managing data pipelines, troubleshooting issues, and ensuring data accuracy across various platforms such as Azure Synapse Analytics, Azure Data Lake Gen2, and SQL environments. This position requires extensive SQL experience and a strong background in PySpark development.
Responsibilities
Data Engineering: Work with Azure Synapse Pipelines and PySpark for data transformation and pipeline management. Perform data integration and schema updates in Delta Lake environments, ensuring smooth data flow and accurate reporting. Work with our Azure DevOps team on CI/CD processes for deployment of Infrastructure as Code (IaC) and workspace artifacts. Develop custom solutions for our customers defined by our Data Architect and assist in improving our data solution patterns over time.
Documentation: Document ticket resolutions, testing protocols, and data validation processes. Collaborate with other stakeholders to provide specifications and quotations for enhancements requested by customers.
Ticket Management: Monitor the Jira ticket queue and respond to tickets as they are raised. Understand ticket issues, utilizing extensive SQL, Synapse Analytics, and other tools to troubleshoot them. Communicate effectively with customer users who raised the tickets and collaborate with other teams (e.g., FinOps, Databricks) as needed to resolve issues.
Troubleshooting and Support: Handle issues related to ETL pipeline failures, Delta Lake processing, or data inconsistencies in Synapse Analytics. Provide prompt resolution to data pipeline and validation issues, ensuring data integrity and performance.
Desired Skills & Requirements
Seeking a candidate with 5+ years of Dynamics 365 ecosystem experience and a strong PySpark development background. While various profiles may apply, we highly value a strong person-organization fit. Our ideal candidate possesses the following attributes and qualifications: Extensive experience with SQL, including query writing and troubleshooting in Azure SQL, Synapse Analytics, and Delta Lake environments. Strong understanding and experience in implementing and supporting ETL processes, data lakes, and data engineering solutions. Proficiency in using Azure Synapse Analytics, including workspace management, pipeline creation, and data flow management. Hands-on experience with PySpark for data processing and automation. Ability to use VPNs, MFA, RDP, jump boxes/jump hosts, etc., to operate within the customer's secure environments. Some experience with Azure DevOps CI/CD, IaC, and release pipelines. Ability to communicate effectively both verbally and in writing, with strong problem-solving and analytical skills. Understanding of the operation and underlying data structure of D365 Finance and Operations, Business Central, and Customer Engagement. Experience with data engineering in Microsoft Fabric. Experience with Delta Lake and Azure data engineering concepts (e.g., ADLS, ADF, Synapse, AAD, Databricks). Certifications in Azure Data Engineering.
Why Join Us? Opportunity to work with innovative technologies in a dynamic environment, with a progressive work culture and a global perspective where your ideas truly matter and growth opportunities are endless. Work with the latest Microsoft technologies alongside Dynamics professionals committed to driving customer success. Enjoy the flexibility to work from anywhere and a work-life balance that suits your lifestyle. Competitive salary and comprehensive benefits package. Career growth and professional development opportunities. A collaborative and inclusive work culture.
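As an illustration of the Delta Lake integration and schema-update work described above, here is a minimal PySpark upsert sketch using the Delta Lake Python API; it assumes a Spark environment with the Delta libraries available (e.g., Synapse or Databricks), an existing target table, and placeholder ADLS path and columns.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

# Placeholder ADLS path; the target Delta table is assumed to already exist.
target_path = "abfss://lake@exampleaccount.dfs.core.windows.net/silver/customers"

# Incoming changes to merge into the target (hypothetical rows).
updates = spark.createDataFrame(
    [(1, "Asha", "Pune"), (2, "Ravi", "Mumbai")],
    ["customer_id", "name", "city"],
)

target = DeltaTable.forPath(spark, target_path)
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()     # refresh existing rows
    .whenNotMatchedInsertAll()  # insert new rows
    .execute()
)
```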

Posted 20 hours ago

Apply

5.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


We are looking for an experienced Azure Data Warehouse Engineer to join our Chennai-based data and analytics team. You will be responsible for designing and developing a unified data platform that integrates critical business systems and enables self-service analytics across departments. This is a strategic role aimed at building a single source of truth for customer and transaction data across the organization.
Key Goals of the Project: Build a unified data warehouse on Azure, integrating data from Salesforce and QuickBooks. Create department-specific flat views for business reporting. Enable self-service dashboards using Tableau. Deliver a centralized, accurate, and reliable data source for customer and transaction insights.
Proposed Technology Stack: Cloud Platform: Azure (existing environment). Data Warehouse: Azure Synapse or Snowflake (to be finalized). ETL/Orchestration: Azure Data Factory, Python, Spark. Reporting Tools: Tableau.
Key Responsibilities: Design and implement scalable data models and architecture on Azure Synapse or Snowflake. Develop ETL pipelines using Azure Data Factory, Python, and Spark to ingest and transform data from Salesforce, QuickBooks, and other sources. Create robust, reusable data views for different business departments. Collaborate with business analysts and stakeholders to deliver reliable datasets for Tableau dashboards. Ensure data accuracy, consistency, security, and governance across the platform. Optimize performance of large-scale data processing jobs. Maintain documentation, data catalogs, and version control for data pipelines.
Qualifications & Skills: 5+ years of experience in data warehousing, ETL development, and cloud-based analytics. Strong expertise in Azure Data Factory, Azure Synapse, or Snowflake. Experience with Salesforce and QuickBooks data integration is highly desirable. Proficiency in SQL, Python, and distributed data processing (e.g., PySpark). Hands-on experience with Tableau or similar BI tools. Understanding of data modeling (dimensional/star schema), warehousing best practices, and performance tuning. Familiarity with cloud security, access management, and data governance in Azure. Excellent problem-solving, communication, and collaboration skills.
Nice to Have: Experience with DevOps and CI/CD practices in a data engineering context. Familiarity with Data Vault or modern data stack architectures. Knowledge of API integrations and data sync between cloud systems.

Posted 20 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Title: Backend Developer - Python
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India
Job Summary: Join one of our top customer's team as a Backend Developer and help drive scalable, high-performance solutions at the intersection of machine learning and data engineering. You'll collaborate with skilled professionals to design, implement, and maintain backend systems powering advanced AI/ML applications in a dynamic, onsite environment.
Key Responsibilities: Develop, test, and deploy robust backend components and microservices using Python and PySpark. Implement and optimize data pipelines leveraging Databricks and distributed computing frameworks. Design and maintain efficient databases with MySQL, ensuring data integrity and high availability. Integrate machine learning models into production-ready backend systems supporting AI-driven features. Collaborate closely with data scientists and engineers to deliver end-to-end solutions aligned with business goals. Monitor, troubleshoot, and enhance system performance, utilizing Redis for caching and improved scalability. Write clear and maintainable documentation, and communicate effectively with team members both verbally and in writing.
Required Skills and Qualifications: Proficiency in Python programming for backend development. Hands-on experience with Databricks and PySpark in a production environment. Strong understanding of MySQL database design, querying, and performance tuning. Practical background in machine learning concepts and deploying ML models. Experience with Redis for caching and state management. Excellent written and verbal communication skills, with a keen attention to detail. Demonstrated ability to work effectively in an on-site, collaborative setting in Hyderabad.
Preferred Qualifications: Previous experience in high-growth AI/ML or data engineering projects. Familiarity with additional backend technologies or cloud platforms. Demonstrated leadership or mentorship in technical teams.
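To illustrate the Redis caching responsibility mentioned above, here is a small cache-aside sketch in Python; the connection settings are placeholders and the MySQL lookup is stubbed out rather than taken from the posting.

```python
import json
import redis

# Cache-aside helper: Redis sits in front of a slower MySQL lookup (hypothetical setup).
cache = redis.Redis(host="localhost", port=6379, decode_responses=True)

def fetch_user_from_mysql(user_id: int) -> dict:
    # Placeholder for a real MySQL query (e.g., via mysql-connector or SQLAlchemy).
    return {"id": user_id, "name": "example", "segment": "premium"}

def get_user(user_id: int, ttl_seconds: int = 300) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)              # cache hit
    user = fetch_user_from_mysql(user_id)      # cache miss: go to the database
    cache.setex(key, ttl_seconds, json.dumps(user))
    return user

print(get_user(42))
```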

Posted 21 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Leads data engineering activities on moderate to complex data and analytics-centric problems which have broad impact and require in-depth analysis to obtain desired results; assemble, enhance, maintain, and optimize current solutions, enable cost savings, and meet individual project or enterprise maturity objectives.
Advanced working knowledge of SQL, Python, and PySpark. Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline. Experience with platform monitoring and alerting tools.
Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications with the ontology (data model) and data pipelines supporting the applications. Implement data transformations to derive new datasets or create Foundry Ontology Objects necessary for business applications. Implement operational applications using Foundry tools (Workshop, Map, and/or Slate). Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.). Create and maintain documentation describing the data catalog and data objects. Maintain applications as usage grows and requirements change. Promote a continuous improvement mindset by engaging in after-action reviews and sharing learnings. Use communication skills, especially for explaining technical concepts to nontechnical business leaders.
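For readers unfamiliar with Foundry data transformations, here is a hedged sketch of a PySpark transform in the style of Foundry Code Repositories; it assumes the transforms.api Python interface, and the dataset paths and columns are placeholders rather than details from the posting.

```python
# Assumes the Palantir Foundry transforms API (transforms.api) available in
# Foundry Code Repositories; dataset paths and column names are placeholders.
from pyspark.sql import functions as F
from transforms.api import transform_df, Input, Output

@transform_df(
    Output("/Project/datasets/derived/active_customers"),
    customers=Input("/Project/datasets/raw/customers"),
)
def active_customers(customers):
    # Derive a new dataset: keep active records and stamp the processing time.
    return (
        customers
        .filter(F.col("status") == "active")
        .withColumn("processed_at", F.current_timestamp())
    )
```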

Posted 21 hours ago

Apply

4.0 years

0 Lacs

India

On-site


Job Title: Azure Databricks Engineer
Experience: 4+ years
Required Skills: 4+ years of experience in Data Engineering. Strong hands-on experience with Azure Databricks and PySpark. Good understanding of Azure Data Factory (ADF), Azure Data Lake (ADLS), and Azure Synapse. Strong SQL skills and experience with large-scale data processing. Experience with version control systems (Git), CI/CD pipelines, and Agile methodology. Knowledge of Delta Lake, Lakehouse architecture, and distributed computing concepts.
Preferred Skills: Experience with Airflow, Power BI, or machine learning pipelines. Familiarity with DevOps tools for automation and deployment in Azure. Azure certifications (e.g., DP-203) are a plus.

Posted 21 hours ago

Apply

3.0 years

0 Lacs

India

Remote


Title: Data Engineer
Location: Remote
Employment type: Full Time with BayOne
We're looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.
What You'll Do: Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory. Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable. Work on modern data lakehouse architectures and contribute to data governance and quality frameworks.
Tech Stack: Azure | Databricks | PySpark | SQL
What We're Looking For: 3+ years of experience in data engineering or analytics engineering. Hands-on with cloud data platforms and large-scale data processing. Strong problem-solving mindset and a passion for clean, efficient data design.
Job Description: Minimum 3 years of experience in modern data engineering/data warehousing/data lakes technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms. 5 years of proven experience with SQL, schema design and dimensional data modelling. Solid knowledge of data warehouse best practices, development standards and methodologies. Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc. Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL. Be an independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment. Excellent communication and teamwork abilities.
Nice-to-Have Skills: Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, Cosmos DB knowledge. SAP ECC/S/4 and HANA knowledge. Intermediate knowledge of Power BI. Azure DevOps and CI/CD deployments, cloud migration methodologies and processes.
BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or on the basis of disability or any federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.

Posted 21 hours ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


Job Title: S&C Global Network - AI - CDP - Marketing Analytics - Analyst
Management Level: 11-Analyst
Location: Bengaluru, BDC7C
Must-have skills: Data Analytics
Good to have skills: Ability to leverage design thinking, business process optimization, and stakeholder management skills.
Job Summary: This role involves driving strategic initiatives, managing business transformations, and leveraging industry expertise to create value-driven solutions.
Roles & Responsibilities: Provide strategic advisory services, conduct market research, and develop data-driven recommendations to enhance business performance.
WHAT'S IN IT FOR YOU? As part of our Analytics practice, you will join a worldwide network of over 20k+ smart and driven colleagues experienced in leading AI/ML/statistical tools, methods and applications. From data to analytics and insights to actions, our forward-thinking consultants provide analytically informed, issue-based insights at scale to help our clients improve outcomes and achieve high performance.
What You Would Do In This Role: A Consultant/Manager for Customer Data Platforms serves as the day-to-day marketing technology point of contact and helps our clients get value out of their investment in a Customer Data Platform (CDP) by developing a strategic roadmap focused on personalized activation. You will be working with a multidisciplinary team of Solution Architects, Data Engineers, Data Scientists, and Digital Marketers.
Key Duties and Responsibilities: Be a platform expert in one or more leading CDP solutions, with developer-level expertise on Lytics, Segment, Adobe Experience Platform, Amperity, Tealium, Treasure Data, etc., including custom-built CDPs. Deep developer-level expertise in real-time event tracking for web analytics, e.g., Google Tag Manager, Adobe Launch, etc. Provide deep domain expertise in our client's business and broad knowledge of digital marketing together with a Marketing Strategist. Deep expert-level knowledge of GA360/GA4, Adobe Analytics, Google Ads, DV360, Campaign Manager, Facebook Ads Manager, The Trade Desk, etc. Assess and audit the current state of a client's marketing technology stack (MarTech), including data infrastructure, ad platforms and data security policies, together with a Solutions Architect. Conduct stakeholder interviews and gather business requirements. Translate business requirements into BRDs and CDP customer analytics use cases, and structure the technical solution. Prioritize CDP use cases together with the client. Create a strategic CDP roadmap focused on data-driven marketing activation. Work with the Solution Architect to strategize, architect, and document a scalable CDP implementation tailored to the client's needs. Provide hands-on support and platform training for our clients. Data processing, data engineering and data schema/model expertise for CDPs to work on data models, unification logic, etc. Work with Business Analysts, Data Architects, Technical Architects and DBAs to achieve project objectives - delivery dates, quality objectives, etc. Business intelligence expertise for insights and actionable recommendations. Project management expertise for sprint planning.
Professional & Technical Skills: Relevant experience in the required domain. Strong analytical, problem-solving, and communication skills. Ability to work in a fast-paced, dynamic environment. Strong understanding of data governance and compliance (i.e. PII, PHI, GDPR, CCPA). Experience with analytics tools like Google Analytics or Adobe Analytics is a plus. Experience with A/B testing tools is a plus. Must have programming experience in PySpark, Python, and shell scripts. RDBMS, T-SQL and NoSQL experience is a must. Manage large volumes of structured and unstructured data; extract and clean data to make it amenable for analysis. Experience in deploying and operationalizing code is an added advantage. Experience with source control systems such as Git and Bitbucket, and with Jenkins build and continuous integration tools. Proficient in Excel, MS Word, PowerPoint, etc.
Technical Skills: Experience with any CDP platform, e.g., Lytics CDP platform developer, and/or Segment CDP platform developer, and/or Adobe Experience Platform (Real-Time CDP) developer, and/or custom CDP developer on any cloud. GA4/GA360 and/or Adobe Analytics. Google Tag Manager, and/or Adobe Launch, and/or any tag manager tool. Google Ads, DV360, Campaign Manager, Facebook Ads Manager, The Trade Desk, etc. Deep cloud experience (GCP, AWS, Azure). Advanced-level Python, SQL, and shell scripting experience. Data migration, DevOps, MLOps, Terraform script programming.
Soft Skills: Strong problem-solving skills. Good team player. Attention to detail. Good communication skills.
Additional Information: Opportunity to work on innovative projects. Career growth and leadership exposure.
About Our Company | Accenture
Experience: 3-5 years
Educational Qualification: Any degree

Posted 21 hours ago

Apply

0.0 years

0 Lacs

Gurugram, Haryana

Remote


Position: GCP Data Engineer
Company Info: Prama (HQ: Chandler, AZ, USA). Prama specializes in AI-powered and Generative AI solutions for Data, Cloud, and APIs. We collaborate with businesses worldwide to develop platforms and AI-powered products that offer valuable insights and drive business growth. Our comprehensive services include architectural assessment, strategy development, and execution to create secure, reliable, and scalable systems. We are experts in creating innovative platforms for various industries. We help clients to overcome complex business challenges. Our team is dedicated to delivering cutting-edge solutions that elevate the digital experience for corporations. Prama is headquartered in Phoenix with offices in the USA, Canada, Mexico, Brazil and India.
Location: Bengaluru | Gurugram | Hybrid
Benefits: 5-Day Working | Career Growth | Flexible Working | Potential On-site Opportunity
Kindly send your CV or Resume to careers@prama.ai
Primary skills: GCP, PySpark, Python, SQL, ETL
Job Description: We are seeking a highly skilled and motivated GCP Data Engineer to join our team. As a GCP Data Engineer, you will play a crucial role in designing, developing, and maintaining robust data pipelines and data warehousing solutions on the Google Cloud Platform (GCP). You will work closely with data analysts, data scientists, and other stakeholders to ensure the efficient collection, transformation, and analysis of large datasets.
Responsibilities: Design, develop, and maintain scalable data pipelines using GCP tools such as Dataflow, Dataproc, and Cloud Functions. Implement ETL processes to extract, transform, and load data from various sources into BigQuery. Optimize data pipelines for performance, cost-efficiency, and reliability. Collaborate with data analysts and data scientists to understand their data needs and translate them into technical solutions. Design and implement data warehouses and data marts using BigQuery. Model and structure data for optimal performance and query efficiency. Develop and maintain data quality checks and monitoring processes. Use SQL and Python (PySpark) to analyze large datasets and generate insights. Create visualizations using tools like Data Studio or Looker to communicate data findings effectively. Manage and maintain GCP resources, including virtual machines, storage, and networking. Implement best practices for security, cost optimization, and scalability. Automate infrastructure provisioning and management using tools like Terraform.
Qualifications: Strong proficiency in SQL, Python, and PySpark. Hands-on experience with GCP services, including BigQuery, Dataflow, Dataproc, Cloud Storage, and Cloud Functions. Experience with data warehousing concepts and methodologies. Understanding of data modeling techniques and best practices. Strong analytical and problem-solving skills. Excellent communication and collaboration skills. Experience with data quality assurance and monitoring. Knowledge of cloud security best practices. A passion for data and a desire to learn new technologies.
Preferred Qualifications: Google Cloud Platform certification. Experience with machine learning and AI. Knowledge of data streaming technologies (Kafka, Pub/Sub). Experience with data visualization tools (Looker, Tableau, Data Studio).
Job Type: Full-time
Pay: ₹1,200,000.00 - ₹2,000,000.00 per year
Benefits: Flexible schedule, health insurance, leave encashment, paid sick time, Provident Fund, work from home.
Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required)
Application Question(s): CTC, Expected CTC, Notice Period (days), Experience in GCP, Total Experience
Work Location: Hybrid remote in Gurugram, Haryana
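To illustrate the Dataproc-to-BigQuery flow described above, here is a minimal PySpark ETL sketch; it assumes a Dataproc cluster with the spark-bigquery connector available, and the bucket, project, dataset, and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("gcs-to-bigquery").getOrCreate()

# Extract: raw CSV files landed in Cloud Storage (hypothetical bucket/path).
raw = spark.read.option("header", True).csv("gs://example-bucket/raw/sales/*.csv")

# Transform: basic cleansing and type casting.
clean = (
    raw.dropDuplicates(["order_id"])
       .withColumn("amount", F.col("amount").cast("double"))
       .filter(F.col("amount") > 0)
)

# Load: append into a BigQuery table via the connector's temporary GCS bucket.
(
    clean.write.format("bigquery")
    .option("temporaryGcsBucket", "example-bucket-tmp")
    .mode("append")
    .save("example-project.analytics.sales")
)
```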

Posted 22 hours ago

Apply


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
Job Title: Backend Developer
Job Type: Full-time
Location: On-site, Hyderabad, Telangana, India
About us: Our mission at micro1 is to match the most talented people in the world with their dream jobs. If you are looking to be at the forefront of AI innovation and work with some of the fastest growing companies in Silicon Valley, we invite you to apply for a role. By joining the micro1 community, your resume will become visible to top industry leaders, unlocking access to the best career opportunities on the market.
Job Summary: Join our customer's team as a Backend Developer and play a pivotal role in building high-impact backend solutions at the forefront of AI and data engineering. This is your chance to work in a collaborative, onsite environment where your technical expertise and communication skills will drive the success of next-generation AI/ML applications.
Key Responsibilities: Develop, test, and maintain scalable backend components and microservices using Python and PySpark. Build and optimize advanced data pipelines leveraging Databricks and distributed computing platforms. Design and administer efficient MySQL databases, focusing on data integrity, availability, and performance. Integrate machine learning models into production-grade backend systems powering innovative AI features. Collaborate with data scientists and engineering peers to deliver comprehensive, business-driven solutions. Monitor, troubleshoot, and enhance system performance using Redis for caching and scalability. Create clear technical documentation and communicate proactively with the team, emphasizing both written and verbal skills.
Required Skills and Qualifications: Proficient in Python for backend development with strong coding standards. Practical experience with Databricks and PySpark in live production environments. Advanced knowledge of MySQL database design, query optimization, and maintenance. Solid foundation in machine learning concepts and deploying ML models in backend systems. Experience utilizing Redis for effective caching and state management. Outstanding written and verbal communication abilities with strong attention to detail. Demonstrated success working collaboratively in a fast-paced onsite setting in Hyderabad.
Preferred Qualifications: Background in high-growth AI/ML or complex data engineering projects. Familiarity with additional backend technologies or cloud-based platforms. Experience mentoring or leading technical teams.
Be a key contributor to our customer's team, delivering backend systems that seamlessly bridge data engineering and AI innovation. We value professionals who thrive on clear communication, technical excellence, and collaborative problem-solving.

Posted 22 hours ago

Apply

0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Role: Palantir Tech Lead
Location: Hyderabad (5 days work from office)
Contract duration: 12+ months
Skills: Python, PySpark and Palantir
Tasks and Responsibilities: Leads data engineering activities on moderate to complex data and analytics-centric problems which have broad impact and require in-depth analysis to obtain desired results; assemble, enhance, maintain, and optimize current solutions, enable cost savings, and meet individual project or enterprise maturity objectives.
Advanced working knowledge of SQL, Python, and PySpark. Experience using tools such as Git/Bitbucket, Jenkins/CodeBuild, CodePipeline. Experience with platform monitoring and alerting tools.
Work closely with Subject Matter Experts (SMEs) to design and develop Foundry front-end applications with the ontology (data model) and data pipelines supporting the applications. Implement data transformations to derive new datasets or create Foundry Ontology Objects necessary for business applications. Implement operational applications using Foundry tools (Workshop, Map, and/or Slate). Actively participate in agile/scrum ceremonies (stand-ups, planning, retrospectives, etc.). Create and maintain documentation describing the data catalog and data objects. Maintain applications as usage grows and requirements change. Promote a continuous improvement mindset by engaging in after-action reviews and sharing learnings. Use communication skills, especially for explaining technical concepts to nontechnical business leaders.

Posted 22 hours ago

Apply

10.0 years

0 Lacs

Kerala, India

On-site


Senior Data Engineer – AWS Expert (Lead/Associate Architect Level)
Location: Trivandrum or Kochi (On-site/Hybrid)
Experience: 10+ years (relevant AWS experience of 5+ years is mandatory)
About the Role: We're hiring a Senior Data Engineer with deep expertise in AWS services, strong hands-on experience in data ingestion, quality, and API development, and the leadership skills to operate at a Lead or Associate Architect level. This role demands a high level of technical ownership, especially in architecting scalable, reliable data pipelines and robust API integrations. You'll collaborate with cross-functional teams across geographies, so a willingness to work night shifts overlapping with US hours (till 10 AM IST) is essential.
Key Responsibilities: Data Engineering Leadership: Design and implement scalable, end-to-end data ingestion and processing frameworks using AWS. AWS Architecture: Hands-on development using AWS Glue, Lambda, EMR, Step Functions, S3, ECS, and other AWS services. Data Quality & Validation: Build automated checks, validation layers, and monitoring to ensure data accuracy and integrity. API Development: Develop secure, high-performance REST APIs for internal and external data integration. Collaboration: Work closely with product, analytics, and DevOps teams across geographies. Participate in Agile ceremonies and CI/CD pipelines using tools like GitLab.
What We're Looking For: Experience: 5+ years in Data Engineering, with a proven track record in designing scalable AWS-based data systems. Technical Mastery: Proficient in Python/PySpark, SQL, and building big data pipelines. AWS Expert: Deep knowledge of core AWS services used for data ingestion and processing. API Expertise: Experience designing and managing scalable APIs. Leadership Qualities: Ability to work independently, lead discussions, and drive technical decisions.
Preferred Qualifications: Experience with Kinesis, Firehose, SQS, and data lakehouse architectures. Exposure to tools like Apache Iceberg, Aurora, Redshift, and DynamoDB. Prior experience in distributed, multi-cluster environments.
Working Hours: US time zone overlap required: must be available to work night shifts overlapping with US hours (up to 10:00 AM IST).
Work Location: Trivandrum or Kochi; on-site or hybrid options available for the right candidate.
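As a small illustration of the API-based data ingestion this role covers, here is a hedged Python sketch that pulls records from a REST endpoint and lands them in S3 for downstream Glue/EMR processing; the endpoint, bucket, and key layout are placeholders, not details from the posting.

```python
import json
import boto3
import requests

# Hypothetical REST endpoint and raw-zone bucket.
API_URL = "https://api.example.com/v1/orders"
BUCKET = "example-raw-zone"

def ingest(run_date: str) -> str:
    # Pull one day's worth of records from the source API.
    response = requests.get(API_URL, params={"date": run_date}, timeout=30)
    response.raise_for_status()
    records = response.json()

    # One JSON object per line keeps the landed file splittable for Spark.
    body = "\n".join(json.dumps(record) for record in records)
    key = f"orders/ingest_date={run_date}/part-0000.json"
    boto3.client("s3").put_object(Bucket=BUCKET, Key=key, Body=body.encode("utf-8"))
    return key

if __name__ == "__main__":
    print(ingest("2024-01-01"))
```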

Posted 22 hours ago

Apply


0 years

0 Lacs

Pune, Maharashtra, India

On-site


Current scope and span of work:
Summary: Need is for a data engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.
Required Skills: Python. Processing of large quantities of text documents. Extraction of text from Office and PDF documents. Input JSON to an API, output JSON to an API. NiFi (or similar technology compatible with current EMIT practices). Basic understanding of AI/ML concepts. Database/search engine/SOLR skills. SQL - build queries to analyze, create and update databases. Understands the basics of hybrid search. Experience working with terabytes (TB) of data. Basic OpenML/Python/Azure knowledge. Scripting knowledge/experience in an Azure environment to automate. Cloud systems experience related to search and databases.
Platforms: Databricks, Snowflake, ESRI ArcGIS/SDE, and a new GenAI app being developed.
Scope of work: 1. Ingest TB of data from multiple sources identified by the Ingestion Lead. 2. Optimize data pipelines to improve data processing, speed, and data availability. 4. Make data available for end users from several hundred LAN and SharePoint areas. 5. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion. 6. Work closely with the Ingestion Lead and vendor on issues related to data ingestion.
Technical skills demonstrated: 1. SOLR - backend database. 2. NiFi - data movement. 3. PySpark - data processing. 4. Hive & Oozie - for job monitoring. 5. Querying - SQL, HQL and SOLR querying. 6. SQL. 7. Python.
Behavioral skills demonstrated: 1. Excellent communication skills. 2. Ability to receive direction from a Lead and implement it. 3. Prior experience working in an Agile setup, preferred. 4. Experience troubleshooting technical issues and quality-control checking of work. 5. Experience working with a globally distributed team.

Posted 22 hours ago

Apply

8.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Location: Hyderabad
Contract Duration: 6 Months
Experience Required: 8+ years (overall), 5+ years (relevant)
Primary Skills: Python, Spark (PySpark), SQL, Delta Lake
Key Responsibilities & Skills: Strong understanding of Spark core: RDDs, DataFrames, DataSets, SparkSQL, Spark Streaming. Proficient in Delta Lake features: time travel, schema evolution, data partitioning. Experience designing and building data pipelines using Spark and Delta Lake. Solid experience in Python/Scala/Java for Spark development. Knowledge of data ingestion from files, APIs, and databases. Familiarity with data validation and quality best practices. Working knowledge of data warehouse concepts and data modeling. Hands-on with Git for code versioning. Exposure to CI/CD pipelines and containerization tools. Nice to have: experience in ETL tools like DataStage, Prophecy, Informatica, or Ab Initio.
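To illustrate the Delta Lake features named above (partitioning, schema evolution, time travel), here is a minimal PySpark sketch; it assumes a Spark session configured with the Delta Lake libraries (e.g., Databricks or delta-spark), and the table path and columns are placeholders.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-demo").getOrCreate()
path = "/mnt/lake/silver/events"  # placeholder table location

# Write a partitioned Delta table.
df = spark.createDataFrame(
    [("e1", "IN", 10.0), ("e2", "US", 7.5)], ["event_id", "country", "value"]
)
df.write.format("delta").mode("overwrite").partitionBy("country").save(path)

# Schema evolution: append a DataFrame with an extra column using mergeSchema.
more = spark.createDataFrame(
    [("e3", "IN", 4.2, "mobile")], ["event_id", "country", "value", "channel"]
)
more.write.format("delta").mode("append").option("mergeSchema", "true").save(path)

# Time travel: read the table as of an earlier version and compare row counts.
v0 = spark.read.format("delta").option("versionAsOf", 0).load(path)
print(v0.count(), spark.read.format("delta").load(path).count())
```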

Posted 1 day ago

Apply


Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies