10.0 - 12.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Job Information: Job Opening ID ZR_2063_JOB | Date Opened 17/11/2023 | Industry Technology | Work Experience 10-12 years | Job Title Azure Data Architect | City Hyderabad | Province Telangana | Country India | Postal Code 500003 | Number of Positions 4
Locations: Coimbatore & Hyderabad
Key skills (mandatory): Azure + SQL + ADF + Databricks + design + architecture
- Total experience of 10+ years in data management, with Azure cloud data platform experience.
- Architect with the Azure stack (ADLS, AALS, Azure Databricks, Azure Stream Analytics, Azure Data Factory, Cosmos DB & Azure Synapse); mandatory expertise in Azure Stream Analytics, Databricks, Azure Synapse, and Azure Cosmos DB (a Cosmos DB sketch follows this posting).
- Must have worked on a large Azure data platform and dealt with high-volume Azure Stream Analytics workloads.
- Experience in designing cloud data platform architecture and large-scale environments.
- 5+ years of experience architecting and building cloud data lakes (specifically Azure data analytics technologies and architecture) and enterprise analytics solutions, and optimising real-time 'big data' pipelines, architectures, and data sets.
I'm interested
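A minimal sketch using the Azure Cosmos DB Python SDK, illustrating one of the mandatory technologies above; the account URI, database, container, and document fields are placeholders, not details from the posting:

```python
from azure.cosmos import CosmosClient, PartitionKey

# Placeholder endpoint and key; in practice use Azure AD auth or Key Vault.
client = CosmosClient("https://example-account.documents.azure.com:443/", credential="<key>")

database = client.create_database_if_not_exists("telemetry")
container = database.create_container_if_not_exists(
    id="device_events",
    partition_key=PartitionKey(path="/deviceId"),
)

# Write a sample document and query it back within one partition.
container.upsert_item({"id": "evt-001", "deviceId": "dev-42", "temperature": 21.7})

results = container.query_items(
    query="SELECT c.id, c.temperature FROM c WHERE c.deviceId = @dev",
    parameters=[{"name": "@dev", "value": "dev-42"}],
    partition_key="dev-42",
)
for item in results:
    print(item)
```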
Posted 1 month ago
8.0 - 12.0 years
3 - 6 Lacs
Bengaluru
Work from Office
Job Information: Job Opening ID ZR_2385_JOB | Date Opened 23/10/2024 | Industry IT Services | Work Experience 8-12 years | Job Title Data Modeller | City Bangalore South | Province Karnataka | Country India | Postal Code 560066 | Number of Positions 1
Locations: Pune / Bangalore / Hyderabad / Indore
Contract duration: 6 months
Responsibilities:
- Be responsible for the development of conceptual, logical, and physical data models, and the implementation of RDBMS, operational data stores (ODS), data marts, and data lakes on target platforms.
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational & dimensional - must; NoSQL - optional) and data tools (reporting, visualization, analytics, and machine learning).
- Work with business and application/solution teams to implement data strategies, build data flows, and develop conceptual/logical/physical data models.
- Define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Identify the architecture, infrastructure, and interfaces to data sources, tools supporting automated data loads, security concerns, analytic models, and data visualization.
- Hands-on modeling, design, configuration, installation, performance tuning, and sandbox POC.
- Work proactively and independently to address project requirements and articulate issues/challenges to reduce project delivery risks.
- Must have a payments background.
Skills:
- Hands-on relational, dimensional, and/or analytic experience (using RDBMS, dimensional, and NoSQL data platform technologies, and ETL and data ingestion protocols).
- Experience with data warehouse, data lake, and enterprise big data platforms in multi-data-center contexts required.
- Good knowledge of metadata management, data modeling, and related tools (Erwin, ER/Studio, or others) required.
- Experience in team management, communication, and presentation.
- Experience with Erwin, Visio, or any other relevant tool.
I'm interested
Posted 1 month ago
5.0 - 7.0 years
15 - 25 Lacs
Mumbai, Delhi / NCR, Bengaluru
Work from Office
About the Role: We are seeking a skilled and experienced Data Engineer to join our remote team. The ideal candidate will have 5-7 years of professional experience working with Python, PySpark, SQL, and Spark SQL, and will play a key role in building scalable data pipelines, optimizing data workflows, and supporting data-driven decision-making across the organization.
Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using PySpark and SQL (see the sketch after this posting).
- Develop and optimize Spark jobs for large-scale data processing.
- Collaborate with data scientists, analysts, and other engineers to ensure data quality and accessibility.
- Implement data integration from multiple sources into a unified data warehouse or lake.
- Monitor and troubleshoot data pipelines and ETL jobs for performance and reliability.
- Ensure best practices in data governance, security, and compliance.
- Create and maintain technical documentation related to data pipelines and infrastructure.
Location: Delhi NCR, Bangalore, Chennai, Pune, Kolkata, Ahmedabad, Mumbai, Hyderabad, Remote
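A minimal sketch of the kind of PySpark batch pipeline described above; the source/target paths and column names are invented for illustration and are not from the posting:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_daily_etl").getOrCreate()

# Extract: read raw CSV landed by an upstream process (hypothetical path).
raw = spark.read.option("header", True).csv("s3://example-bucket/raw/orders/")

# Transform: basic cleansing plus a daily revenue aggregate.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
       .withColumn("order_date", F.to_date("order_ts"))
)
daily_revenue = clean.groupBy("order_date").agg(F.sum("amount").alias("revenue"))

# Load: write partitioned Parquet for downstream analytics.
daily_revenue.write.mode("overwrite").partitionBy("order_date") \
    .parquet("s3://example-bucket/curated/daily_revenue/")
```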
Posted 1 month ago
7.0 - 12.0 years
20 - 35 Lacs
Pune, Bengaluru
Hybrid
Hi All, we have a senior position for a Databricks expert.
Job Location: Pune and Bangalore (hybrid)
Perks: pick-up and drop provided
Kindly note: overall experience should be 7+ years; immediate joiners only.
Role & responsibilities:
- Data engineering - data pipeline development using Azure Databricks - 5+ years
- Optimizing data processing performance, efficient resource utilization, and execution time
- Workflow orchestration - 5+ years
- Databricks features such as Databricks SQL, Delta Lake, and Workflows to orchestrate and manage complex data workflows - 5+ years (a minimal Delta Lake sketch follows this posting)
- Data modelling - 5+ years
Nice to have: knowledge of PySpark, good knowledge of data warehousing
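A minimal sketch of a Delta Lake upsert on Databricks, in the spirit of the Delta Lake/workflow tasks listed above; the table path and join key are assumptions for illustration:

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # provided automatically in Databricks notebooks

# Hypothetical incremental batch of customer updates.
updates = spark.read.parquet("/mnt/landing/customers_incremental/")

# Upsert into an existing Delta table keyed on customer_id.
target = DeltaTable.forPath(spark, "/mnt/curated/customers")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```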
Posted 1 month ago
10.0 - 18.0 years
22 - 27 Lacs
Hyderabad
Remote
Role: Data Architect / Data Modeler - ETL, Snowflake, DBT
Location: Remote
Duration: 14+ months
Timings: 5:30 pm IST to 1:30 am IST
Note: Looking for immediate joiners.
Job Summary: We are seeking a seasoned Data Architect / Modeler with deep expertise in Snowflake, DBT, and modern data architectures including Data Lake, Lakehouse, and Databricks platforms. The ideal candidate will be responsible for designing scalable, performant, and reliable data models and architectures that support analytics, reporting, and machine learning needs across the organization.
Key Responsibilities:
- Architect and design data solutions using Snowflake, Databricks, and cloud-native lakehouse principles.
- Lead the implementation of data modeling best practices (star/snowflake schemas, dimensional models) using DBT (see the dimensional-model sketch after this posting).
- Build and maintain robust ETL/ELT pipelines supporting both batch and real-time data processing.
- Develop data governance and metadata management strategies to ensure high data quality and compliance.
- Define data architecture frameworks, standards, and principles for enterprise-wide adoption.
- Work closely with business stakeholders, data engineers, analysts, and platform teams to translate business needs into scalable data solutions.
- Provide guidance on data lake and data warehouse integration, helping bridge structured and unstructured data needs.
- Establish data lineage and documentation, and maintain architecture diagrams and data dictionaries.
- Stay up to date with industry trends and emerging technologies in cloud data platforms and recommend improvements.
Required Skills & Qualifications:
- 10+ years of experience in data architecture, data engineering, or data modeling roles.
- Strong experience with Snowflake, including performance tuning, security, and architecture.
- Hands-on experience with DBT (Data Build Tool) for building and maintaining data transformation workflows.
- Deep understanding of lakehouse architecture, data lake implementations, and Databricks.
- Solid grasp of dimensional modeling, normalization/denormalization strategies, and data warehouse design principles.
- Experience with cloud platforms (e.g., AWS, Azure, or GCP).
- Proficiency in SQL and scripting languages (e.g., Python).
- Familiarity with data governance frameworks, data catalogs, and metadata management tools.
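A minimal sketch of a star-schema dimension/fact pair created through the Snowflake Python connector, in the spirit of the dimensional-modeling responsibility above; in practice such models would typically be managed as DBT models, and the account, schema, and column names here are placeholders, not details from the posting:

```python
import snowflake.connector

# Placeholder credentials; in practice these come from a secrets manager.
conn = snowflake.connector.connect(
    account="xy12345", user="etl_user", password="***",
    warehouse="TRANSFORM_WH", database="ANALYTICS", schema="MART",
)

ddl_statements = [
    """CREATE TABLE IF NOT EXISTS dim_customer (
           customer_sk   INTEGER IDENTITY,
           customer_id   VARCHAR NOT NULL,
           customer_name VARCHAR,
           region        VARCHAR
       )""",
    """CREATE TABLE IF NOT EXISTS fact_orders (
           order_id      VARCHAR NOT NULL,
           customer_sk   INTEGER,          -- surrogate key into dim_customer
           order_date    DATE,
           amount        NUMBER(12, 2)
       )""",
]

cur = conn.cursor()
try:
    for ddl in ddl_statements:
        cur.execute(ddl)
finally:
    cur.close()
    conn.close()
```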
Posted 1 month ago
5.0 - 8.0 years
7 - 10 Lacs
Bengaluru
Work from Office
Skill required: Delivery - Marketing Analytics and Reporting
Designation: I&F Decision Sci Practitioner Sr Analyst
Qualifications: Any Graduation
Years of Experience: 5 to 8 years
What would you do? Data & AI: analytical processes and technologies applied to marketing-related data to help businesses understand and deliver relevant experiences for their audiences, understand their competition, measure and optimize marketing campaigns, and optimize their return on investment.
What are we looking for? Data analytics, with a specialization in the marketing domain.
Domain-specific skills: familiarity with ad tech and B2B sales.
Technical skills:
- Proficiency in SQL and Python.
- Experience in efficiently building, publishing, and maintaining robust data models and warehouses for self-serve querying and advanced data science and ML analytic purposes.
- Experience in conducting ETL/ELT with very large and complicated datasets and handling DAG data dependencies.
- Strong proficiency with SQL dialects on distributed or data-lake-style systems (Presto, BigQuery, Spark/Hive SQL, etc.), including SQL-based experience in nested data structure manipulation, windowing functions, query optimization, and data partitioning techniques (a BigQuery windowing example follows this posting); knowledge of Google BigQuery optimization is a plus.
- Experience in schema design and data modeling strategies (e.g., dimensional modeling, data vault, etc.).
- Significant experience with dbt (or similar tools) and Spark-based (or similar) data pipelines.
- General knowledge of Jinja templating in Python.
- Hands-on experience with cloud provider integration and automation via CLIs and APIs.
Soft skills: ability to work well in a team, agility for quick learning, written and verbal communication.
Roles and Responsibilities: In this role you are required to analyse and solve increasingly complex problems. Your day-to-day interactions are with peers within Accenture. You are likely to have some interaction with clients and/or Accenture management. You will be given minimal instruction on daily work/tasks and a moderate level of instruction on new assignments. Decisions that you make impact your own work and may impact the work of others. In this role you would be an individual contributor and/or oversee a small work effort and/or team. Please note that this role may require you to work in rotational shifts.
Qualifications: Any Graduation
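A minimal sketch of running a windowing-function query against BigQuery from Python, illustrating the SQL skills listed above; the project, dataset, table, and column names are invented for the example:

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Hypothetical table: rank campaigns by spend within each channel for yesterday.
sql = """
    SELECT
        channel,
        campaign_id,
        spend,
        RANK() OVER (PARTITION BY channel ORDER BY spend DESC) AS spend_rank
    FROM `example-project.marketing.campaign_spend`
    WHERE event_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
"""

for row in client.query(sql).result():
    print(row.channel, row.campaign_id, row.spend_rank)
```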
Posted 1 month ago
3.0 - 8.0 years
20 - 32 Lacs
Bengaluru
Work from Office
Translate ideas and designs into running code. Automate business processes using Office 365 Power Automate, Power Apps, and Power BI. Perform software design, debugging, testing, and deployment. Implement custom solutions leveraging Canvas Apps, Model-Driven Apps, and Office 365. Required candidate profile: production-level app development experience using Power Apps, Power Automate, and Power BI; experience in C#, JavaScript, jQuery, Bootstrap, and HTML; experience in SAP HANA, ETL processes, data modeling, data cleaning, and data pre-processing.
Posted 1 month ago
3.0 - 6.0 years
5 - 15 Lacs
Kochi, Thiruvananthapuram
Hybrid
Hiring for an Azure Data Engineer in Kochi.
Experience: 3 to 6 years
Location: Kochi
JD:
- Overall 3+ years of IT experience with 2+ years of relevant experience in Azure Data Factory (ADF), with good hands-on exposure to the latest ADF version.
- Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse).
- Should have project experience in Azure Data Lake / Blob (for storage).
- Should have a basic understanding of Batch Account configuration and various control options.
- Sound knowledge of Databricks and Logic Apps.
- Should be able to coordinate independently with business stakeholders, understand the business requirements, and implement them using ADF.
Interested candidates please share your updated resume with the below details at Smita.Dattu.Sarwade@gds.ey.com: total experience, relevant experience, current location, preferred location, current CTC, expected CTC, notice period.
Posted 1 month ago
6.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Overview: We are looking for an experienced GCP BigQuery Lead to architect, develop, and optimize data solutions on Google Cloud Platform, with a strong focus on BigQuery. The role involves leading warehouse setup initiatives, collaborating with stakeholders, and ensuring scalable, secure, and high-performance data infrastructure.
Responsibilities:
- Lead the design and implementation of data pipelines using BigQuery, Datorama, Dataflow, and other GCP services.
- Architect and optimize data models and schemas to support analytics and reporting use cases.
- Implement best practices for performance tuning, partitioning, and cost optimization in BigQuery (see the partitioned-table sketch after this posting).
- Collaborate with business stakeholders to translate requirements into scalable data solutions.
- Ensure data quality, governance, and security across all BigQuery data assets.
- Automate workflows using orchestration tools.
- Mentor junior resources and lead script reviews, documentation, and knowledge sharing.
Qualifications:
- 6+ years of experience in data analytics, with 3+ years on GCP and BigQuery.
- Strong proficiency in SQL, with experience in writing complex queries and optimizing performance.
- Hands-on experience with ETL/ELT tools and frameworks.
- Deep understanding of data warehousing, dimensional modeling, and data lake architectures.
- Good exposure to data governance, lineage, and metadata management.
- GCP Data Engineer certification is a plus.
- Experience with BI tools (e.g., Looker, Power BI).
- Good communication and team-lead skills.
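A minimal sketch of creating a partitioned and clustered BigQuery table from Python, one common lever for the performance and cost optimization mentioned above; the project, dataset, table, and column names are placeholders:

```python
from google.cloud import bigquery

client = bigquery.Client(project="example-project")  # placeholder project

schema = [
    bigquery.SchemaField("event_date", "DATE"),
    bigquery.SchemaField("customer_id", "STRING"),
    bigquery.SchemaField("event_type", "STRING"),
    bigquery.SchemaField("amount", "NUMERIC"),
]

table = bigquery.Table("example-project.analytics.events", schema=schema)
# Partition by date and cluster by customer_id/event_type so queries that
# filter on these columns scan (and bill for) less data.
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY, field="event_date"
)
table.clustering_fields = ["customer_id", "event_type"]

client.create_table(table, exists_ok=True)
```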
Posted 1 month ago
18.0 - 23.0 years
15 - 19 Lacs
Hyderabad
Work from Office
About the Role: We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 18 years of experience in data engineering and analytics and a proven track record of designing and implementing complex data solutions. As a senior principal data architect, you will be expected to design, create, deploy, and manage Blackbaud's data architecture. This role has considerable technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. This individual acts as an evangelist for proper data strategy with other teams at Blackbaud and assists with the technical direction, specifically with data, of other projects.
What you'll do:
- Develop and direct the strategy for all aspects of Blackbaud's data and analytics platforms, products, and services.
- Set, communicate, and facilitate technical direction more broadly for the AI Center of Excellence, and collaboratively beyond the Center of Excellence.
- Design and develop breakthrough products, services, or technological advancements in the data intelligence space that expand our business.
- Work alongside product management to craft technical solutions that solve customer business problems.
- Own the technical data governance practices and ensure data sovereignty, privacy, security, and regulatory compliance.
- Continuously challenge the status quo of how things have been done in the past.
- Build a data access strategy to securely democratize data and enable research, modelling, machine learning, and artificial intelligence work.
- Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice.
- Work in a cross-functional team to translate business needs into data architecture solutions.
- Ensure data solutions are built for performance, scalability, and reliability.
- Mentor junior data architects and team members.
- Keep current on technology: distributed computing, big data concepts, and architecture.
- Promote internally how data within Blackbaud can help change the world.
What you'll bring:
- 18+ years of experience in data and advanced analytics.
- At least 8 years of experience working on data technologies in Azure/AWS.
- Expertise in SQL and Python.
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies.
- Expertise in Databricks and Microsoft Fabric.
- Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products.
- Experience with machine learning.
- Excellent communication and leadership skills.
Preferred Qualifications: experience working with .NET/Java and microservice architecture.
Stay up to date on everything Blackbaud; follow us on LinkedIn, X, Instagram, Facebook and YouTube. Blackbaud is a digital-first company which embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today! Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status or any other basis protected by federal, state, or local law.
Posted 1 month ago
4.0 - 9.0 years
5 - 15 Lacs
Hyderabad, Chennai
Work from Office
Key skills: Python, SQL, PySpark, Databricks, AWS (mandatory). Added advantage: life sciences/pharma.
Roles and Responsibilities:
1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks (see the Airflow sketch after this posting).
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
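A minimal Airflow DAG sketch that triggers a Databricks job run via the Jobs API, matching the orchestration responsibility above; the workspace URL, token handling, and job ID are assumptions for illustration:

```python
from datetime import datetime
import os
import requests
from airflow import DAG
from airflow.operators.python import PythonOperator

def trigger_databricks_job():
    """Kick off a pre-defined Databricks job (hypothetical job_id)."""
    host = "https://adb-1234567890123456.7.azuredatabricks.net"  # placeholder workspace
    token = os.environ["DATABRICKS_TOKEN"]                        # placeholder secret handling
    resp = requests.post(
        f"{host}/api/2.1/jobs/run-now",
        headers={"Authorization": f"Bearer {token}"},
        json={"job_id": 42},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["run_id"]

with DAG(
    dag_id="daily_databricks_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    run_job = PythonOperator(
        task_id="run_databricks_job",
        python_callable=trigger_databricks_job,
    )
```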
Posted 1 month ago
4.0 - 9.0 years
0 - 1 Lacs
Ahmedabad
Work from Office
Skills & Tools:
- Platforms: Oracle Primavera P6/EPPM, Microsoft Project Online, Planisware, Clarity PPM
- Integration Tools: APIs (REST/SOAP), ETL tools (Informatica, Talend), Azure Data Factory
- IAM/Security: Azure AD, Okta, SAML/OAuth, RBAC, SIEM tools
- Data Technologies: data lakes (e.g., AWS S3, Azure Data Lake), SQL, Power BI/Tableau
- Languages: Python, SQL, PowerShell, JavaScript (for scripting and integrations)
Role & responsibilities: Technical Consultant - EPPM Platform, Cybersecurity, and Data Integrations
Role Overview: As a Technical Consultant, you will be responsible for end-to-end setup and configuration of the Enterprise Project Portfolio Management (EPPM) platform, ensuring secure, efficient, and scalable integrations with enterprise systems including data lakes, access control tools, and project governance frameworks. You will work at the intersection of technology, security, and project operations, enabling the business to manage project portfolios effectively. A sketch of the token-based REST integration pattern this role involves follows this posting.
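A minimal sketch of the OAuth-secured REST integration pattern referenced above: fetch a client-credentials token from Azure AD, then call a hypothetical EPPM API endpoint. The tenant, client IDs, scope, and API URL are all placeholders, not details from the posting:

```python
import requests

TENANT_ID = "00000000-0000-0000-0000-000000000000"   # placeholder
CLIENT_ID = "my-integration-app"                      # placeholder
CLIENT_SECRET = "***"                                 # placeholder; keep in a vault

# 1. Obtain an access token using the OAuth 2.0 client-credentials grant.
token_resp = requests.post(
    f"https://login.microsoftonline.com/{TENANT_ID}/oauth2/v2.0/token",
    data={
        "grant_type": "client_credentials",
        "client_id": CLIENT_ID,
        "client_secret": CLIENT_SECRET,
        "scope": "api://eppm-gateway/.default",       # hypothetical scope
    },
    timeout=30,
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# 2. Call a hypothetical EPPM REST endpoint with the bearer token.
projects = requests.get(
    "https://eppm.example.com/api/v1/projects",       # hypothetical endpoint
    headers={"Authorization": f"Bearer {access_token}"},
    timeout=30,
)
projects.raise_for_status()
print(len(projects.json()), "projects fetched")
```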
Posted 1 month ago
6.0 - 11.0 years
25 - 30 Lacs
Bengaluru
Hybrid
Mandatory skills: Data Engineering, AWS Athena, AWS Glue, Redshift, data lake, lakehouse, Python, SQL Server
Must-have experience:
- 6+ years of hands-on data engineering experience
- Expertise with AWS services: S3, Redshift, EMR, Glue, Kinesis, DynamoDB
- Building batch and real-time data pipelines
- Python and SQL coding for data processing and analysis (see the Athena sketch after this posting)
- Data modeling experience using cloud-based data platforms like Redshift, Snowflake, Databricks
- Design and develop ETL frameworks
Nice-to-have experience:
- ETL development using tools like Informatica, Talend, Fivetran
- Creating reusable data sources and dashboards for self-service analytics
- Experience using Databricks for Spark workloads, or Snowflake
- Working knowledge of big data processing
- CI/CD setup
- Infrastructure-as-code implementation
- Any one of the AWS Professional Certifications
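A minimal boto3 sketch of running an Athena query over data in S3, the kind of Python/SQL task listed above; the region, database, table, and bucket names are invented:

```python
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")  # placeholder region

# Start the query; results land in a hypothetical S3 staging location.
start = athena.start_query_execution(
    QueryString="SELECT order_date, SUM(amount) AS revenue "
                "FROM sales GROUP BY order_date ORDER BY order_date",
    QueryExecutionContext={"Database": "analytics_lake"},
    ResultConfiguration={"OutputLocation": "s3://example-athena-results/"},
)
query_id = start["QueryExecutionId"]

# Poll until the query finishes, then fetch the first page of results.
while True:
    state = athena.get_query_execution(QueryExecutionId=query_id)["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(2)

if state == "SUCCEEDED":
    rows = athena.get_query_results(QueryExecutionId=query_id)["ResultSet"]["Rows"]
    for row in rows[1:]:  # first row is the header
        print([col.get("VarCharValue") for col in row["Data"]])
```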
Posted 1 month ago
9.0 - 14.0 years
55 - 60 Lacs
Bengaluru
Hybrid
Dodge Position Title: Technology Lead
STG Labs Position Title:
Location: Bangalore, India
About Dodge: Dodge Construction Network exists to deliver the comprehensive data and connections the construction industry needs to build thriving communities. Our legacy is deeply rooted in empowering our customers with transformative insights, igniting their journey towards unparalleled business expansion and success. We serve decision-makers who seek reliable growth and who value relationships built on trust and quality. By combining our proprietary data with cutting-edge software, we deliver to our customers the essential intelligence needed to excel within their respective landscapes. We propel the construction industry forward by transforming data into tangible guidance, driving unparalleled advancement. Dodge is the catalyst for modern construction. https://www.construction.com/
About Symphony Technology Group (STG): STG is a Silicon Valley (California) based private equity firm that has a long and successful track record of transforming high-potential software and software-enabled services companies, as well as insights-oriented companies, into definitive market leaders. The firm brings expertise, flexibility, and resources to build strategic value and unlock the potential of innovative companies. Partnering to build customer-centric, market-winning portfolio companies, STG creates sustainable foundations for growth that bring value to all existing and future stakeholders. The firm is dedicated to transforming and building outstanding technology companies in partnership with world-class management teams. With over $5.0 billion in assets under management, including a recently raised $2.0 billion fund, STG's expansive portfolio has consisted of more than 30 global companies. STG Labs is the incubation center for many of STG's portfolio companies, building their engineering, professional services, and support delivery teams in India. STG Labs offers an entrepreneurial start-up environment for software and AI engineers, data scientists and analysts, and project and product managers, and provides a unique opportunity to work directly for a software or technology company. Based in Bangalore, STG Labs supports hybrid working. https://stg.com
Roles and Responsibilities:
- Lead the design, deployment, and management of data mart and analytics infrastructure leveraging AWS services.
- Implement and manage CI/CD pipelines using industry-leading DevOps practices and tools.
- Design, implement, and oversee API architecture, ensuring robust, scalable, and secure REST API development using AWS API Gateway.
- Collaborate closely with data engineers, architects, and analysts to design highly performant and scalable data solutions.
- Mentor and guide engineering teams, fostering a culture of continuous learning and improvement.
- Optimize cloud resources for cost-efficiency, scalability, and reliability.
- Establish best practices and standards for AWS infrastructure, DevOps processes, API design, and data analytics workflows.
Qualifications - hands-on working knowledge and experience is required in:
- Data structures, memory management, and basic algorithms (search, sort, etc.)
- AWS data services: Redshift, Glue, EMR, Athena, Lake Formation, Lambda
- Infrastructure-as-code tools: Terraform, AWS CloudFormation
- Scripting languages: Python, Bash, SQL
- DevOps tooling: Docker, Kubernetes, Jenkins, Bitbucket (must be comfortable in CLI/terminal environments)
- Command line / terminal environments
- AWS security best practices
- Scalable data marts, analytics systems, and RESTful APIs
Hands-on working knowledge and experience is preferred in:
- Container orchestration: Kubernetes, EKS
- Data visualization & warehousing: Tableau, data warehouses
- Machine learning & big data pipelines
Certifications preferred: AWS certifications (Solutions Architect Professional, DevOps Engineer).
Posted 1 month ago
10.0 - 17.0 years
9 - 19 Lacs
Bengaluru
Remote
Azure Data Engineer. Skills required: Azure Data Engineer, Big Data, Hadoop. Develop and maintain data pipelines using Azure services like Data Factory, PySpark, Synapse, Databricks, Adobe, Spark, Scala, etc.
Posted 1 month ago
7.0 - 9.0 years
1 - 6 Lacs
Bengaluru
Work from Office
Designation: Data Engineer
Job Location: Bangalore
About Digit Insurance: Digit's mission is to 'Make Insurance, Simple'. We are backed by Fairfax, one of the largest global investment firms. We have also been ranked as a 'LinkedIn top 5 startup' of 2018 and 2019 and are the fastest-growing insurance company. We have also been certified as a Great Place to Work! Digit has entered the unicorn club with a valuation of $1.9 billion, and while most companies take about a decade to get here, we have achieved this in just 3 years. And we truly believe that this has happened as a result of the mission of the company, i.e. to make insurance simple, along with the sheer hard work and endeavors of our employees. We are re-imagining products and redesigning processes to provide simple and transparent insurance solutions that matter to consumers. We are building a technology-driven platform that can offer customized products at reduced cost and provide great customer service. We are also the only cloud-based insure-tech company with a very focused approach towards in-house development. We are using new-age technologies like Java microservices, full stack, Angular 6, Python, React Native, DB2, machine learning, data science, and cloud-native architecture in AWS and Azure. We are headquartered in Bangalore, with a presence in Pune, Trivandrum, and across India. The team is a great mix of industry veterans who know what's working and new-age technology specialists who know what could be improved.
What are we looking for? We are looking for candidates to join our Data Science team as Data Engineers.
Total experience range: 5 to 8 years of experience in SQL, Python scripting, and any cloud technologies.
Skill set:
- Strong proficiency in coding, especially in Python or any other scripting language
- Working knowledge of Linux OS or shell scripting
- Hands-on experience in SQL
- Working knowledge of any cloud technologies
- Exposure to data lake concepts
Roles and Responsibilities:
- Responsible for end-to-end development of projects, which includes understanding requirements, designing the solution, implementing, testing, and maintenance.
- Responsible for resolving issues that might occur in existing solutions.
- Responsible for optimization of existing solutions to save time and resources.
Posted 1 month ago
3.0 - 6.0 years
7 - 9 Lacs
Jaipur, Bengaluru
Work from Office
We are seeking a skilled Data Engineer to join our team. The ideal candidate will have strong experience with Databricks, Python, SQL, Spark, PySpark, DBT, and AWS, and a solid understanding of big data ecosystems, data lake architecture, and data modeling.
Posted 1 month ago
7.0 - 12.0 years
3 - 7 Lacs
Gurugram
Work from Office
AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation.
At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD.
AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will be responsible for working on a variety of data projects. This includes orchestrating pipelines using modern data engineering tools/architectures as well as design and integration of existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms.
Responsibilities:
- A Sr. Data Engineer should be able to build, operationalize, and monitor data processing systems.
- Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms, using batch and streaming mechanisms and leveraging the cloud-native toolset (see the streaming sketch after this posting).
- Implement custom applications using tools such as Event Hubs, ADF, and other cloud-native tools as required to address streaming use cases.
- Engineer and maintain ELT processes for loading the data lake (cloud storage, Data Lake Storage Gen2).
- Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions.
- Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges.
- Work with other scrum team members to estimate and deliver work inside of a sprint.
- Research data questions, identify root causes, and interact closely with business users and technical resources.
- Should possess ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams.
- Must have experience in raising tickets with Microsoft and engaging with them to address any service or tool outages in production.
Qualifications:
- 7+ years of professional technical experience
- 5+ years of hands-on data architecture and data modelling at SME level
- 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, Python
- 5+ years of experience working in cloud environments (AWS and/or Azure)
- 3+ years of programming languages such as Python, Spark, and Spark SQL
- Should have strong knowledge of the architecture of ADF and Databricks
- Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments
- Strong client-facing communication and facilitation skills
- Strong sense of urgency, ability to set priorities and perform the job with little guidance
- Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels
- Strong interpersonal and communication skills (written and oral) required
- Should be able to work in shifts
- Should have knowledge of the Azure DevOps process
Key Skills: Azure Data Factory, Azure Databricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, Event Hubs, Azure Delta, Spark Streaming
Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning.
USA Employment Benefits include: medical, dental, and vision insurance; 401(k); paid company holidays; paid time off; paid parental and caregiver leave; plus more. See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
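A minimal Spark Structured Streaming sketch of the ingest-to-data-lake pattern described above (streaming newly arriving files from a landing zone into a Delta table); the ADLS Gen2 paths and schema are placeholders, and the Event Hubs source mentioned in the posting would need its own connector configuration:

```python
from pyspark.sql import SparkSession
from pyspark.sql.types import StructType, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("streaming_ingest").getOrCreate()

# Streaming file sources require an explicit schema.
schema = (
    StructType()
    .add("event_id", StringType())
    .add("event_ts", TimestampType())
    .add("payload", StringType())
    .add("value", DoubleType())
)

# Read newly arriving JSON files from a hypothetical ADLS Gen2 landing path.
stream = (
    spark.readStream.schema(schema)
    .json("abfss://landing@exampleaccount.dfs.core.windows.net/events/")
)

# Continuously append to a Delta table, tracking progress via a checkpoint.
query = (
    stream.writeStream.format("delta")
    .option("checkpointLocation", "abfss://curated@exampleaccount.dfs.core.windows.net/_checkpoints/events/")
    .outputMode("append")
    .start("abfss://curated@exampleaccount.dfs.core.windows.net/events/")
)
query.awaitTermination()
```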
Posted 1 month ago
5.0 - 10.0 years
20 - 30 Lacs
Hyderabad
Remote
Hiring for a top MNC for a Data Modeler position (long-term contract - 2+ years). The Data Modeler designs and implements data models for Microsoft Fabric and Power BI, supporting the migration from Oracle/Informatica. This offshore role ensures optimized data structures for performance and reporting needs. The successful candidate will bring expertise in data modeling and a collaborative approach.
Responsibilities:
- Develop conceptual, logical, and physical data models for Microsoft Fabric and Power BI solutions.
- Implement data models for relational, dimensional, and data lake environments on target platforms.
- Collaborate with the offshore Data Engineer and onsite Data Modernization Architect to ensure model alignment.
- Define and govern data modeling standards, tools, and best practices.
- Optimize data structures for query performance and scalability.
- Provide updates on modeling progress and dependencies to the offshore Project Manager.
Skills:
- Bachelor's or master's degree in computer science, data science, or a related field.
- 5+ years of data modeling experience with relational and NoSQL platforms.
- Proficiency with modeling tools (e.g., Erwin, ER/Studio) and SQL.
- Experience with Microsoft Fabric, data lakes, and BI data structures.
- Strong analytical and communication skills for team collaboration.
- Attention to detail with a focus on performance and consistency.
- Management, communication, and presentation skills.
Posted 1 month ago
3.0 - 5.0 years
15 - 27 Lacs
Bengaluru
Work from Office
Job Summary: The NetApp Keystone team is responsible for cutting-edge technologies that enable NetApp's pay-as-you-go offering. Keystone helps customers manage data on-prem or in the cloud, with invoices charged on a subscription basis. As an engineer in NetApp's Keystone organization, you will be executing our most challenging and complex projects. You will be responsible for decomposing complex product requirements into simple solutions, understanding system interdependencies and limitations, and applying engineering best practices.
Job Requirements:
- Strong knowledge of the Python programming language - paradigms, constructs, and idioms
- Bachelor's/master's degree in computer science, information technology, or engineering
- Knowledge of various Python frameworks and tools
- 2+ years of experience working with the Python programming language
- Strong written and communication skills with proven fluency in English
- Proficiency in writing code for back end and front end
- Familiarity with database technologies such as NoSQL, Prometheus, and data lakes
- Hands-on experience with version control tools like Git
- Passionate about learning new tools, languages, philosophies, and workflows
- Experience working with generated code and code generation techniques
- Knowledge of software development methodologies - SCRUM/AGILE/LEAN
- Knowledge of software deployment - Docker/Kubernetes
- Knowledge of software team tools - Git/JIRA/CI/CD
Education: Minimum of 2 to 4 years of experience required, with a B.Tech or M.Tech background.
Posted 1 month ago
5.0 - 8.0 years
25 - 35 Lacs
Gurugram, Bengaluru
Hybrid
Role & responsibilities:
- Work with data product managers, analysts, and data scientists to architect, build, and maintain data processing pipelines in SQL or Python.
- Build and maintain a data warehouse / data lakehouse for analytics, reporting, and ML predictions.
- Implement DataOps and related DevOps focused on creating ETL pipelines for data analytics/reporting, and ELT pipelines for model training.
- Support, optimise, and transition our current processes to ensure well-architected implementations and best practices.
- Work in an agile environment within a collaborative agile product team using Kanban.
- Collaborate across departments and work closely with data science teams and with business (economists/data) analysts in refining their data requirements for various initiatives and data consumption requirements.
- Educate and train colleagues such as data scientists, analysts, and stakeholders in data pipelining and preparation techniques, which make it easier for them to integrate and consume the data they need for their own use cases.
- Participate in ensuring compliance and governance during data use, to ensure that data users and consumers use the data provisioned to them responsibly through data governance and compliance initiatives.
- Become a data and analytics evangelist: promote the available data and analytics capabilities and expertise to business unit leaders, and educate them in leveraging these.
What you'll need to be successful:
- 8+ years of professional experience with data processing environments used in large-scale digital applications.
- Extensive experience with programming in Python, Spark (Spark SQL), and SQL.
- Experience with warehouse technologies such as Snowflake, and data modelling, lineage, and data governance tools such as Alation.
- Professional experience designing, building, and managing bespoke data pipelines (including ETL, ELT, and lambda architectures), using technologies such as Apache Airflow, Snowflake, Amazon Athena, AWS Glue, Amazon EMR, or other equivalents.
- Strong, fundamental technical expertise in cloud-native technologies, such as serverless functions, API gateways, relational and NoSQL databases, and caching.
- Experience in leading/mentoring data engineering teams.
- Experience working in teams with data scientists and ML engineers, building automated pipelines for data pre-processing and feature extraction.
- An advanced degree in software/data engineering, computer/information science, or a related quantitative field, or equivalent work experience.
- Strong verbal and written communication skills and ability to work well with a wide range of stakeholders.
- Strong ownership; scrappy and biased for action.
Posted 1 month ago
5.0 - 7.0 years
15 - 25 Lacs
Chennai
Work from Office
Job Summary: We are seeking a skilled Big Data Tester & Developer to design, develop, and validate data pipelines and applications on large-scale data platforms. You will work on data ingestion, transformation, and testing workflows using tools from the Hadoop ecosystem and modern data engineering stacks.
Experience: 6-12 years
Key Responsibilities:
- Develop and test big data pipelines using Spark, Hive, Hadoop, and Kafka
- Write and optimize PySpark/Scala code for data processing
- Design test cases for data validation, quality, and integrity (see the validation sketch after this posting)
- Automate testing using Python/Java and tools like Apache NiFi, Airflow, or DBT
- Collaborate with data engineers, analysts, and QA teams
Key Skills:
- Strong hands-on experience with big data tools: Spark, Hive, HDFS, Kafka
- Proficient in PySpark, Scala, or Java
- Experience in data testing, ETL validation, and data quality checks
- Familiarity with SQL, NoSQL, and data lakes
- Knowledge of CI/CD, Git, and automation frameworks
We are also looking for a skilled PostgreSQL Developer/DBA to design, implement, optimize, and maintain our PostgreSQL database systems. You will work closely with developers and data teams to ensure high performance, scalability, and data integrity.
Experience: 6 to 12 years
Key Responsibilities:
- Develop complex SQL queries, stored procedures, and functions
- Optimize query performance and database indexing
- Manage backups, replication, and security
- Monitor and tune database performance
- Support schema design and data migrations
Key Skills:
- Strong hands-on experience with PostgreSQL
- Proficient in SQL and PL/pgSQL scripting
- Experience in performance tuning, query optimization, and indexing
- Familiarity with logical replication, partitioning, and extensions
- Exposure to tools like pgAdmin, psql, or PgBouncer
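A minimal PySpark sketch of the source-vs-target validation checks described above (row-count reconciliation, null and duplicate checks on a key column); the table paths and key column are invented for the example:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("etl_validation").getOrCreate()

# Hypothetical source extract and loaded target table.
source = spark.read.parquet("hdfs:///data/raw/orders/")
target = spark.read.parquet("hdfs:///data/curated/orders/")

checks = {}

# 1. Row-count reconciliation between source and target.
checks["row_count_match"] = source.count() == target.count()

# 2. Key column must not contain nulls in the target.
checks["no_null_keys"] = target.filter(F.col("order_id").isNull()).count() == 0

# 3. Key column must be unique in the target.
checks["no_duplicate_keys"] = (
    target.groupBy("order_id").count().filter(F.col("count") > 1).count() == 0
)

failed = [name for name, passed in checks.items() if not passed]
if failed:
    raise AssertionError(f"Data quality checks failed: {failed}")
print("All data quality checks passed")
```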
Posted 1 month ago
5.0 - 10.0 years
10 - 15 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Designation: Azure Data Engineer
Experience: 5+ years
Location: Chennai, Bangalore, Pune, Mumbai
Notice Period: Immediate joiners / serving notice period
Shift Timing: 3:30 PM IST to 12:30 AM IST
Job Description - Azure Data Engineer:
Must have: Azure Databricks, Azure Data Factory, Spark SQL with analytical knowledge
- 6-7 years of development experience in data engineering skills
- Strong experience in Spark
- Understand complex data systems by working closely with engineering and product teams
- Develop scalable and maintainable applications to extract, transform, and load data in various formats to SQL Server, Hadoop data lake, or other data storage locations
Sincerely,
Sonia
HR Recruiter
Talent Sketchers
Posted 1 month ago
5.0 - 9.0 years
20 - 30 Lacs
Pune
Hybrid
Job Summary: We are looking for a highly skilled AWS Data Engineer with over 5 years of experience in designing, developing, and maintaining scalable data pipelines on AWS. The ideal candidate will be proficient in data engineering best practices and cloud-native technologies, with hands-on experience in building ETL/ELT pipelines, working with large datasets, and optimizing data architecture for analytics and business intelligence.
Key Responsibilities:
- Design, build, and maintain scalable and robust data pipelines and ETL processes using AWS services (e.g., Glue, Lambda, EMR, Redshift, S3, Athena); a minimal Glue job sketch follows this posting.
- Collaborate with data analysts, data scientists, and stakeholders to understand data requirements and deliver high-quality solutions.
- Implement data lake and data warehouse architectures, ensuring data governance, data quality, and compliance.
- Optimize data pipelines for performance, reliability, scalability, and cost.
- Automate data ingestion and transformation workflows using Python, PySpark, or Scala.
- Manage and monitor data infrastructure, including logging, error handling, alerting, and performance metrics.
- Leverage infrastructure-as-code tools like Terraform or AWS CloudFormation for infrastructure deployment.
- Ensure security best practices are implemented for data access and storage (IAM, KMS, encryption, etc.).
- Document data processes, architectures, and standards.
Required Qualifications:
- Bachelor's or master's degree in Computer Science, Information Systems, or a related field.
- Minimum 5 years of experience as a Data Engineer with a focus on AWS cloud services.
- Strong experience in building ETL/ELT pipelines using AWS Glue, EMR, Lambda, and Step Functions.
- Proficiency in SQL, Python, PySpark, and data modeling techniques.
- Experience working with data lakes (S3) and data warehouses (Redshift, Snowflake, etc.).
- Experience with Athena, Kinesis, Kafka, or similar streaming data tools is a plus.
- Familiarity with DevOps and CI/CD processes, using tools like Git, Jenkins, or GitHub Actions.
- Understanding of data privacy, governance, and compliance standards such as GDPR, HIPAA, etc.
- Strong problem-solving and analytical skills, with the ability to work in a fast-paced environment.
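A minimal sketch of an AWS Glue ETL job script of the kind referenced above, reading from the Glue Data Catalog and writing Parquet to S3; the database, table, field, and bucket names are placeholders:

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (placeholder names).
orders = glue_context.create_dynamic_frame.from_catalog(
    database="raw_zone", table_name="orders"
)

# Simple transform: drop unused columns before writing to the curated zone.
curated = orders.drop_fields(["_ingest_debug", "_source_file"])

glue_context.write_dynamic_frame.from_options(
    frame=curated,
    connection_type="s3",
    connection_options={"path": "s3://example-curated-bucket/orders/"},
    format="parquet",
)

job.commit()
```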
Posted 1 month ago
9.0 - 12.0 years
25 - 40 Lacs
Hyderabad
Work from Office
Job Description: GCP Cloud Architect
Opportunity: We are seeking a highly skilled and experienced GCP Cloud Architect to join our dynamic technology team. You will play a crucial role in designing, implementing, and managing our Google Cloud Platform (GCP) infrastructure, with a primary focus on building a robust and scalable data lake in BigQuery. You will be instrumental in ensuring the reliability, security, and performance of our cloud environment, supporting critical healthcare data initiatives. This role requires strong technical expertise in GCP, excellent problem-solving abilities, and a passion for leveraging cloud technologies to drive impactful solutions within the healthcare domain.
Responsibilities:
Cloud Architecture & Design:
- Design and architect scalable, secure, and cost-effective GCP solutions, with a strong emphasis on BigQuery for our data lake.
- Define and implement best practices for GCP infrastructure management, security, networking, and data governance.
- Develop and maintain comprehensive architectural diagrams, documentation, and standards.
- Collaborate with data engineers, data scientists, and application development teams to understand their requirements and translate them into robust cloud solutions.
- Evaluate and recommend new GCP services and technologies to optimize our cloud environment.
- Understand and implement the fundamentals of GCP, including resource hierarchy, projects, organizations, and billing.
GCP Infrastructure Management:
- Manage and maintain our existing GCP infrastructure, ensuring high availability, performance, and security.
- Implement and manage infrastructure-as-code (IaC) using tools like Terraform or Cloud Deployment Manager.
- Monitor and troubleshoot infrastructure issues, proactively identifying and resolving potential problems.
- Implement and manage backup and disaster recovery strategies for our GCP environment.
- Optimize cloud costs and resource utilization, including BigQuery slot management.
Collaboration & Communication:
- Work closely with cross-functional teams, including data engineering, data science, application development, security, and compliance.
- Communicate technical concepts and solutions effectively to both technical and non-technical stakeholders.
- Provide guidance and mentorship to junior team members.
- Participate in the on-call rotation as needed.
- Develop and maintain thorough and reliable documentation of all cloud infrastructure processes, configurations, and security protocols.
Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field.
- Minimum of 5-8 years of experience in designing, implementing, and managing cloud infrastructure, with a strong focus on Google Cloud Platform (GCP).
- Proven experience in architecting and implementing data lakes on GCP, specifically using BigQuery.
- Hands-on experience with ETL/ELT processes and tools, with strong proficiency in Google Cloud Composer (Apache Airflow); a minimal Composer DAG sketch follows this posting.
- Solid understanding of GCP services such as Compute Engine, Cloud Storage, networking (VPC, firewall rules, Cloud DNS), IAM, Cloud Monitoring, and Cloud Logging.
- Experience with infrastructure-as-code (IaC) tools like Terraform or Cloud Deployment Manager.
- Strong understanding of security best practices for cloud environments, including identity and access management, data encryption, and network security.
- Excellent problem-solving, analytical, and troubleshooting skills.
- Strong communication, collaboration, and interpersonal skills.
Bonus Points:
- Experience with Apigee for API management.
- Experience with containerization technologies like Docker and serverless container platforms like Cloud Run.
- Experience with Vertex AI for machine learning workflows on GCP.
- Familiarity with GCP healthcare products and solutions (e.g., Cloud Healthcare API).
- Knowledge of healthcare data standards and regulations (e.g., HIPAA, HL7, FHIR).
- GCP Professional Cloud Architect certification.
- Experience with scripting languages (e.g., Python, Bash).
- Experience with Looker.
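A minimal Cloud Composer (Airflow) DAG sketch illustrating the ETL-to-BigQuery orchestration mentioned above; the project, dataset, and table names are placeholders, not details from the posting:

```python
from datetime import datetime
from airflow import DAG
from airflow.providers.google.cloud.operators.bigquery import BigQueryInsertJobOperator

# Placeholder SQL: load yesterday's raw events into a curated table.
LOAD_SQL = """
    INSERT INTO `example-project.curated.daily_events`
    SELECT event_date, customer_id, COUNT(*) AS event_count
    FROM `example-project.raw.events`
    WHERE event_date = DATE_SUB(CURRENT_DATE(), INTERVAL 1 DAY)
    GROUP BY event_date, customer_id
"""

with DAG(
    dag_id="bq_daily_load",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ style schedule argument
    catchup=False,
) as dag:
    load_daily_events = BigQueryInsertJobOperator(
        task_id="load_daily_events",
        configuration={"query": {"query": LOAD_SQL, "useLegacySql": False}},
        location="US",   # placeholder dataset location
    )
```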
Posted 1 month ago