2.0 - 7.0 years
5 - 15 Lacs
Mumbai, Mumbai (All Areas)
Work from Office
We are seeking a highly skilled Data Engineer with a strong background in Snowflake and Azure Data Factory (ADF) and solid experience in Python and SQL. The ideal candidate will play a critical role in designing and building robust, scalable data pipelines that enable modern cloud-based data platforms, including data warehouses and data lakes.

Key Responsibilities
- Design, develop, and maintain scalable ETL/ELT pipelines using Snowflake, ADF, and Python to support data warehouse and data lake architectures.
- Build and automate data ingestion pipelines from various structured and semi-structured sources (APIs, flat files, cloud storage, databases) into Snowflake-based data lakes and data warehouses (see the sketch below).
- Perform full-cycle data migration from on-premise and cloud databases (e.g., Oracle, SQL Server, Redshift, MySQL) to Snowflake.
- Optimize Snowflake workloads: schema design, clustering, partitioning, materialized views, and query performance tuning.
- Develop and orchestrate data workflows using Azure Data Factory pipelines, triggers, and data flows.
- Implement data quality checks, validation processes, and monitoring mechanisms for production pipelines.
- Collaborate with cross-functional teams including Data Scientists, Analysts, and DevOps to support diverse data needs.
- Ensure data integrity, security, and governance throughout the data lifecycle.
- Maintain comprehensive documentation on pipeline design, schema changes, and architectural decisions.

Required Skills & Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- 2+ years of hands-on experience with Snowflake, including Snowflake SQL, SnowSQL, Snowpipe, Streams, Tasks, and performance optimization.
- 1+ year of experience with Azure Data Factory (ADF): pipeline design, linked services, datasets, triggers, and integration runtime.
- Strong Python skills for scripting, automation, and data manipulation.
- Advanced SQL skills: the ability to write efficient, complex queries, procedures, and analytical expressions.
- Experience designing and implementing data lakes and data warehouses on cloud platforms.
- Familiarity with Azure cloud services, including Azure Data Lake Storage (ADLS), Blob Storage, Azure SQL, and Azure DevOps.
- Experience with orchestration tools such as Airflow, DBT, or Prefect is a plus.
- Understanding of data modeling, data warehousing principles, and ETL/ELT best practices.
- Experience building scalable data architectures for analytics and business intelligence use cases.

Preferred Qualifications (Nice to Have)
- Experience with CI/CD pipelines for data engineering (e.g., Azure DevOps, GitHub Actions).
- Familiarity with Delta Lake, Parquet, or other big data formats.
- Knowledge of data security and governance tools such as Purview or Informatica.
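To make the Snowflake ingestion responsibility above concrete, a minimal sketch might look like the following. It assumes an existing external stage named raw_stage and a target table raw_events, uses the snowflake-connector-python package, and reads credentials from environment variables; all of these names are illustrative assumptions, not requirements of the role.

```python
# Minimal sketch: load staged files into Snowflake and check the result.
# Assumes an existing external stage (raw_stage) and target table (raw_events);
# connection parameters and object names are placeholders.
import os
import snowflake.connector

def load_staged_files() -> int:
    conn = snowflake.connector.connect(
        account=os.environ["SNOWFLAKE_ACCOUNT"],
        user=os.environ["SNOWFLAKE_USER"],
        password=os.environ["SNOWFLAKE_PASSWORD"],
        warehouse="LOAD_WH",
        database="ANALYTICS",
        schema="RAW",
    )
    try:
        cur = conn.cursor()
        # COPY INTO ingests any new files sitting on the stage.
        cur.execute("""
            COPY INTO raw_events
            FROM @raw_stage
            FILE_FORMAT = (TYPE = 'JSON')
            ON_ERROR = 'CONTINUE'
        """)
        # Basic validation: confirm rows actually landed.
        cur.execute("SELECT COUNT(*) FROM raw_events")
        return cur.fetchone()[0]
    finally:
        conn.close()

if __name__ == "__main__":
    print(f"raw_events now holds {load_staged_files()} rows")
```

In a production pipeline, the same COPY INTO statement would typically be wired to Snowpipe or a scheduled Task so new files load automatically rather than on demand.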
Posted 3 weeks ago
10.0 - 12.0 years
25 - 35 Lacs
Faridabad
Work from Office
We’re looking for a seasoned SAP Datasphere specialist to design and implement enterprise-level data models and integration pipelines. The role demands strong ETL craftsmanship using SAP native tools, with foundational knowledge of BW systems leveraged during transitions and migrations.

Roles and Responsibilities

Data Pipeline Development
- Act as an architect for complex ETL workflows leveraging SAP Datasphere's graphical and scripting tools.
- Should have worked with various sources and targets for data integration, such as S/4HANA, ECC, Oracle, and other third-party sources.
- Experience using BW Bridge and standard BW datasources with Datasphere.
- Ensure data replication, federation, and virtualization use cases are optimally addressed.

Modeling & Governance
- Design and maintain business-oriented semantic layers within Datasphere, creating abstracted, reusable data models and views tailored for analytics consumption.
- Apply rigorous data governance, lineage tracking, and quality frameworks.

Performance & Operations
- Design highly optimized, performant data models that hold up under heavy data volumes.
- Continuously track and enhance the performance of data pipelines and models, ensuring efficient processing and robust scalability.
- Manage workspace structures, access controls, and overall system hygiene.

Team Collaboration & Mentorship
- Collaborate with IT, analytics, and business teams to operationalize data requirements.
- Coach junior engineers and drive standardized practices across the team.

Must-Have Qualifications
- Bachelor's degree in Computer Science, Information Systems, or a related field.
- Minimum 8 years in SAP data warehousing, including exposure to BW/BW4HANA.
- At least 2 years of hands-on experience in SAP Datasphere for ETL, modeling, and integration.
- Expertise in SQL and scripting (Python).
- Solid understanding of data governance, lineage, security, and metadata standards.
- Awareness of ongoing and rapid changes in the SAP landscape, such as the introduction of BDC and Databricks to SAP Data and Analytics.

Nice-to-Have
- Certifications in SAP Datasphere, BW/4HANA, or data engineering.
- Knowledge of data virtualization, federation architectures, and hybrid cloud deployments.
- Experience with Agile or DevOps practices and CI/CD pipelines.
Posted 3 weeks ago
5.0 - 8.0 years
3 - 7 Lacs
Mumbai
Work from Office
Role Purpose
The purpose of this role is to interpret data and turn it into information (reports, dashboards, interactive visualizations, etc.) that can offer ways to improve a business, thereby informing business decisions.

Do
1. Manage the technical scope of the project in line with the requirements at all stages
a. Gather information from various sources (data warehouses, databases, data integration and modelling) and interpret patterns and trends
b. Develop record management processes and policies
c. Build and maintain relationships at all levels within the client base and understand their requirements
d. Provide sales data, proposals, data insights, and account reviews to the client base
e. Identify areas to increase efficiency and automation of processes
f. Set up and maintain automated data processes
g. Identify, evaluate, and implement external services and tools to support data validation and cleansing
h. Produce and track key performance indicators

2. Analyze the data sets and provide adequate information
a. Liaise with internal and external clients to fully understand data content
b. Design and carry out surveys and analyze survey data as per the customer requirement
c. Analyze and interpret complex data sets relating to the customer's business and prepare reports for internal and external audiences using business analytics reporting tools
d. Create data dashboards, graphs, and visualizations to showcase business performance and provide sector and competitor benchmarking
e. Mine and analyze large datasets, draw valid inferences, and present them successfully to management using a reporting tool
f. Develop predictive models and share insights with the clients as per their requirements
Posted 3 weeks ago
3.0 - 5.0 years
5 - 9 Lacs
Ahmedabad
Work from Office
Role Purpose
The purpose of this role is to interpret data and turn it into information (reports, dashboards, interactive visualizations, etc.) that can offer ways to improve a business, thereby informing business decisions.

Do
1. Manage the technical scope of the project in line with the requirements at all stages
a. Gather information from various sources (data warehouses, databases, data integration and modelling) and interpret patterns and trends
b. Develop record management processes and policies
c. Build and maintain relationships at all levels within the client base and understand their requirements
d. Provide sales data, proposals, data insights, and account reviews to the client base
e. Identify areas to increase efficiency and automation of processes
f. Set up and maintain automated data processes
g. Identify, evaluate, and implement external services and tools to support data validation and cleansing
h. Produce and track key performance indicators

2. Analyze the data sets and provide adequate information
a. Liaise with internal and external clients to fully understand data content
b. Design and carry out surveys and analyze survey data as per the customer requirement
c. Analyze and interpret complex data sets relating to the customer's business and prepare reports for internal and external audiences using business analytics reporting tools
d. Create data dashboards, graphs, and visualizations to showcase business performance and provide sector and competitor benchmarking
e. Mine and analyze large datasets, draw valid inferences, and present them successfully to management using a reporting tool
f. Develop predictive models and share insights with the clients as per their requirements
Posted 3 weeks ago
4.0 - 9.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Role Purpose
The purpose of this role is to design, test, and maintain software programs for operating systems or applications to be deployed at a client end, and to ensure they meet 100% of quality assurance parameters.

Must-have technical skills:
- 4+ years on Snowflake: advanced SQL expertise
- 4+ years of data warehouse experience: hands-on knowledge of methods to identify, collect, manipulate, transform, normalize, clean, and validate data; star schema, normalization/denormalization, dimensions, aggregations, etc.
- 4+ years working in reporting and analytics environments: development, data profiling, metric development, CI/CD, production deployment, troubleshooting, query tuning, etc.
- 3+ years on Python: advanced Python expertise
- 3+ years on any cloud platform, AWS preferred: hands-on experience with AWS Lambda, S3, SNS/SQS, and EC2 is the bare minimum (see the sketch below)
- 3+ years on any ETL/ELT tool: Informatica, Pentaho, Fivetran, DBT, etc.
- 3+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.)

Must-have soft skills:
- Clear communication: written and verbal, especially around time off, delays in delivery, etc.
- Team player: works in the team and works with the team
- Enterprise experience: understands and follows enterprise guidelines for CI/CD, security, change management, RCA, on-call rotation, etc.

Nice to have:
- Technical certifications from AWS, Microsoft, Azure, GCP, or any other recognized software vendor
- 4+ years on any ETL/ELT tool: Informatica, Pentaho, Fivetran, DBT, etc.
- 4+ years developing functional metrics in a specific business vertical (finance, retail, telecom, etc.)
- 4+ years of team lead experience
- 3+ years in a large-scale support organization supporting thousands of users
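The AWS bullet above (Lambda, S3, SNS/SQS) can be illustrated with a small, hypothetical sketch: an S3-triggered Lambda that forwards new-object notifications to an SNS topic so a downstream loader can react to them. The bucket, topic ARN, and environment variable names are assumptions made for the example.

```python
# Minimal sketch of an S3-triggered AWS Lambda that forwards new-object
# notifications to SNS so a downstream loader can pick them up.
# Topic ARN, environment variable, and event contents are illustrative.
import json
import os
import boto3

sns = boto3.client("sns")
TOPIC_ARN = os.environ.get("LOAD_TOPIC_ARN", "arn:aws:sns:us-east-1:123456789012:load-events")

def handler(event, context):
    published = 0
    # Standard S3 put-event structure: one record per new object.
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        sns.publish(
            TopicArn=TOPIC_ARN,
            Subject="new-raw-file",
            Message=json.dumps({"bucket": bucket, "key": key}),
        )
        published += 1
    return {"published": published}
```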
Posted 3 weeks ago
5.0 - 10.0 years
10 - 16 Lacs
Hyderabad
Remote
Job description
As an ETL Developer for the Data and Analytics team at Guidewire, you will participate and collaborate with our customers and SI partners who are adopting the Guidewire Data Platform as the centerpiece of their data foundation. You will facilitate, and be an active developer when necessary, to operationalize the agreed-upon ETL architecture goals of our customers, adhering to Guidewire best practices and standards. You will work with our customers, partners, and other Guidewire team members to deliver successful data transformation initiatives. You will utilize best practices for design, development, and delivery of customer projects. You will share knowledge with the wider Guidewire Data and Analytics team to enable predictable project outcomes and emerge as a leader in our thriving data practice. One of our principles is to have fun while we deliver, so this role will need to keep the delivery process fun and engaging for the team in collaboration with the broader organization.

Given the dynamic nature of the work in the Data and Analytics team, we are looking for decisive, highly skilled technical problem solvers who are self-motivated and take proactive action for the benefit of our customers, ensuring they succeed in their journey to the Guidewire Cloud Platform. You will collaborate closely with teams located around the world and adhere to our core values: Integrity, Collegiality, and Rationality.

Key Responsibilities:
- Build out technical processes from specifications provided in High Level Design and data specification documents.
- Integrate test and validation processes and methods into every step of the development process.
- Work with Lead Architects and provide input into defining user stories, scope, acceptance criteria, and estimates.
- Apply a systematic problem-solving approach, coupled with a sense of ownership and drive.
- Work independently in a fast-paced Agile environment.
- Actively contribute to the knowledge base from every project you are assigned to.

Qualifications:
- Bachelor's or Master's degree in Computer Science, or an equivalent level of demonstrable professional competency, and 3-5+ years in a technical capacity building out complex ETL data integration frameworks.
- 3+ years of experience with data processing and ETL (Extract, Transform, Load) and ELT (Extract, Load, Transform) concepts.
- Experience with ADF or AWS Glue, Spark/Scala, GDP, CDC, and ETL data integration.
- Experience working with relational and/or NoSQL databases.
- Experience working with different cloud platforms (such as AWS, Azure, Snowflake, Google Cloud, etc.).
- Ability to work independently and within a team.

Nice to have:
- Insurance industry experience
- Experience with ADF or AWS Glue
- Experience with Azure Data Factory, Spark/Scala
- Experience with the Guidewire Data Platform
Posted 3 weeks ago
8.0 - 12.0 years
15 - 30 Lacs
Gurugram
Work from Office
Role description
- Lead and mentor a team of data engineers to design, develop, and maintain high-performance data pipelines and platforms.
- Architect scalable ETL/ELT processes, streaming pipelines, and data lake/warehouse solutions (e.g., Redshift, Snowflake, BigQuery).
- Own the roadmap and technical vision for the data engineering function, ensuring best practices in data modeling, governance, quality, and security.
- Drive adoption of modern data stack tools (e.g., Airflow, Kafka, Spark) and foster a culture of continuous improvement (a streaming sketch follows this listing).
- Ensure the platform is reliable, scalable, and cost-effective across batch and real-time use cases.
- Champion data observability, lineage, and privacy initiatives to ensure trust in data across the org.

Skills
- Bachelor's or Master's degree in Computer Science, Engineering, or a related technical field.
- 8+ years of hands-on experience in data engineering, with at least 2+ years in a leadership or managerial role.
- Proven experience with distributed data processing frameworks such as Apache Spark, Flink, or Kafka.
- Strong SQL skills and experience in data modeling, data warehousing, and schema design.
- Proficiency with cloud platforms (AWS/GCP/Azure) and their native data services (e.g., AWS Glue, Redshift, EMR, BigQuery).
- Solid grasp of data architecture, system design, and performance optimization at scale.
- Experience working in an agile development environment and managing sprint-based delivery cycles.
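As an illustration of the streaming-pipeline and modern data stack items above, here is a minimal PySpark Structured Streaming sketch that reads JSON events from Kafka and appends them to a Delta table. The broker address, topic name, schema, and paths are assumptions, and the Delta sink presumes an environment where Delta Lake is available (for example, Databricks).

```python
# Minimal PySpark Structured Streaming sketch: read JSON events from Kafka
# and append them to a Delta table. Topic, servers, schema, and paths are
# illustrative placeholders; Delta Lake is assumed to be installed.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("events-stream").getOrCreate()

event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("user_id", StringType()),
    StructField("event_time", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")
    .option("subscribe", "events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers the payload as bytes; cast and parse into typed columns.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
)

query = (
    events.writeStream.format("delta")
    .option("checkpointLocation", "/tmp/checkpoints/events")
    .outputMode("append")
    .start("/data/bronze/events")
)
query.awaitTermination()
```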
Posted 3 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Mumbai
Work from Office
Role Purpose
The purpose of the role is to support process delivery by ensuring the daily performance of the Production Specialists, resolving technical escalations, and developing technical capability within the Production Specialists.

Do
- Oversee and support the process by reviewing daily transactions on performance parameters
- Review the performance dashboard and the scores for the team
- Support the team in improving performance parameters by providing technical support and process guidance
- Record, track, and document all queries received, problem-solving steps taken, and total successful and unsuccessful resolutions
- Ensure standard processes and procedures are followed to resolve all client queries
- Resolve client queries as per the SLAs defined in the contract
- Develop an understanding of the process/product for the team members to facilitate better client interaction and troubleshooting
- Document and analyze call logs to spot the most frequently occurring trends and prevent future problems
- Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution
- Ensure all product information and disclosures are given to clients before and after the call/email requests
- Avoid legal challenges by monitoring compliance with the service agreement
- Handle technical escalations through effective diagnosis and troubleshooting of client queries
- Manage and resolve technical roadblocks/escalations as per SLA and quality requirements
- If unable to resolve the issues, escalate them to TA & SES in a timely manner
- Provide product support and resolution to clients by performing question diagnosis while guiding users through step-by-step solutions
- Troubleshoot all client queries in a user-friendly, courteous, and professional manner
- Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business
- Organize ideas and effectively communicate oral messages appropriate to listeners and situations
- Follow up and make scheduled callbacks to customers to record feedback and ensure compliance with contract SLAs
- Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client
- Mentor and guide Production Specialists on improving technical knowledge
- Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists
- Develop and conduct trainings (triages) within products for Production Specialists as per target
- Inform the client about the triages being conducted
- Undertake product trainings to stay current with product features, changes, and updates
- Enroll in product-specific and any other trainings per client requirements/recommendations
- Identify and document the most common problems and recommend appropriate resolutions to the team
- Update job knowledge by participating in self-learning opportunities and maintaining personal networks

Mandatory Skills: Informatica MDM
Experience: 5-8 Years
Posted 3 weeks ago
12.0 - 17.0 years
14 - 19 Lacs
Hyderabad
Work from Office
Responsibilities
Data engineering lead role for D&Ai data modernization (MDIP).

Ideally, the candidate must be flexible to work an alternative schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon the coverage requirements of the job. The candidate can work with their immediate supervisor to change the work schedule on a rotational basis depending on the product and project requirements.

Responsibilities
- Manage a team of data engineers and data analysts by delegating project responsibilities and managing their flow of work, as well as empowering them to realize their full potential.
- Design, structure, and store data into unified data models and link them together to make the data reusable for downstream products.
- Manage and scale data pipelines from internal and external data sources to support new product launches and drive data quality across data products.
- Create reusable accelerators and solutions to migrate data from legacy data warehouse platforms such as Teradata to Azure Databricks and Azure SQL (a migration sketch follows this listing).
- Enable and accelerate standards-based development, prioritizing reuse of code and adopting test-driven development, unit testing, and test automation with end-to-end observability of data.
- Build and own the automation and monitoring frameworks that capture metrics and operational KPIs for data pipeline quality, performance, and cost.
- Collaborate with internal clients (product teams, sector leads, data science teams) and external partners (SI partners/data providers) to drive solutioning and clarify solution requirements.
- Evolve the architectural capabilities and maturity of the data platform by engaging with enterprise architects to build and support the right domain architecture for each application, following well-architected design standards.
- Define and manage SLAs for data products and processes running in production.
- Create documentation for learnings and knowledge transfer to internal associates.

Qualifications
- 12+ years of engineering and data management experience.
- 12+ years of overall technology experience, including at least 5+ years of hands-on software development, data engineering, and systems architecture.
- 8+ years of experience with Data Lakehouse, Data Warehousing, and Data Analytics tools.
- 6+ years of experience in SQL optimization and performance tuning on MS SQL Server, Azure SQL, or any other popular RDBMS.
- 6+ years of experience in Python/PySpark/Scala programming on big data platforms like Databricks.
- 4+ years of cloud data engineering experience in Azure or AWS. Fluent with Azure cloud services; Azure Data Engineering certification is a plus.
- Experience with integration of multi-cloud services with on-premises technologies.
- Experience with data modelling, data warehousing, and building high-volume ETL/ELT pipelines.
- Experience with data profiling and data quality tools like Great Expectations.
- Experience building/operating highly available, distributed systems for data extraction, ingestion, and processing of large data sets.
- Experience with at least one business intelligence tool such as Power BI or Tableau.
- Experience with running and scaling applications on cloud infrastructure and containerized services like Kubernetes.
- Experience with version control systems like ADO or GitHub and CI/CD tools for DevOps automation and deployments.
- Experience with Azure Data Factory, Azure Databricks, and Azure Machine Learning tools.
- Experience with statistical/ML techniques is a plus.
- Experience building solutions in the retail or supply chain space is a plus.
- Understanding of metadata management, data lineage, and data glossaries is a plus.
- BA/BS in Computer Science, Math, Physics, or other technical fields.
- The candidate must be flexible to work an alternative work schedule: either a traditional work week from Monday to Friday, or Tuesday to Saturday, or Sunday to Thursday, depending upon the product and project coverage requirements of the job.
- Candidates are expected to be in the office at the assigned location at least 3 days a week, and the days at work need to be coordinated with the immediate supervisor.

Skills, Abilities, Knowledge
- Excellent communication skills, both verbal and written, along with the ability to influence and demonstrate confidence in communications with senior-level management.
- Proven track record of leading and mentoring data teams.
- Strong change manager: comfortable with change, especially that which arises through company growth.
- Ability to understand and translate business requirements into data and technical requirements.
- High degree of organization and ability to manage multiple, competing projects and priorities simultaneously.
- Positive and flexible attitude to enable adjusting to different needs in an ever-changing environment.
- Strong leadership, organizational, and interpersonal skills; comfortable managing trade-offs.
- Foster a team culture of accountability, communication, and self-management.
- Proactively drives impact and engagement while bringing others along.
- Consistently attain/exceed individual and team goals.
- Ability to lead others without direct authority in a matrixed environment.
- Comfortable working in a hybrid environment with teams consisting of contractors as well as FTEs spread across multiple PepsiCo locations.
- Domain knowledge in the CPG industry with a Supply Chain/GTM background is preferred.
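The Teradata-to-Databricks migration bullet above can be sketched, under stated assumptions, as a one-off JDBC read followed by a Delta write. The JDBC URL, availability of the Teradata driver on the cluster, credentials, and table names are all illustrative assumptions.

```python
# Minimal PySpark sketch of a one-off table migration from a legacy warehouse
# (read over JDBC) into a Delta table on Databricks. Connection details,
# driver availability, and table names are illustrative placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("legacy-migration").getOrCreate()

source = (
    spark.read.format("jdbc")
    .option("url", "jdbc:teradata://legacy-host/DATABASE=sales")
    .option("driver", "com.teradata.jdbc.TeraDriver")  # driver jar must be on the cluster
    .option("dbtable", "sales.orders")
    .option("user", "etl_user")
    .option("password", "***")
    .load()
)

# Light conformance step before landing the data in the lakehouse.
conformed = source.dropDuplicates(["order_id"]).withColumnRenamed("ord_dt", "order_date")

(
    conformed.write.format("delta")
    .mode("overwrite")
    .saveAsTable("bronze.orders")
)

print(f"migrated {conformed.count()} rows into bronze.orders")
```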
Posted 3 weeks ago
8.0 - 13.0 years
10 - 15 Lacs
Pune
Work from Office
The Data Science Engineering team is looking for a Lead Data Analytics Engineer to join our team! You should be able to gather requirements, understand complex product, business, and engineering challenges, compose and prioritize research projects, and then build them in partnership with cloud engineers and architects, using the work of our data engineering team. You have deep SQL experience, an understanding of modern data stacks and technology, broad experience with data and all things data-related, and experience guiding a team through technical and design challenges. You will report to the Sr. Manager, Cloud Software Engineering, and be a part of the larger Data Engineering team.

What Your Responsibilities Will Be
- Avalara is looking for a data analytics engineer who can solve and scale real-world big data challenges.
- Bring end-to-end analytics experience and a complex data story, with data models and reliable, applicable metrics.
- Build and deploy data science models using complex SQL, Python, DBT data modelling, and reusable visualization components (Power BI).
- Expert-level experience in Power BI, SQL, and Snowflake.
- Solve needs at large scale by applying your software engineering and complex data skills.
- Lead and help develop a roadmap for the area and the team.
- Analyze fault tolerance and high availability issues, performance, and scale challenges, and solve them.
- Lead programs and collaborate with engineers, product managers, and technical program managers across teams.
- Understand the trade-offs between consistency, durability, and costs to build solutions that can meet the demands of growing services.
- Ensure the operational readiness of the services and meet the commitments to our customers regarding availability and performance.
- Manage end-to-end project plans and ensure on-time delivery. Communicate the status and big picture to the project team and management.
- Work with business and engineering teams to identify scope, constraints, dependencies, and risks. Identify risks and opportunities across the business and guide solutions.

What You'll Need to be Successful
- Bachelor's degree in Engineering, Computer Science, or a related field.
- 8+ years of enterprise-class experience with large-scale cloud solutions in data science/analytics projects and engineering projects.
- Expert-level experience in Power BI, SQL, and Snowflake.
- Experience with data visualization, Python, data modeling, and data storytelling.
- Experience architecting complex data marts applying DBT.
- Architect and build data solutions that use data quality and anomaly detection best practices (a small sketch follows this listing).
- Experience building production analytics using the Snowflake data platform.
- Experience with AWS and Snowflake tools and services.

Good to have:
- Snowflake certification is a plus.
- Relevant certifications in data warehousing or cloud platforms.
- Experience architecting complex data marts applying DBT and Airflow.
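The data quality and anomaly detection item above can be illustrated with a small, self-contained sketch that flags an unusual daily row count against recent history. In practice the counts would come from Snowflake query history or a metrics table; here they are hard-coded, and the z-score threshold is an assumption.

```python
# Minimal sketch of a row-count anomaly check of the kind a daily data quality
# job might run. Counts and thresholds are illustrative placeholders.
from statistics import mean, stdev

def is_anomalous(history: list[int], today: int, z_threshold: float = 3.0) -> bool:
    """Flag today's load if it deviates more than z_threshold sigmas from history."""
    if len(history) < 7:          # not enough history to judge
        return False
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:                # perfectly stable history: any change is suspect
        return today != mu
    return abs(today - mu) / sigma > z_threshold

if __name__ == "__main__":
    daily_row_counts = [10_120, 10_340, 9_980, 10_210, 10_450, 10_060, 10_300]
    print(is_anomalous(daily_row_counts, today=10_280))  # False: a normal day
    print(is_anomalous(daily_row_counts, today=2_150))   # True: likely a broken load
```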
Posted 3 weeks ago
12.0 - 17.0 years
14 - 19 Lacs
Mumbai, Maharashtra
Work from Office
About the Role: Grade Level (for internal use): 12

The Team: You will be an expert contributor and part of the Rating Organization's Data Services Product Engineering Team. This team, which has broad and expert knowledge of the Ratings organization's critical data domains, technology stacks, and architectural patterns, fosters knowledge sharing and collaboration that results in a unified strategy. All Data Services team members provide leadership, innovation, timely delivery, and the ability to articulate business value. Be a part of a unique opportunity to build and evolve S&P Ratings' next-gen analytics platform.

Responsibilities:
- Architect, design, and implement innovative software solutions to enhance S&P Ratings' cloud-based analytics platform.
- Mentor a team of engineers (as required), fostering a culture of trust, continuous growth, and collaborative problem-solving.
- Collaborate with business partners to understand requirements, ensuring technical solutions align with business goals.
- Manage and improve existing software solutions, ensuring high performance and scalability.
- Participate actively in all Agile scrum ceremonies, contributing to the continuous improvement of team processes.
- Produce comprehensive technical design documents and conduct technical walkthroughs.

Experience & Qualifications:
- Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field is required.
- Proficient in software development lifecycle (SDLC) methodologies, including Agile and Test-Driven Development.
- A total of 12+ years of experience, with 8+ years focused on designing enterprise products, modern data architectures, and analytics platforms.
- 6+ years of hands-on experience in application architecture and design, with proven knowledge of software and enterprise integration design patterns, as well as full-stack development across modern distributed front-end and back-end technology stacks.
- 5+ years of full-stack development experience using modern web development technologies, including proficiency in programming languages and UI frameworks, as well as experience with relational and NoSQL databases.
- Experience in designing transactional systems, data warehouses, data lakes, and data integrations within a big data ecosystem using cloud technologies.
- Thorough understanding of distributed computing principles.
- A passionate, intelligent, and articulate developer with a quality-first mindset and a strong background in developing products for a global audience at scale.
- Excellent analytical thinking, interpersonal skills, and both oral and written communication skills, with a strong ability to influence both IT and business partners.
- Superior knowledge of system architecture, object-oriented design, and design patterns.
- Strong work ethic, self-starter mentality, and results-oriented approach.
- Excellent communication skills are essential, with strong verbal and writing proficiencies.

Additional Preferred Qualifications:
- Experience working with cloud service providers.
- Familiarity with Agile frameworks, including scaled Agile methodologies.
- Advanced degree in Computer Science, Information Systems, or a related field.
- Hands-on experience in application architecture and design, with proven software and enterprise integration design principles.
- Ability to prioritize and manage work to meet critical project timelines in a fast-paced environment.
- Strong analytical and communication skills, with the ability to train and mentor others.
Posted 3 weeks ago
6.0 - 8.0 years
1 - 4 Lacs
Chennai
Hybrid
Job Title: Snowflake Developer
Experience: 6-8 Years
Location: Chennai - Hybrid

Job Description:
- 3+ years of experience as a Snowflake Developer or Data Engineer.
- Strong knowledge of SQL, SnowSQL, and Snowflake schema design.
- Experience with ETL tools and data pipeline automation.
- Basic understanding of US healthcare data (claims, eligibility, providers, payers).
- Experience working with large-scale datasets and cloud platforms (AWS, Azure, GCP).
- Familiarity with data governance, security, and compliance (HIPAA, HITECH).
Posted 3 weeks ago
4.0 - 8.0 years
6 - 10 Lacs
Hyderabad, Ahmedabad
Work from Office
The Team: The usage reporting team gathers raw usage data from various products and produces unified datasets across departmental lines within Market Intelligence. We deliver essential intelligence for both public and internal reporting purposes.

The Impact: As the Lead Developer for the usage reporting team, you will play a key role in delivering essential insights for both public and private users of the S&P Global Market Intelligence platforms. Our data provides the foundation for strategy and insights that our team members depend on to deliver critical intelligence for our clients around the world.

What's in it for you:
- Work with a variety of subject matter experts to develop and improve data offerings.
- Gain exposure to a wide range of datasets and stakeholders while tackling daily challenges.
- Oversee the complete software development lifecycle (SDLC), from initial architecture and design to development and support for data pipelines.

Responsibilities:
- Produce technical design documents and conduct technical walkthroughs.
- Build and maintain data pipelines using a variety of programming languages and data processing techniques.
- Be part of an agile team that designs, develops, and maintains enterprise data systems and related software applications.
- Participate in design sessions for new product features, data models, and capabilities.
- Collaborate with key stakeholders to develop system architectures, API specifications, and implementation requirements.

What We're Looking For:
- 4-8 years of experience as a Senior Developer with strong experience in programming languages and data processing techniques.
- 4-10 years of experience with public cloud platforms.
- Experience with data processing frameworks and orchestration tools.
- 4-10 years of data warehousing experience.
- A strong self-starter with independent motivation as a software engineer.
- Strong leadership skills with a proven ability to collaborate effectively with engineering leadership and key stakeholders.

About S&P Global Market Intelligence
At S&P Global Market Intelligence, a division of S&P Global, we understand the importance of accurate, deep, and insightful information. Our team of experts delivers unrivaled insights and leading data and technology solutions, partnering with customers to expand their perspective, operate with confidence, and make decisions with conviction.

Our world is in transition and getting more complex by the day. We push past expected observations and seek out new levels of understanding so that we can help companies, governments, and individuals make an impact on tomorrow. At S&P Global we transform data into Essential Intelligence, pinpointing risks and opening possibilities. We Accelerate Progress.

At S&P Global, we focus on Powering Global Markets. Throughout our history, the world's leading organizations have relied on us for the Essential Intelligence they need to make confident decisions about the road ahead. We start with a foundation of integrity in all we do, bring a spirit of discovery to our work, and collaborate in close partnership with each other and our customers to achieve shared goals.

We take care of you, so you can take care of business. We care about our people. That's why we provide everything you and your career need to thrive at S&P Global.
- Health & Wellness: Health care coverage designed for the mind and body.
- Family Friendly Perks: It's not just about you. S&P Global has perks for your partners and little ones, too, with some best-in-class benefits for families.
Posted 3 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Bengaluru
Hybrid
Spend Management Technology (SMT) is seeking a Backend/Database Developer who will play an integral role in designing, implementing, and supporting data integration with new systems, the data warehouse, and data extraction solutions across SMT functional areas.

Requirements:
- A bachelor's degree in Computer Science or a related field.
- 5-7 years of experience working as a hands-on developer in Sybase, DB2, and ETL technologies.
- Worked extensively on data integration, designing and developing reusable interfaces.
- Advanced experience in Python, DB2, Sybase, shell scripting, Unix, Perl scripting, DB platforms, and database design and modeling.
- Expert-level understanding of data warehouse and core database concepts and relational database design.
- Experience in writing stored procedures, optimization, and performance tuning.
- Strong technology acumen and a deep strategic mindset.
- Proven track record of delivering results.
- Proven analytical skills and experience making decisions based on hard and soft data.
- A desire and openness to learning and continuous improvement, both of yourself and your team members.
- Hands-on experience in development of APIs is a plus.
- Good to have: experience with Business Intelligence tools, Source-to-Pay applications such as SAP Ariba, and Accounts Payable systems.
- Familiarity with Postgres and Python is a plus.

Skills:
- Sybase
- DB2
- ETL technologies
- Python
- Unix, shell scripting, Perl
- BI tools / SAP Ariba
- Postgres
Posted 3 weeks ago
5.0 - 10.0 years
22 - 25 Lacs
Hyderabad
Work from Office
Overview
Seeking an Associate Manager, Data Operations, to support our growing data organization. In this role, you will assist in maintaining data pipelines and the corresponding platforms (on-prem and cloud) while working closely with global teams on DataOps initiatives.
- Support the day-to-day operations of data pipelines, ensuring data governance, reliability, and performance optimization on Microsoft Azure. Hands-on experience with Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and real-time streaming architectures is preferred.
- Assist in ensuring the availability, scalability, automation, and governance of enterprise data pipelines supporting analytics, AI/ML, and business intelligence.
- Contribute to DataOps programs, aligning with business objectives, data governance standards, and enterprise data strategy.
- Help implement real-time data observability, monitoring, and automation frameworks to improve data reliability, quality, and operational efficiency (a monitoring sketch follows this listing).
- Support the development of governance models and execution roadmaps to enhance efficiency across Azure, AWS, GCP, and on-prem environments.
- Work on CI/CD integration, data pipeline automation, and self-healing capabilities to improve enterprise-wide DataOps processes.
- Collaborate with cross-functional teams to support and maintain next-generation Data & Analytics platforms while promoting an agile and high-performing DataOps culture.
- Assist in the adoption of Data & Analytics technology transformations, ensuring automation for proactive issue identification and resolution.
- Partner with cross-functional teams to support process improvements, best practices, and operational efficiencies within DataOps.

Responsibilities
- Assist in the implementation and optimization of enterprise-scale data pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Databricks, and Azure Stream Analytics.
- Support data ingestion, transformation, orchestration, and storage workflows, ensuring data reliability, integrity, and availability.
- Help ensure seamless batch, real-time, and streaming data processing, focusing on high availability and fault tolerance.
- Contribute to DataOps automation efforts, including CI/CD for data pipelines, automated testing, and version control using Azure DevOps and Terraform.
- Collaborate with Data Engineering, Analytics, AI/ML, CloudOps, and Business Intelligence teams to support data-driven decision-making.
- Assist in aligning DataOps practices with regulatory and security requirements by working with IT, data stewards, and compliance teams.
- Support data operations and sustainment activities, including testing and monitoring processes for global products and projects.
- Participate in data capture, storage, integration, governance, and analytics efforts, working alongside cross-functional teams.
- Assist in managing day-to-day DataOps activities, ensuring adherence to service-level agreements (SLAs) and business requirements.
- Engage with SMEs and business stakeholders to ensure data platform capabilities align with business needs.
- Contribute to Agile work intake and execution processes, helping to maintain efficiency in data platform teams.
- Help troubleshoot and resolve issues related to cloud infrastructure and data services in collaboration with technical teams.
- Support the development and automation of operational policies and procedures, improving efficiency and resilience.
- Assist in incident response and root cause analysis, contributing to self-healing mechanisms and mitigation strategies.
- Foster a customer-centric approach, advocating for operational excellence and continuous improvement in service delivery.
- Help build a collaborative, high-performing team culture, promoting automation and efficiency within DataOps.
- Adapt to shifting priorities and support cross-functional teams in maintaining productivity and achieving business goals.
- Utilize technical expertise in cloud and data operations to support service reliability and scalability.

Qualifications
- 5+ years of technology work experience in a large-scale global organization; CPG industry experience preferred.
- 5+ years of experience in Data & Analytics roles, with hands-on expertise in data operations and governance.
- 2+ years of experience working within a cross-functional IT organization, collaborating with multiple teams.
- Experience in a lead or senior support role, with a focus on DataOps execution and delivery.
- Strong communication skills, with the ability to collaborate with stakeholders and articulate technical concepts to non-technical audiences.
- Analytical and problem-solving abilities, with a focus on prioritizing customer needs and operational improvements.
- Customer-focused mindset, ensuring high-quality service delivery and operational efficiency.
- Growth mindset, with a willingness to learn and adapt to new technologies and methodologies in a fast-paced environment.
- Experience supporting data operations in a Microsoft Azure environment, including data pipeline automation.
- Familiarity with Site Reliability Engineering (SRE) principles, such as monitoring, automated issue remediation, and scalability improvements.
- Understanding of operational excellence in complex, high-availability data environments.
- Ability to collaborate across teams, building strong relationships with business and IT stakeholders.
- Basic understanding of data management concepts, including master data management, data governance, and analytics.
- Knowledge of data acquisition, data catalogs, data standards, and data management tools.
- Strong execution and organizational skills, with the ability to follow through on operational plans and drive measurable results.
- Adaptability in a dynamic, fast-paced environment, with the ability to shift priorities while maintaining productivity.
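One way to picture the observability and monitoring item above is a framework-agnostic wrapper that times each pipeline step, emits a structured record, and re-raises failures so the orchestrator can alert or retry. This is only a sketch: the step name is hypothetical, and logging locally instead of shipping the record to Azure Monitor or a similar backend is an assumption made for brevity.

```python
# Minimal, framework-agnostic sketch of pipeline-step observability: time each
# step, log a structured record, and re-raise on failure so the orchestrator
# can trigger retries or alerts. In practice the record would be shipped to a
# monitoring backend; here it is only logged.
import functools
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("dataops")

def observed(step_name: str):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            started = time.time()
            status = "success"
            try:
                return func(*args, **kwargs)
            except Exception:
                status = "failed"
                raise  # let the orchestrator handle retries/alerting
            finally:
                log.info(json.dumps({
                    "step": step_name,
                    "status": status,
                    "duration_s": round(time.time() - started, 3),
                }))
        return wrapper
    return decorator

@observed("ingest_orders")
def ingest_orders():
    # placeholder for the real ingestion logic
    time.sleep(0.1)

if __name__ == "__main__":
    ingest_orders()
```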
Posted 3 weeks ago
6.0 - 10.0 years
13 - 18 Lacs
Mumbai
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

We are looking for a skilled Data Engineer to design, build, and maintain scalable, secure, and high-performance data solutions. This role spans the full data engineering lifecycle, from research and architecture to deployment and support, within cloud-native environments, with a strong focus on AWS and Kubernetes (EKS).

Primary Responsibilities:
- Data Engineering Lifecycle: Lead research, proof of concept, architecture, development, testing, deployment, and ongoing maintenance of data solutions
- Data Solutions: Design and implement modular, flexible, secure, and reliable data systems that scale with business needs
- Instrumentation and Monitoring: Integrate pipeline observability to detect and resolve issues proactively (a metrics sketch follows this listing)
- Troubleshooting and Optimization: Develop tools and processes to debug, optimize, and maintain production systems
- Tech Debt Reduction: Identify and address legacy inefficiencies to improve performance and maintainability
- Debugging and Troubleshooting: Quickly diagnose and resolve unknown issues across complex systems
- Documentation and Governance: Maintain clear documentation of data models, transformations, and pipelines to ensure security and governance compliance
- Cloud Expertise: Leverage advanced skills in AWS and EKS to build, deploy, and scale cloud-native data platforms
- Cross-Functional Support: Collaborate with analytics, application development, and business teams to enable data-driven solutions
- Team Leadership: Lead and mentor engineering teams to ensure operational efficiency and innovation
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Bachelor's degree in Computer Science or a related field
- 5+ years of experience in data engineering or related roles
- Proven experience designing and deploying scalable, secure, high-quality data solutions
- Solid expertise across the full data engineering lifecycle (research to maintenance)
- Advanced AWS and EKS knowledge
- Proficiency in CI/CD, IaC, and addressing tech debt
- Skilled in monitoring and instrumentation of data pipelines
- Advanced troubleshooting and performance optimization abilities
- Ownership mindset with the ability to manage multiple components
- Effective cross-functional collaborator (DS, SMEs, and external teams)
- Exceptional debugging and problem-solving skills
- Solid individual contributor with a team-first approach

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone, of every race, gender, sexuality, age, location, and income, deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health, which are disproportionately experienced by people of color, historically marginalized groups, and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes, an enterprise priority reflected in our mission.
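The instrumentation and monitoring responsibility above could, for example, be approached by publishing custom CloudWatch metrics after each load so dashboards and alarms can track volume and duration. The sketch below assumes AWS credentials are available in the environment and uses boto3; the namespace, metric names, and dimension values are illustrative assumptions.

```python
# Minimal sketch of pipeline instrumentation on AWS: publish custom CloudWatch
# metrics after each load so dashboards and alarms can track freshness and
# volume. Namespace, dimensions, and values are placeholders.
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def report_load_metrics(pipeline: str, rows_loaded: int, duration_s: float) -> None:
    cloudwatch.put_metric_data(
        Namespace="DataPlatform/Pipelines",
        MetricData=[
            {
                "MetricName": "RowsLoaded",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": rows_loaded,
                "Unit": "Count",
            },
            {
                "MetricName": "LoadDurationSeconds",
                "Dimensions": [{"Name": "Pipeline", "Value": pipeline}],
                "Value": duration_s,
                "Unit": "Seconds",
            },
        ],
    )

if __name__ == "__main__":
    report_load_metrics("claims_ingest", rows_loaded=125_000, duration_s=42.7)
```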
Posted 3 weeks ago
3.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
We are looking for a strategic thinker with the ability to grasp new technologies and to innovate, develop, and nurture new solutions: a self-starter who can work in a diverse and fast-paced environment to support, maintain, and advance the capabilities of the unified data platform. This is a global role that requires partnering with the broader JLLT team at the country, regional, and global level by utilizing in-depth knowledge of cloud infrastructure technologies and platform engineering experience.

Responsibilities
- Develop ETL/ELT pipelines using Synapse pipelines and data flows
- Integrate Synapse with other Azure data services (Data Lake Storage, Data Factory, etc.)
- Build and maintain data warehousing solutions using Synapse
- Design and contribute to information infrastructure and data management processes
- Develop data ingestion systems that cleanse and normalize diverse datasets (a cleansing sketch follows this listing)
- Build data pipelines from various internal and external sources
- Create structure for unstructured data
- Develop solutions using both relational and non-relational databases
- Create proof-of-concept implementations to validate solution proposals

Sounds like you? To apply you need to be:

Experience & Education
- Bachelor's degree in Information Science, Computer Science, Mathematics, Statistics, or a quantitative discipline in science, business, or social science.
- Worked with a Cloud delivery team to provide a technical solutions and services roadmap for the customer.
- Knowledge of creating IaaS and PaaS cloud solutions on the Azure platform that meet customer needs for scalability, reliability, and performance.

Technical Skills & Competencies
- Developing ETL/ELT pipelines using Synapse pipelines and data flows
- Integrating Synapse with other Azure data services (Data Lake Storage, Data Factory, etc.)
- Building and maintaining data warehousing solutions using Synapse
- Designing and contributing to information infrastructure and data management processes
- Developing data ingestion systems that cleanse and normalize diverse datasets
- Building data pipelines from various internal and external sources
- Creating structure for unstructured data
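The cleanse-and-normalize responsibility above can be illustrated with a small pandas sketch that standardizes column names, types, and categorical values before the data is written onward to the lake. The column names and cleaning rules are assumptions chosen for the example.

```python
# Minimal pandas sketch of the cleanse-and-normalize step in an ingestion
# pipeline: standardize column names, types, and categorical values before
# the data lands in the lake. Column names and rules are placeholders.
import pandas as pd

def cleanse(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    # Normalize column names to snake_case.
    out.columns = [c.strip().lower().replace(" ", "_") for c in out.columns]
    # Trim whitespace and unify casing in free-text identifiers.
    out["country"] = out["country"].str.strip().str.upper()
    # Coerce dates and amounts; invalid values become NaT/NaN instead of failing the load.
    out["invoice_date"] = pd.to_datetime(out["invoice_date"], errors="coerce")
    out["amount"] = pd.to_numeric(out["amount"], errors="coerce")
    # Drop exact duplicates introduced by re-delivered source files.
    return out.drop_duplicates()

if __name__ == "__main__":
    raw = pd.DataFrame({
        "Country": [" in", "IN", "us "],
        "Invoice Date": ["2024-01-05", "2024-01-05", "not-a-date"],
        "Amount": ["100.5", "100.5", "oops"],
    })
    print(cleanse(raw))
```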
Posted 3 weeks ago
10.0 - 16.0 years
20 - 32 Lacs
Pune
Remote
Job Title: Associate Director - Power Platform
Job Date: 09-07-25
Job Location: 90% Remote
Job Type: Full Time

Job Summary:
Our client is a specialized digitalization solutions provider with a strong presence in Europe. Established in 2019 in Germany, we are dedicated to delivering innovative solutions in the areas of data platforms and integrations, BI, RPA, and AI. We cater to large corporates as well as SMEs, helping them maximize value from new-age technologies. We now look to create our own presence in India. If you would enjoy being among the earliest employees in India, working closely with leadership to build our Indian delivery capabilities, then this is a great opportunity for you.

Position Overview:
As an Associate Director - Power Platform, you will be part of our core senior India-based team. You will be responsible for leading the design, implementation, and optimization of RPA and low-code solutions leveraging the Power Platform for our high-profile European clients. You will be hands-on and work closely with clients to understand their needs, provide expert guidance, and develop solutions that integrate Power Automate with Power BI and Power Apps. Additionally, you will play a key role in building and mentoring a team focused on Power Platform solutions, ensuring the growth and capability of our Power Platform practice. This position requires a good balance of technical and leadership capabilities: supporting the customers while growing the India presence.

Job Responsibilities:
- Collaborate with large European firms to analyze business processes and identify automation opportunities using Power Automate, Power BI, and Power Apps.
- Design, develop, and implement innovative solutions that integrate with various Power Platform products, such as Power Automate with Power BI for data visualization and reporting.
- Create custom applications using Power Apps that enhance automation and user experience.
- Set up the Power Platform within organizations, together with best-practice governance from security, quality, performance, and commercial perspectives.
- Provide training and support to clients on Power Platform functionalities and best practices.
- Help build and develop a dedicated team of Power Platform consultants, sharing knowledge and fostering a collaborative environment.
- Conduct performance tuning and troubleshooting of existing solutions to ensure optimal operation.
- Stay updated on the latest features and updates in the low-code and RPA space.
- Leverage AI tools such as Coder, Claude Code, and ChatGPT to expedite project delivery.
- Bring thought leadership to client meetings and build customer confidence in our company.
- Prepare structured documentation, including process maps, technical specifications, and user guides.
- Mentor junior consultants and contribute to the development of best practices within the team.
- Participate in project planning, estimation, and status reporting.

Job Requirements:

Educational qualifications
- Minimum bachelor's degree in Computer Science, Information Technology, or a related field.

Technical qualifications
- 10+ years of experience in IT, with a minimum of 7+ years in RPA and low-code development, specifically Microsoft Power Platform.
- Proficient in creating high-quality solutions on the Power Platform, leveraging all components such as Power Apps, Power Automate, Power Pages, virtual agents, etc.
- Experience implementing Power Platform governance within organizations.
- Strong knowledge of integrating solutions with the Microsoft 365, Dynamics 365, and Power Platform portfolio.
- Good understanding of how to roll out the Power Platform within organizations, including product flavors and licenses.
- Ability to accurately estimate the effort needed and drive the pace of the project to deliver on time.
- Must have participated in pre-sales support and business development activities.
- Hands-on experience with Power BI, data engineering, and data warehousing on the cloud is a huge plus.
- Experience with REST APIs and custom integrations with a variety of systems is a huge plus.
- Relevant certifications (e.g., Microsoft Certified: Power Platform Solution Architect) are a plus.

Personal Attributes
- Experience dealing with customers at a technical and non-technical level.
- Ability to lead conversations and structure ambiguous topics.
- Self-starter motivation to excel without needing supervision.
- Excellent problem-solving skills and the ability to analyze complex business requirements.
- Strong communication and interpersonal skills, with a focus on client engagement and satisfaction.
- Proven experience in team building or mentorship is a plus.

Salary & Perks:
Salary is no bar for the right candidate. Competitive salary commensurate with experience and qualifications. Opportunity for career growth and advancement.
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
At PwC, our team in managed services specializes in providing outsourced solutions and supporting clients across various functions. We help organizations enhance their operations, reduce costs, and boost efficiency by managing key processes and functions on their behalf. Our expertise lies in project management, technology, and process optimization, allowing us to deliver high-quality services to our clients. In managed service management and strategy at PwC, the focus is on transitioning and running services, managing delivery teams, programs, commercials, performance, and delivery risk. Your role will involve continuous improvement and optimization of managed services processes, tools, and services.

As a Managed Services - Data Engineer Senior Associate at PwC, you will be part of a team of problem solvers dedicated to addressing complex business issues from strategy to execution using Data, Analytics & Insights skills. Your responsibilities will include using feedback and reflection to enhance self-awareness and personal strengths, acting as a subject matter expert in your chosen domain, mentoring junior resources, and conducting knowledge-sharing sessions. You will be required to demonstrate critical thinking, ensure quality of deliverables, adhere to SLAs, and participate in incident, change, and problem management. Additionally, you will be expected to review your work and that of others for quality, accuracy, and relevance, as well as demonstrate leadership capabilities by working directly with clients and leading engagements.

The primary skills required for this role include ETL/ELT, SQL, SSIS, SSMS, Informatica, and Python, with secondary skills in Azure/AWS/GCP, Power BI, Advanced Excel, and Excel Macros. As a Data Ingestion Senior Associate, you should have extensive experience in developing scalable, repeatable, and secure data structures and pipelines, designing and implementing ETL processes, monitoring and troubleshooting data pipelines, implementing data security measures, and creating visually impactful dashboards for data reporting. You should also have expertise in writing and analyzing complex SQL queries, be proficient in Excel, and possess strong communication, problem-solving, quantitative, and analytical abilities.

In our Managed Services platform, we focus on leveraging technology and human expertise to deliver simple yet powerful solutions to our clients. Our team of skilled professionals, combined with advanced technology and processes, enables us to provide effective outcomes and add greater value to our clients' enterprises. We aim to empower our clients to focus on their business priorities by providing flexible access to world-class business and technology capabilities that align with today's dynamic business environment.

If you are a candidate who thrives in a high-paced work environment, capable of handling critical Application Evolution Service offerings, engagement support, and strategic advisory work, then we are looking for you to join our team in the Data, Analytics & Insights Managed Service at PwC. Your role will involve working on a mix of help desk support, enhancement and optimization projects, and strategic roadmap initiatives, while also contributing to customer engagements from both a technical and relationship perspective.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
Pune, Maharashtra
On-site
You will be joining our data engineering team as an experienced Python + Databricks Developer. Your role will involve designing, developing, and maintaining scalable data pipelines using Databricks and Apache Spark, and writing efficient Python code for data transformation, cleansing, and analytics. Collaboration with data scientists, analysts, and engineers to understand data needs and deliver high-performance solutions is a key part of this role. Additionally, you will optimize and tune data pipelines for performance and cost efficiency, and implement data validation, quality checks, and monitoring. Working with cloud platforms, preferably Azure or AWS, to manage data workflows will also be part of your responsibilities. Ensuring best practices in code quality, version control, and documentation is essential for this role. (A transformation-and-validation sketch follows this listing.)

To be successful in this position, you should have at least 5 years of professional experience in Python development and 3 years of hands-on experience with Databricks, including notebooks, clusters, Delta Lake, and job orchestration. Strong experience with Spark, particularly PySpark, is required. Proficiency in working with large-scale data processing and ETL/ELT pipelines is necessary, along with a solid understanding of data warehousing concepts and SQL. Experience with Azure Data Factory, AWS Glue, or other data orchestration tools would be advantageous. Familiarity with version control tools like Git is also desired. Excellent problem-solving and communication skills are important for this role.
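To illustrate the transformation and data validation work described above, here is a minimal PySpark sketch in the Databricks style: it applies light typing, routes rows that fail basic quality rules to a quarantine table, and fails the job if the rejection rate is too high. The table names, columns, rules, and 5% threshold are all assumptions, and Delta tables are presumed available as they are on Databricks.

```python
# Minimal PySpark sketch of a Databricks-style transform with an inline
# validation gate: rows failing basic quality rules are quarantined rather
# than silently loaded. Table and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-transform").getOrCreate()

orders = spark.table("bronze.orders")

cleaned = (
    orders
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("double"))
)

# Quality rules: a valid row has a key, a parsable date, and a positive amount.
is_valid = (
    F.col("order_id").isNotNull()
    & F.col("order_date").isNotNull()
    & (F.col("amount") > 0)
)

cleaned.filter(is_valid).write.format("delta").mode("append").saveAsTable("silver.orders")
cleaned.filter(~is_valid).write.format("delta").mode("append").saveAsTable("quarantine.orders")

# Simple monitoring hook: fail the job if too many rows were rejected.
rejected_ratio = cleaned.filter(~is_valid).count() / max(cleaned.count(), 1)
if rejected_ratio > 0.05:
    raise RuntimeError(f"quarantined {rejected_ratio:.1%} of rows, investigate upstream data")
```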
Posted 3 weeks ago
6.0 - 10.0 years
0 Lacs
pune, maharashtra
On-site
Myers-Holum is expanding the NSAW Practice and is actively seeking experienced Enterprise Architects with strong end-to-end data warehousing and business intelligence experience to play a pivotal role leading client engagements on this team.

As an Enterprise Architect specializing in Data Integration and Business Intelligence, you will be responsible for leading the strategic design, architecture, and implementation of enterprise data solutions to ensure alignment with clients' long-term business goals. You will have the opportunity to develop and promote architectural visions for data integration, Business Intelligence (BI), and analytics solutions across various business functions and applications. Leveraging cutting-edge technologies such as the Oracle NetSuite Analytics Warehouse (NSAW) platform, NetSuite ERP, Suite Commerce Advanced (SCA), and other cloud-based and on-premise tools, you will design and build scalable, high-performance data warehouses and BI solutions for clients.

In this role, you will lead cross-functional teams in developing data governance frameworks, data models, and integration architectures to facilitate seamless data flow across disparate systems. By translating high-level business requirements into technical specifications, you will ensure that data architecture decisions align with broader organizational IT strategies and compliance standards. Additionally, you will architect end-to-end data pipelines, integration frameworks, and governance models to enable the seamless flow of structured and unstructured data from multiple sources.

Your responsibilities will also include providing thought leadership in evaluating emerging technologies, tools, and best practices for data management, integration, and business intelligence. You will oversee the deployment and adoption of key enterprise data initiatives, engage with C-suite executives and senior stakeholders to communicate architectural solutions, and lead and mentor technical teams to foster a culture of continuous learning and innovation in data management, BI, and integration.

Furthermore, as part of the MHI team, you will have the opportunity to contribute to the development of internal frameworks, methodologies, and standards for data architecture, integration, and BI. By staying up to date with industry trends and emerging technologies, you will continuously evolve the enterprise data architecture to meet the evolving needs of the organization and its clients.

To qualify for this role, you should possess 10+ years of relevant professional experience in data management, business intelligence, and integration architecture, along with 6+ years of experience in designing and implementing enterprise data architectures. You should have expertise in cloud-based data architectures, proficiency in data integration tools, experience with relational databases, and a strong understanding of BI platforms. Additionally, you should have hands-on experience with data governance, security, and compliance frameworks, as well as exceptional communication and stakeholder management skills.

Joining Myers-Holum as an Enterprise Architect offers you the opportunity to collaborate with curious and thought-provoking minds, shape your future, and positively influence change for clients. You will be part of a dynamic team that values continuous learning, growth, and innovation, while providing stability and growth opportunities within a supportive and forward-thinking organization.
If you are ready to embark on a rewarding career journey with Myers-Holum and contribute to the evolution of enterprise data architecture, we invite you to explore the possibilities and discover your true potential with us.
Posted 3 weeks ago
3.0 - 7.0 years
0 Lacs
maharashtra
On-site
NTT DATA is looking for a Business Consulting-Technical Analyst with expertise in ETL and GCP using PySpark to join their team in Pune, Maharashtra (IN-MH), India (IN). As part of the team, your main responsibilities will include designing, implementing, and optimizing data pipelines on GCP using PySpark for efficient and scalable data processing. You will also be responsible for building and maintaining ETL workflows for extracting, transforming, and loading data into various GCP services.

You will leverage GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc for data storage, processing, and analysis. Your role will also involve utilizing PySpark for data manipulation, cleansing, enrichment, and validation, as well as ensuring the performance and scalability of data processing jobs on GCP. Collaboration with data scientists, analysts, and other stakeholders to understand data requirements and translate them into technical solutions is a key aspect of this role. Additionally, you will be responsible for implementing and maintaining data quality standards, security measures, and compliance with data governance policies on GCP. Troubleshooting and resolving issues related to data pipelines and infrastructure will also be part of your duties, along with staying updated with the latest GCP services, PySpark features, and best practices in data engineering.

The ideal candidate for this position should have a strong understanding of GCP services like BigQuery, Cloud Storage, Dataflow, and Dataproc; demonstrated experience in using PySpark for data processing, transformation, and analysis; solid Python programming skills; proficiency in SQL for querying and manipulating data in relational databases; experience with data modeling, ETL processes, and data warehousing concepts; an understanding of big data principles and distributed computing concepts; and excellent communication and collaboration skills to work effectively with cross-functional teams.

NTT DATA is a trusted global innovator of business and technology services, serving 75% of the Fortune Global 100. As a Global Top Employer, NTT DATA has diverse experts in more than 50 countries, providing services that include business and technology consulting, data and artificial intelligence solutions, industry solutions, as well as the development, implementation, and management of applications, infrastructure, and connectivity. NTT DATA is committed to helping clients innovate, optimize, and transform for long-term success, with a focus on digital and AI infrastructure. Being part of the NTT Group, NTT DATA invests significantly in R&D to support organizations and society in confidently transitioning into the digital future.
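As a hedged sketch of the Cloud Storage-to-BigQuery flow described above: the bucket, dataset, and column names below are illustrative assumptions, and the spark-bigquery connector is assumed to be available on the Dataproc cluster.

```python
# Illustrative PySpark job on Dataproc: read Parquet from GCS, aggregate, write to BigQuery.
# Paths and table names are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("gcs_to_bq").getOrCreate()

events = spark.read.parquet("gs://example-landing-bucket/events/")   # hypothetical path

daily_counts = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .groupBy("event_date", "event_type")
          .count()
)

(daily_counts.write
             .format("bigquery")
             .option("table", "analytics.daily_event_counts")         # hypothetical target
             .option("temporaryGcsBucket", "example-temp-bucket")
             .mode("overwrite")
             .save())
```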
Posted 3 weeks ago
6.0 - 10.0 years
20 - 25 Lacs
Bengaluru
Work from Office
Database administration skills: database software installation, patch management, and maintenance; data ETL and ELT jobs on Microsoft SQL Server and Oracle databases; Azure Data Factory and Synapse; data warehousing; data mining; database backups and recovery.
Required Candidate profile: 6 - 9 yrs of exp as Database Administrator; data warehouse and data lake experience; SQL development skills covering tables, views, schemas, procedures, functions, triggers, CTEs, and cursors; security and logging; data structures; data integration.
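As an illustration of the backup-and-recovery monitoring a role like this typically automates, below is a hedged Python sketch that checks SQL Server backup history via pyodbc. The connection string, host name, and 24-hour threshold are assumptions for the example, not requirements from the posting.

```python
# Hedged sketch: flag databases missing a recent FULL backup, using msdb backup history.
# Connection string and threshold are hypothetical.
import pyodbc
from datetime import datetime, timedelta

CONN_STR = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=example-sql-host;DATABASE=msdb;Trusted_Connection=yes;"   # hypothetical host
)

def databases_missing_recent_full_backup(hours: int = 24) -> list[str]:
    """Return databases whose latest FULL backup is older than the given window."""
    cutoff = datetime.utcnow() - timedelta(hours=hours)
    query = """
        SELECT d.name, MAX(b.backup_finish_date) AS last_full_backup
        FROM sys.databases d
        LEFT JOIN msdb.dbo.backupset b
               ON b.database_name = d.name AND b.type = 'D'
        WHERE d.name <> 'tempdb'
        GROUP BY d.name
    """
    with pyodbc.connect(CONN_STR) as conn:
        rows = conn.cursor().execute(query).fetchall()
    return [name for name, last in rows if last is None or last < cutoff]

if __name__ == "__main__":
    for db in databases_missing_recent_full_backup():
        print(f"WARNING: {db} has no full backup in the last 24 hours")
```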
Posted 3 weeks ago
10.0 - 14.0 years
0 Lacs
noida, uttar pradesh
On-site
As a Senior ETL & Data Streaming Engineer at DataFlow Group, you will have the opportunity to utilize your extensive expertise in designing, developing, and maintaining robust data pipelines. With over 10 years of experience in the field, you will play a pivotal role in ensuring the scalability, fault-tolerance, and performance of our ETL processes. Your responsibilities will include architecting and building both batch and real-time data streaming solutions using technologies like Talend, Informatica, Apache Kafka, or AWS Kinesis. You will collaborate closely with data architects, data scientists, and business stakeholders to translate data requirements into efficient pipeline solutions and ensure data quality, integrity, and security across all storage solutions.

In addition to monitoring, troubleshooting, and optimizing existing data pipelines, you will also be responsible for developing and maintaining comprehensive documentation for all ETL and streaming processes. Your role will involve implementing data governance policies and best practices within the Data Lake and Data Warehouse environments, as well as mentoring junior engineers to foster a culture of technical excellence and continuous improvement.

To excel in this role, you should have a strong background in data engineering, with a focus on ETL, ELT, and data pipeline development. Your deep expertise in ETL tools, data streaming technologies, and AWS data services will be essential for success. Proficiency in SQL and at least one scripting language for data manipulation, along with strong database skills, will also be valuable assets in this position. If you are a proactive problem-solver with excellent analytical skills and strong communication abilities, this role offers you the opportunity to stay abreast of emerging technologies and industry best practices in data engineering, ETL, and streaming. Join us at DataFlow Group and be part of a team dedicated to making informed, cost-effective decisions through cutting-edge data solutions.
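To illustrate the streaming side of the role, a minimal, hedged consumer sketch using the kafka-python client is shown below; the topic, broker address, and event fields are illustrative assumptions, and a real pipeline would load validated events into the warehouse or lake rather than printing them.

```python
# Hedged sketch of a streaming consumption step; topic, broker, and fields are hypothetical.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "orders",                                            # hypothetical topic
    bootstrap_servers=["broker-1:9092"],                 # hypothetical broker
    group_id="etl-orders-loader",
    auto_offset_reset="earliest",
    value_deserializer=lambda raw: json.loads(raw.decode("utf-8")),
)

for message in consumer:
    event = message.value
    # A real pipeline would validate the event and load it downstream;
    # here we only apply a simple filter and log the result.
    if event.get("amount", 0) > 0:
        print(f"valid order {event.get('order_id')} at offset {message.offset}")
```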
Posted 3 weeks ago
10.0 - 20.0 years
25 - 40 Lacs
Bengaluru
Work from Office
Job Description: Experience in SQL and databases; experience in Python scripting; experience in the banking domain, including stock loan and stock lending and borrowing; cloud knowledge (AWS, etc.).
Posted 3 weeks ago