4.0 - 7.0 years
8 - 15 Lacs
Pune
Work from Office
• Experience with ETL testing.
• Ability to create Databricks notebooks to automate manual tests (a minimal sketch follows this listing).
• Ability to create and run test pipelines and interpret the results.
• Ability to test complex reports and write queries to check each metric.

Required Candidate profile
• Experience in Azure Databricks and SQL queries - ability to analyse data in a Data Warehouse environment.
• Ability to test complex reports and write queries to check each metric.
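As a rough illustration of the notebook-based test automation this role describes, here is a minimal PySpark reconciliation check; the table names, column, and tolerance are invented for illustration, not taken from the posting.

```python
# Minimal sketch of an automated ETL reconciliation check, as might run in a
# Databricks notebook. Table names and the tolerance are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

source = spark.table("staging.orders")         # hypothetical source table
target = spark.table("warehouse.fact_orders")  # hypothetical target table

# Row-count reconciliation between source and target.
src_count, tgt_count = source.count(), target.count()
assert src_count == tgt_count, f"Row count mismatch: {src_count} vs {tgt_count}"

# Metric-level check: compare an aggregated measure on both sides.
src_total = source.agg(F.sum("amount")).first()[0]
tgt_total = target.agg(F.sum("amount")).first()[0]
assert abs(src_total - tgt_total) < 0.01, "Aggregate amount drifted between layers"
```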
Posted 2 months ago
4.0 - 8.0 years
0 - 1 Lacs
Bengaluru
Remote
Dear Candidate,

Greetings from ADPMN!!

Company profile: ADPMN was incorporated in 2019. However, with the undersigned's decades of experience in the software industry, the network he has cultivated, and his technical expertise, ADPMN is bound to grow steadily and quickly. It provides quality software development services with emphasis on meeting the unique business needs of its clients. It has the capacity to provide consulting services for complex projects and handles client needs in areas that require information technology expertise. Software and information technology applications have become part of every domain, and a competent service provider in this area must have a workforce with insight into the areas that seek application of the technology to design software, web applications, and databases. For more details: https://adpmn.com/

Position Overview
Job Title: Junior Data Engineer
Experience: 4-8 Years
Location: Remote
Employment Type: Full-Time

Job Summary: We are seeking a highly motivated Junior Data Engineer to join our data engineering team. The ideal candidate will have foundational experience and strong knowledge of Azure cloud services, particularly Azure Databricks, PySpark, Azure Data Factory, and SQL. You will work closely with senior data engineers and business stakeholders to build, optimize, and maintain data pipelines and infrastructure in a cloud-based environment.

Key Responsibilities:
- Design, develop, and maintain scalable ETL/ELT data pipelines using Azure Data Factory and PySpark on Azure Databricks.
- Collaborate with cross-functional teams to gather and understand data requirements.
- Implement data transformations, cleansing, and aggregations using PySpark and SQL (a minimal sketch follows this listing).
- Monitor and troubleshoot data workflows and ensure data integrity and availability.
- Assist in performance tuning of data pipelines and queries.
- Work with Azure-based data storage solutions such as Data Lake Storage and SQL Databases.
- Document data flows, pipeline architecture, and technical procedures.
- Stay updated with the latest Azure and data engineering tools and best practices.

Required Skills:
- Hands-on experience or strong academic understanding of the Azure cloud platform, especially Azure Databricks, Azure Data Factory, and Azure Data Lake.
- Solid knowledge of PySpark and distributed data processing concepts.
- Strong proficiency in SQL and database fundamentals.
- Good understanding of ETL/ELT processes and data pipeline development.
- Basic understanding of DevOps principles and version control (e.g., Git) is a plus.
- Excellent analytical, problem-solving, and communication skills.

Job Location: Remote
Mode of Employment: Permanent to ADPMN (C2H)
Experience: 4 to 8 years
# of Positions: 5

Please feel free to reach out on +91 95425 33666 / rajendrapv@adpmn.com if you need more information! Let me know your interest along with the following details:
Full Name:
Date of Birth:
Total Experience as Data Engineer:
Relevant Experience in Azure Databricks, Azure Data Factory, and Azure Data Lake:
Current Company:
Current Payroll Company (if any):
Current Location:
Current CTC:
Expected CTC:
Notice Period:
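For context on the "transformations, cleansing, and aggregations using PySpark" responsibility above, here is a minimal sketch of what such a Databricks job might look like; the ADLS paths and column names are hypothetical.

```python
# Minimal sketch of a cleanse-and-aggregate step in PySpark on Databricks.
# Storage paths and column names are illustrative only.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.format("parquet").load(
    "abfss://raw@mydatalake.dfs.core.windows.net/sales/"  # hypothetical ADLS path
)

cleaned = (
    raw.dropDuplicates(["order_id"])                    # remove duplicate records
       .na.fill({"quantity": 0})                        # default missing quantities
       .withColumn("order_date", F.to_date("order_date"))
)

# Aggregate cleansed rows into a daily revenue measure.
daily_sales = cleaned.groupBy("order_date").agg(
    F.sum(F.col("quantity") * F.col("unit_price")).alias("revenue")
)

daily_sales.write.format("delta").mode("overwrite").save(
    "abfss://curated@mydatalake.dfs.core.windows.net/daily_sales/"
)
```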
Posted 2 months ago
4.0 - 6.0 years
8 - 13 Lacs
Pune
Work from Office
Horizon Job: ETL Developer OR DWH/BI Developer
Job Seniority: Advanced (4-6 years) OR Experienced (3-4 years)
Location: Magarpatta City, Pune (Hybrid)
Unit: Amdocs Data and Intelligence

Technical Skills (all mandatory experience must appear in the resume under roles and responsibilities):
- Mandatory working experience in Azure Databricks/PySpark.
- Expert knowledge in Oracle/SQL - ability to write complex SQL/PL-SQL and performance tune.
- 2+ years of experience in Snowflake.
- 2+ years of hands-on experience in Spark or Databricks to build data pipelines.
- Strong experience with cloud technologies.
- 1+ years of hands-on experience in development, performance tuning, and loading into Snowflake (a hedged load sketch follows this listing).
- Experience working with Azure Repos or GitHub.
- 1+ years of hands-on experience with Azure DevOps, GitHub, or any other DevOps tool.
- Hands-on in Unix and advanced Unix shell scripting.
- Open to work in shifts.

Notice Period: immediate to 1 month
Excellent communication skills.
This is a C2H opportunity.
Interested candidates, share your resume at dipti.bhaisare@in.experis.com
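As a hedged illustration of the "loading into Snowflake" experience the posting asks for, a bulk load via the Snowflake Python connector might look like the sketch below; the account, credentials, stage, and table names are placeholders, not values from the posting.

```python
# Hedged sketch of a bulk load into Snowflake using COPY INTO from a stage.
# All connection details, stage, and table names are hypothetical.
import snowflake.connector

conn = snowflake.connector.connect(
    account="my_account",   # hypothetical account identifier
    user="etl_user",
    password="***",
    warehouse="LOAD_WH",
    database="ANALYTICS",
    schema="STAGING",
)

with conn.cursor() as cur:
    # COPY INTO pulls staged files into a table in one set-based operation.
    cur.execute("""
        COPY INTO staging.orders
        FROM @etl_stage/orders/
        FILE_FORMAT = (TYPE = PARQUET)
        MATCH_BY_COLUMN_NAME = CASE_INSENSITIVE
    """)
conn.close()
```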
Posted 2 months ago
4.0 - 9.0 years
10 - 20 Lacs
Hyderabad, Pune, Gurugram
Work from Office
Job Description

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm.

Role: Azure Data Engineer
Experience: 4+ Years
Skill Set: Azure Synapse, PySpark, ADF, and SQL
Location: Pune, Hyderabad, Gurgaon

- 5+ years of experience in software development, technical operations, and running large-scale applications.
- 4+ years of experience in developing or supporting Azure Data Factory (API/APIM), Azure Databricks, Azure DevOps, Azure Data Lake Storage (ADLS), SQL and Synapse data warehouse, and Azure Cosmos DB.
- 2+ years of experience working in data engineering.
- Any experience in data virtualization products like Denodo is desirable.
- Azure Data Engineer or Solutions Architect certification is desirable.
- Good understanding of container platforms like Docker and Kubernetes.
- Able to assess the application/platform from time to time for architectural improvements and provide inputs to the relevant teams.
- Very good troubleshooting skills (quick identification of application issues and quick resolutions with no or minimal user/business impact).
- Hands-on experience working with high-volume, mission-critical applications.
- Deep appreciation of IT tools, techniques, systems, and solutions.
- Excellent communication skills, along with experience driving triage calls involving different technical stakeholders.
- Creative problem-solving skills for cross-functional issues amidst changing priorities.
- Flexible and resourceful in swiftly managing changing operational goals and demands.
- Good experience handling escalations, taking complete responsibility and ownership of all critical issues through to technical/logical closure.
- Good understanding of the IT Infrastructure Library (ITIL) framework and the various IT Service Management (ITSM) tools available in the marketplace.
Posted 2 months ago
7.0 - 12.0 years
7 - 12 Lacs
Hyderabad
Work from Office
We are seeking a highly skilled Azure Databricks Engineering Lead to design, develop, and optimize data pipelines using Azure Databricks. The ideal candidate will have deep expertise in data engineering, cloud-based data processing, and ETL workflows to support business intelligence and analytics initiatives.

Primary Responsibilities
- Design, develop, and implement scalable data pipelines using Azure Databricks.
- Develop PySpark-based data transformations and integrate structured and unstructured data from various sources.
- Optimize Databricks clusters for performance, scalability, and cost-efficiency within the Azure ecosystem.
- Monitor, troubleshoot, and resolve performance bottlenecks in Databricks workloads.
- Manage orchestration and scheduling of end-to-end data pipelines using tools like Apache Airflow, ADF scheduling, and Logic Apps (see the Airflow sketch after this listing).
- Collaborate effectively with the architecture team in designing solutions and with product owners in validating the implementations.
- Implement best practices to enable data quality, monitoring, logging, and alerting for failure scenarios and exception handling.
- Document step-by-step processes to troubleshoot potential issues and deliver cost-optimized cloud solutions.
- Provide technical leadership, mentorship, and best practices for junior data engineers.
- Stay up to date with Azure and Databricks advancements to continuously improve data engineering capabilities.

Required Qualifications
- Overall 7+ years of experience in the IT industry and 6+ years in data engineering, with at least 3 years of hands-on experience in Azure Databricks.
- Experience with CI/CD pipelines for data engineering solutions (Azure DevOps, Git).
- Hands-on experience with Delta Lake, Lakehouse architecture, and data versioning.
- Solid expertise in the Azure ecosystem, including Azure Synapse, Azure SQL, ADLS, and Azure Functions.
- Proficiency in PySpark, Python, and SQL for data processing in Databricks.
- Deep understanding of data warehousing, data modeling (Kimball/Inmon), and big data processing.
- Solid knowledge of performance tuning, partitioning, caching, and cost optimization in Databricks.
- Proven excellent written and verbal communication skills.
- Proven excellent problem-solving skills and ability to work independently.
- Ability to balance multiple competing priorities and execute accordingly.
- Highly self-motivated, with excellent interpersonal and collaborative skills.
- Ability to anticipate risks and obstacles and develop mitigation plans.
- Proven excellent documentation experience and skills.

Preferred Qualifications
- Azure certifications (DP-203, AZ-304, etc.).
- Experience with infrastructure as code, scheduling as code, and automating operational activities using Terraform scripts.
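The orchestration duty above names Apache Airflow; a minimal sketch of an Airflow DAG that triggers a Databricks job follows, assuming the Databricks provider package is installed. The DAG id, job id, and connection id are placeholders.

```python
# Hedged sketch: an Airflow DAG that triggers an existing Databricks job daily.
# Requires the apache-airflow-providers-databricks package; ids are hypothetical.
from datetime import datetime

from airflow import DAG
from airflow.providers.databricks.operators.databricks import DatabricksRunNowOperator

with DAG(
    dag_id="daily_databricks_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule="@daily",                   # run once per day
    catchup=False,
) as dag:
    run_etl = DatabricksRunNowOperator(
        task_id="run_databricks_job",
        databricks_conn_id="databricks_default",  # assumes a configured connection
        job_id=1234,                              # hypothetical Databricks job id
    )
```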
Posted 2 months ago
4.0 - 9.0 years
6 - 10 Lacs
Hyderabad
Work from Office
- Building and operationalizing large-scale enterprise data solutions and applications using one or more Azure data and analytics services in combination with custom solutions - Azure Synapse/Azure SQL DWH, Azure Data Lake, Azure Blob Storage, Spark, HDInsight, Databricks, Cosmos DB, Event Hub/IoT Hub.
- Experience in migrating on-premises data warehouses to data platforms on the Azure cloud.
- Designing and implementing data engineering, ingestion, and transformation functions in Azure Synapse or Azure SQL Data Warehouse.
- Spark on Azure, as available in HDInsight and Databricks.
- Good customer communication.
- Good analytical skills.
Posted 2 months ago
5.0 - 8.0 years
12 - 20 Lacs
Pune
Work from Office
We are seeking a seasoned Azure DevOps Engineer to join our dynamic team. In this role, you will play a pivotal part in designing, developing, and maintaining our robust infrastructure on Azure using Terraform. As a Senior Azure DevOps Engineer, you will possess a deep understanding of Azure services and technologies, including Azure DevOps, Azure Repos, Azure Pipelines, Azure Boards, and Azure Test Plans. You will also have extensive experience with PowerShell, GitHub, CI/CD practices, and Azure administration.

You will be responsible for building and maintaining scalable, reliable, and high-performance CI/CD pipelines that seamlessly integrate with our existing ecosystem, and for Azure administration, including managing resources, implementing security measures, and ensuring high availability and disaster recovery. You will collaborate closely with developers, data professionals, and business users to ensure that our infrastructure is accessible, secure, and readily available for development, testing, and deployment.

Responsibilities:
- Use tools like Terraform to define and deploy infrastructure as code, including provisioning MS Azure resources, configuring security settings, and applying updates.
- Create and implement effective DevOps strategies, planning how development, testing, and deployment processes will be streamlined using Azure DevOps tools.
- Collaborate with development teams to establish efficient workflows, including setting up version control (e.g., Azure Repos or Git) and ensuring code quality through continuous integration (CI) practices.
- As a DevOps engineer, design and manage CI/CD pipelines that automatically build, test, and validate code changes, ensuring seamless integration into the application.
- As an Azure administrator, create and manage Azure resources (virtual machines, storage accounts, networks, databases) by setting up resource groups, defining access controls, and ensuring proper configuration.
- Ensure the security and compliance of the Azure environment, including managing access controls and implementing security measures.
- Monitor and optimize Azure resources for maximum performance and cost-effectiveness.
- Troubleshoot and resolve platform issues in the Azure environment.
- Snowflake admin knowledge is an add-on: create and manage Snowflake warehouses, monitor user activity and audit logs, define and manage roles (such as ACCOUNTADMIN, SYSADMIN), optimize query execution plans, analyze bottlenecks, and fine-tune SQL queries.

Qualifications:
- Bachelor's degree or equivalent from an accredited institution.
- Minimum of 3 years of DevOps experience, familiarity with CI/CD practices and Terraform, and experience with tools like Jenkins or Azure DevOps.
- Minimum 6-8 years of overall experience, with good experience in Azure administration tasks, including resource provisioning, security configuration, and updates.
- Good to have: experience with Azure data engineering using Azure Data Factory, Azure Databricks, Azure Synapse Analytics, and Azure SQL Database.
- Good to have: Snowflake administration experience.

Position Criteria:
- Leadership: A senior DevOps engineer must be able to lead and mentor team members wherever required.
- Problem-solving: Must be able to analyze and debug complex DevOps and administration problems and develop innovative solutions.
- Customer service: Must be able to understand customer needs and work to deliver solutions that meet those needs.
- Creativity: Think outside the box and come up with new and innovative solutions to data problems.
- Analytical skills: Must be able to analyze large sets of DevOps requirements, identify patterns and insights, and use that information to inform decision-making.
Posted 2 months ago
2.0 - 5.0 years
3 - 12 Lacs
Kolkata, Pune, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 2-5 Years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3 Rounds
Notice Period: Immediate to 30 days

Roles and Responsibilities:
- Design, develop, and maintain large-scale data pipelines using Azure Data Factory (ADF) to extract, transform, and load data from various sources into Azure Databricks.
- Collaborate with cross-functional teams to understand business requirements and design scalable solutions for big data processing using PySpark on Azure Data Lake Storage.
- Develop complex SQL queries to optimize database performance and troubleshoot issues in real time.
- Ensure high availability of the system by implementing monitoring tools and performing regular maintenance tasks.

Job Requirements:
- 2-5 years of experience in designing and developing large-scale data systems on the Microsoft Azure platform.
- Strong understanding of Azure Data Factory (ADF), Azure Databricks, and Azure Data Lake Storage concepts.
- Proficiency in writing efficient Python code using PySpark for big data processing.
Posted 2 months ago
5.0 - 7.0 years
5 - 16 Lacs
Hyderabad, Bengaluru
Work from Office
Company Name: Tech Mahindra
Experience: 5-7 Years
Location: Bangalore/Hyderabad
Interview Mode: Virtual
Interview Rounds: 2-3 Rounds
Notice Period: Immediate to 30 days

Roles and Responsibilities:
- Design, develop, test, deploy, and maintain large-scale data pipelines using Azure Data Factory (ADF) to integrate various data sources into a centralized data lake.
- Collaborate with cross-functional teams to gather requirements for data processing needs and design solutions that meet business objectives.
- Develop complex SQL queries to extract insights from large datasets stored in Azure Databricks or other relational databases.
- Troubleshoot issues related to ADF pipeline failures, data quality problems, and performance optimization.

Job Requirements:
- 5-7 years of experience in designing and developing large-scale data pipelines using ADF.
- Strong understanding of Azure Databricks, including its architecture, features, and best practices.
- Proficiency in writing complex SQL queries for querying large datasets stored in relational databases.
- Experience working with PySpark on AWS EMR clusters.
Posted 2 months ago
5.0 - 7.0 years
12 - 18 Lacs
Pune, Gurugram, Bengaluru
Hybrid
Job Description: Azure Data Engineer

Work Location: Hybrid - Gurugram / Pune / Bangalore
Experience: 5 to 8 years
Apply now: aditya.rao@estrel.ai
Include: Resume | CTC | ECTC | Notice (only immediate joiners considered) | LinkedIn URL

Key Responsibilities:
- Design, build, and maintain scalable data pipelines and solutions using Azure Data Engineering tools.
- Work with large-scale datasets and develop efficient data processing architectures.
- Collaborate with cross-functional teams to understand data requirements and translate them into technical solutions.
- Implement data governance, security, and quality frameworks as part of the solution architecture.

Technical Skills Required:
- 4+ years of hands-on experience with Azure Data Engineering tools such as Event Hub, Azure Data Factory, Cosmos DB, Synapse, Azure SQL Database, Databricks, and Azure Data Explorer.
- 3+ years of experience working with Python/PySpark, Spark, Scala, Hive, and Impala.
- Strong SQL and coding skills.
- Familiarity with additional Azure services like Azure Data Lake Analytics, U-SQL, and Azure SQL Data Warehouse.
- Solid understanding of Modern Data Warehouse architectures, Lambda architecture, and data warehousing principles.

Other Requirements:
- Proficiency in scripting languages (e.g., Shell).
- Strong analytical and organizational abilities.
- Ability to work effectively both independently and in a team environment.
- Experience working in Agile delivery models.
- Awareness of software development best practices.
- Excellent written and verbal communication skills.
- Azure Data Engineer certification is a plus.
Posted 2 months ago
4.0 - 8.0 years
4 - 8 Lacs
Hyderabad, Bengaluru
Work from Office
- Minimum 4 years of experience in a relevant field.
- Hands-on experience in Databricks, SQL, Azure Data Factory, and Azure DevOps.
- Strong expertise in Microsoft Azure cloud platform services (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, Azure Synapse Analytics).
- Proficient in CI/CD pipelines in Azure DevOps for automated deployments.
- Good grasp of performance optimization techniques such as temp tables, CTEs, indexing, merge statements, and joins (see the MERGE sketch after this listing).
- Familiarity with advanced SQL and programming skills (e.g., Python, PySpark).
- Familiarity with data warehousing and data modelling concepts.
- Good at data management and deployment processes using Azure Data Factory, Databricks, and Azure DevOps.
- Knowledge of integrating Azure services with DevOps.
- Experience in designing and implementing scalable data architectures.
- Proficient in ETL processes and tools.
- Strong communication and collaboration skills.
- Certifications in relevant Azure technologies are a plus.
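To make the merge-statement technique above concrete, here is a minimal Delta Lake MERGE upsert as it might be run from a Databricks notebook; the table and column names are invented for illustration.

```python
# Minimal sketch of a MERGE-based upsert on Delta tables in Databricks.
# Table and column names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# MERGE applies updates and inserts in one set-based pass, avoiding
# row-by-row processing of the staged changes.
spark.sql("""
    MERGE INTO warehouse.dim_customer AS tgt
    USING staging.customer_updates AS src
    ON tgt.customer_id = src.customer_id
    WHEN MATCHED THEN
        UPDATE SET tgt.email = src.email, tgt.city = src.city
    WHEN NOT MATCHED THEN
        INSERT (customer_id, email, city)
        VALUES (src.customer_id, src.email, src.city)
""")
```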
Posted 2 months ago
10.0 - 15.0 years
15 - 30 Lacs
Pallavaram
Work from Office
Data Engineering Lead

Company Name: Blackstraw.ai
Office Location: Chennai (Work from Office)
Job Type: Full-time
Experience: 10 - 15 Years
Candidates who can join immediately will be preferred.

Job Description: As a lead data engineer you will oversee data architecture, ETL processes, and analytics pipelines, ensuring efficiency, scalability, and quality.

Key Responsibilities:
- Work with clients to understand their data, and build the data structures and pipelines based on that understanding.
- Work on the application end to end, collaborating with UI and other development teams.
- Work with various cloud providers such as Azure and AWS.
- Engineer data using the Hadoop/Spark ecosystem.
- Design, build, optimize, and support new and existing data pipelines.
- Orchestrate jobs using tools such as Oozie, Airflow, etc.
- Develop programs for cleaning and processing data.
- Build data pipelines to migrate and load data into HDFS, either on-prem or in the cloud.
- Develop data ingestion/processing/integration pipelines effectively.
- Create Hive data structures and metadata, and load data into data lake / big data warehouse environments (a minimal ingestion sketch follows this listing).
- Performance-tune data pipelines to minimize cost.
- Keep code version control and the Git repository up to date.
- Explain the data pipeline to internal and external stakeholders.
- Build and maintain CI/CD for the data pipelines.
- Manage unit testing of all data pipelines.

Tech Stack:
- Minimum of 5+ years of working experience with Spark and Hadoop ecosystems.
- Minimum of 4+ years of working experience designing data streaming pipelines.
- Expert in Python, Scala, or Java.
- Experience in data ingestion and integration into a data lake using Hadoop-ecosystem tools such as Sqoop, Spark, SQL, Hive, Airflow, etc.
- Experience performance-tuning data pipelines.
- Minimum of 3+ years of experience with NoSQL and Spark Streaming.
- Knowledge of Kubernetes and Docker is a plus.
- Experience with cloud services, either Azure or AWS.
- Experience with on-prem distributions such as Cloudera/Hortonworks/MapR.
- Basic understanding of CI/CD pipelines.
- Basic knowledge of the Linux environment and commands.

Preferred Qualifications:
- Bachelor's degree in computer science or a related field.
- Proven experience with big data ecosystem tools such as Sqoop, Spark, SQL, API, Hive, Oozie, Airflow, etc.
- Solid experience in all phases of the SDLC, with 10+ years of experience (plan, design, develop, test, release, maintain, and support).
- Hands-on experience using Azure's data engineering stack.
- Should have implemented projects using programming languages such as Scala or Python.
- Working experience with complex SQL data-merging techniques such as windowing functions.
- Hands-on experience with on-prem distribution tools such as Cloudera/Hortonworks/MapR.
- Excellent communication, presentation, and problem-solving skills.

Key Traits:
- Excellent communication skills.
- Self-motivated and willing to work as part of a team.
- Able to collaborate and coordinate with onshore and offshore teams.
- A problem solver, proactive in solving the challenges that come his way.
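Here is a minimal sketch of the Hive/Spark ingestion step described in the responsibilities, assuming a Hive metastore is available to the cluster; the HDFS path and table names are placeholders.

```python
# Hedged sketch: ingest raw JSON from HDFS into a partitioned Hive table
# via Spark. Paths, database, and table names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("ingest_to_datalake")
    .enableHiveSupport()   # lets Spark read/write Hive metastore tables
    .getOrCreate()
)

events = spark.read.json("hdfs:///raw/events/2024-01-01/")  # hypothetical path

# A partitioned Hive table keeps scans pruned to the dates a query touches.
(events.write
    .mode("append")
    .partitionBy("event_date")
    .saveAsTable("datalake.events"))
```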
Posted 2 months ago
12.0 - 18.0 years
50 - 80 Lacs
Hyderabad
Work from Office
Executive Director - Data Management

Company Overview
Accordion is a global private equity-focused financial consulting firm specializing in driving value creation through services rooted in Data & Analytics and powered by technology. Accordion works at the intersection of Private Equity sponsors and portfolio companies' management teams across every stage of the investment lifecycle. We provide hands-on, execution-oriented support, driving value through the office of the CFO by building data and analytics capabilities and identifying and implementing strategic work, rooted in data and analytics. Accordion is headquartered in New York City with 10 offices worldwide. Join us and make your mark on our company.

Data & Analytics (Accordion | Data & Analytics)
Accordion's Data & Analytics (D&A) practice in India delivers cutting-edge, intelligent solutions to a global clientele, leveraging a blend of domain knowledge, sophisticated technology tools, and deep analytics capabilities to tackle complex business challenges. We partner with Private Equity clients and their Portfolio Companies across diverse sectors, including Retail, CPG, Healthcare, Media & Entertainment, Technology, and Logistics. D&A team members deliver data and analytical solutions designed to streamline reporting capabilities and enhance business insights across vast and complex data sets ranging from Sales, Operations, Marketing, Pricing, Customer Strategies, and more.

Working at Accordion in India means joining 800+ analytics, data science, finance, and technology experts in a high-growth, agile, and entrepreneurial environment to transform how portfolio companies drive value. It also means making your mark on Accordion's future by embracing a culture rooted in collaboration and a firm-wide commitment to building something great, together. Join us and experience a better way to work!

Location: Hyderabad, Telangana

Role Overview: Accordion is looking for an experienced Enterprise Data Architect to lead the strategy, design, and implementation of data architectures across all its data management projects. He/she will be part of the technology team and will possess in-depth knowledge of distinct types of data architectures and frameworks, including distributed large-scale implementations. He/she will collaborate closely with the client partnership team to design and recommend robust and scalable data architectures to clients, and work with engineering teams to implement them in on-premises or cloud-based environments. He/she will be a data evangelist and will conduct knowledge-sharing sessions in the company on various data management topics to spread awareness of data architecture principles and improve the overall capabilities of the team. The Enterprise Data Architect will also conduct design review sessions to validate and verify implementations, emphasizing and implementing best practices followed by exhaustive documentation in line with the design philosophy. He/she will have excellent communication skills and will possess industry-standard certification in the data architecture areas.

What You will do:
- Partner with clients to understand their business and create comprehensive requirements to enable development of optimal data architecture.
- Translate business requirements into logical and physical design of databases, data warehouses, and data streams.
- Analyze, plan, and define data architecture frameworks, including security, reference data, metadata, and master data.
- Create elaborate data management processes and procedures and consult with Senior Management to share the knowledge.
- Collaborate with client and internal project teams to devise and implement data strategies, build models, and assess shareholder needs and goals.
- Develop application programming interfaces (APIs) to extract and store data in the most optimal manner.
- Align business requirements with technical architecture and collaborate with the technical teams for implementation and tracking purposes.
- Research and track the latest developments in the field to maintain expertise about the latest best practices and techniques within the industry.

Ideally, you have:
- An undergraduate degree (B.E/B.Tech.); tier-1/tier-2 colleges are preferred.
- 12+ years of experience in a related field.
- Experience in designing logical and physical data architectures in various RDBMS (SQL Server, Oracle, MySQL, etc.), non-RDBMS (MongoDB, Cassandra, etc.), and data warehouse (Azure Synapse, AWS Redshift, Google BigQuery, Snowflake, etc.) environments.
- Deep knowledge and implementation experience of modern data warehouse principles using Kimball and Inmon models or Data Vault, including their application based on data quality requirements.
- In-depth knowledge of any one cloud-based infrastructure (AWS, Azure, Google Cloud) for solution design, development, and delivery (mandatory).
- Proven ability to take initiative, be innovative, and drive work through to completion.
- An analytical mind with a strong problem-solving attitude.
- Excellent communication skills, both written and verbal.
- Any Enterprise Data Architect certification will be an added advantage.

Why Explore a Career at Accordion:
- High growth environment: Semi-annual performance management and promotion cycles, coupled with a strong meritocratic culture, enable a fast track to leadership responsibility.
- Cross-domain exposure: Interesting and challenging work streams across industries and domains that always keep you excited, motivated, and on your toes.
- Entrepreneurial environment: Intellectual freedom to make decisions and own them. We expect you to spread your wings and assume larger responsibilities.
- Fun culture and peer group: Non-bureaucratic and fun working environment; a strong peer environment that will challenge you and accelerate your learning curve.

Other benefits for full-time employees:
- Health and wellness programs that include employee health insurance covering immediate family members and parents, term life insurance for employees, free health camps for employees, discounted health services (including vision, dental) for employee and family members, free doctor's consultations, counsellors, etc.
- Corporate meal card options for ease of use and tax benefits.
- Team lunches, company-sponsored team outings, and celebrations.
- Robust leave policy to support work-life balance, with a specially designed leave structure to support women employees for maternity and related requests.
- Reward and recognition platform to celebrate professional and personal milestones.
- A positive and transparent work environment, including various employee engagement and employee benefit initiatives to support personal and professional learning and development.
Posted 2 months ago
3.0 - 8.0 years
3 - 7 Lacs
Hyderabad
Work from Office
Azure Data Factory:
- Develop Azure Data Factory objects - ADF pipelines, configurations, parameters, variables, integration runtimes
- Hands-on knowledge of ADF activities (such as Copy, Stored Procedure, Lookup, etc.) and Data Flows
- ADF data ingestion and integration with other services

Azure Databricks:
- Experience in big data components such as Kafka, Spark SQL, DataFrames, and Hive DB implemented using Azure Databricks would be preferred
- Azure Databricks integration with other services
- Read and write data in Azure Databricks
- Best practices in Azure Databricks

Synapse Analytics:
- Import data into Azure Synapse Analytics with and without using PolyBase (a hedged load sketch follows this listing)
- Implement a data warehouse with Azure Synapse Analytics
- Query data in Azure Synapse Analytics
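For the Synapse import bullet above, one common pattern loads a DataFrame from Databricks using the legacy Synapse ("sqldw") connector, which stages data in storage and loads it via PolyBase/COPY under the hood; the JDBC URL, staging path, and table name below are placeholders, not values from the posting.

```python
# Hedged sketch: write a DataFrame from Databricks into Azure Synapse using
# the "sqldw" connector. Connection details and table are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

df = spark.table("curated.daily_sales")  # hypothetical curated dataset

(df.write
    .format("com.databricks.spark.sqldw")
    .option("url", "jdbc:sqlserver://myserver.database.windows.net:1433;database=mydw")
    .option("tempDir", "abfss://staging@mydatalake.dfs.core.windows.net/tmp/")
    .option("forwardSparkAzureStorageCredentials", "true")  # reuse Spark's storage creds
    .option("dbTable", "dbo.daily_sales")
    .mode("append")
    .save())
```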
Posted 2 months ago
10.0 - 15.0 years
16 - 30 Lacs
Bengaluru
Hybrid
Role & responsibilities

As a Data Scientist, you will play a crucial role in analyzing complex data using statistical and machine learning models, providing valuable insights, and driving data-driven decision-making processes.
- Analyze complex data using statistical and machine learning models to derive actionable insights.
- Utilize Python for data analysis and visualization, and work with APIs, Linux OS, databases, big data technologies, and cloud services.
- Develop innovative solutions for natural language processing and generative modeling tasks using NLP, Generative AI, and LLMs.
- Collaborate with cross-functional teams to understand business requirements and translate them into data science solutions.
- Work in an Agile framework, participating in sprint planning, daily stand-ups, and retrospectives.
- Research, develop, and analyze computer vision algorithms in areas related to object detection, tracking, product identification and verification, and scene understanding.
- Ensure model robustness, generalization, accuracy, testability, and efficiency.
- Write product or system development code.
- Design and maintain data pipelines and workflows within Azure Databricks for optimal performance and scalability.
- Communicate findings and insights effectively to stakeholders through reports and visualizations.

Preferred candidate profile
- A Master's degree in Data Science, Statistics, Computer Science, or a related field.
- Over 5 years of proven experience in developing machine learning models, particularly for time series data within a financial context (a minimal sketch follows this listing).
- Advanced programming skills in Python or R, with extensive experience in libraries such as Pandas, NumPy, and Scikit-learn.
- Comprehensive knowledge of AI and LLM technologies, with a track record of developing applications and models.
- Proficiency in data visualization tools, such as Tableau, Power BI, or similar platforms.
- Exceptional analytical and problem-solving abilities, coupled with meticulous attention to detail.
- Superior communication skills, enabling the clear and concise presentation of complex findings.
- Extensive experience in Azure Databricks for data processing, model training, and deployment.
- Proficiency with Azure Data Lake and Azure SQL Database for data storage and management.
- Experience with Azure Machine Learning for model deployment and monitoring.
- In-depth understanding of Azure services and tools for data integration and orchestration.
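As a minimal sketch of the time-series modeling stack this profile lists (Pandas, Scikit-learn), with an invented lag-feature setup and dataset:

```python
# Hedged sketch: lag-feature time-series regression with time-ordered
# cross-validation. The dataset and feature choices are hypothetical.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import TimeSeriesSplit

df = pd.read_parquet("daily_metrics.parquet")  # hypothetical dataset
df["lag_1"] = df["value"].shift(1)             # yesterday's value as a feature
df["lag_7"] = df["value"].shift(7)             # same weekday last week
df = df.dropna()

X, y = df[["lag_1", "lag_7"]], df["value"]

# TimeSeriesSplit preserves temporal order, unlike a random train/test split.
for train_idx, test_idx in TimeSeriesSplit(n_splits=3).split(X):
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X.iloc[train_idx], y.iloc[train_idx])
    preds = model.predict(X.iloc[test_idx])
    print(mean_absolute_error(y.iloc[test_idx], preds))
```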
Posted 2 months ago
5.0 - 10.0 years
12 - 17 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

The ETL Developer is responsible for the design, development, and maintenance of various ETL processes. This includes designing and developing processes for various types of data, potentially large datasets and disparate data sources that require transformation and cleansing to become a usable data set. The candidate should also be able to find creative solutions to complex and diverse business requirements. The developer should have a solid working knowledge of programming languages, data analysis, design, and ETL tool sets. The ideal candidate must possess a solid background in data engineering development technologies, along with excellent written and verbal communication skills and the ability to collaborate effectively with business and technical experts in the team.

Primary Responsibility
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Graduate degree or equivalent experience
- 6+ years of development, administration and migration experience in Azure Databricks and Snowflake
- 6+ years of experience with data design/patterns - data warehousing, dimensional modeling and Lakehouse Medallion architecture
- 5+ years of experience working with Azure Data Factory
- 5+ years of experience in setting up, maintaining and using Azure services such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Azure SQL Database, etc.
- 5+ years of experience working with Python and PySpark
- 3+ years of experience with Kafka
- Excellent communication skills to effectively convey technical concepts to both technical and non-technical stakeholders

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 months ago
3.0 - 7.0 years
7 - 12 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Design, develop, and implement BI applications using Microsoft Azure, including Azure SQL Database, Azure Data Lake Storage, Azure Databricks, and Azure Blob Storage
- Manage the entire software development life cycle, encompassing requirements gathering, designing, coding, testing, deployment, and support
- Collaborate with cross-functional teams to define, design, and release new features
- Utilize CI/CD pipelines to automate deployment using Azure and DevOps tools
- Monitor application performance, identify bottlenecks, and devise solutions to address these issues
- Foster a positive team environment and skill development
- Write clean, maintainable, and efficient code that adheres to company standards and best practices
- Participate in code reviews to ensure code quality and share knowledge
- Troubleshoot complex software issues and provide timely solutions
- Engage in Agile/Scrum development processes and meetings
- Stay updated with the latest and emerging technologies in software development and incorporate new technologies into solution design as appropriate
- Proactively identify areas for improvement or enhancement in current architecture
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Bachelor's or Master's degree in CS, IT, or a related field with 4+ years of experience in software development
- 3+ years of experience in writing advanced-level SQL and PySpark code
- 3+ years of experience in Azure Databricks and Azure SQL
- 3+ years of experience in Azure Data Factory (ADF)
- Knowledge of advanced SQL, ETL, and visualization tools, along with data warehouse concepts
- Proficient in building enterprise-level data warehouse projects using Azure Databricks and ADF
- Proficient in the code versioning tool GitHub
- Proven excellent understanding of Agile methodologies
- Proven solid problem-solving skills with the ability to work independently and manage multiple tasks simultaneously
- Proven excellent interpersonal, written, and verbal communication skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 months ago
5.0 - 9.0 years
11 - 15 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

As a Senior Data Engineer at Optum you'll help us work on streamlining the flow of information and deliver insights to manage our various Data Analytics web applications which serve internal and external customers. This specific team is working on features such as OpenAI API integrations, working with customers to integrate disparate data sources into useable datasets, and configuring databases for our web application needs. Your work will contribute to lowering the overall cost of healthcare for our consumers and helping people live healthier lives.

Primary Responsibilities
- Data pipeline development: Develop and maintain data pipelines that extract, transform, and load (ETL) data from various sources into a centralized data storage system, such as a data warehouse or data lake. Ensure the smooth flow of data from source systems to destination systems while adhering to data quality and integrity standards
- Data integration: Integrate data from multiple sources and systems, including databases, APIs, log files, streaming platforms, and external data providers. Handle data ingestion, transformation, and consolidation to create a unified and reliable data foundation for analysis and reporting
- Data transformation and processing: Develop data transformation routines to clean, normalize, and aggregate data. Apply data processing techniques to handle complex data structures, handle missing or inconsistent data, and prepare the data for analysis, reporting, or machine learning tasks
- Maintain and enhance existing application databases to support our many Data Analytics web applications, as well as working with our web developers on new requirements and applications
- Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines
- Implement data governance in line with company standards
- Partner with Data Analytics and Product leaders to design best practices and standards for developing productional analytic pipelines
- Partner with Infrastructure leaders on architecture approaches to advance the data and analytics platform, including exploring new tools and techniques that leverage the cloud environment (Azure, Snowflake, others)
- Monitoring and support: Monitor data pipelines and data systems to detect and resolve issues promptly. Develop monitoring tools, alerts, and automated error handling mechanisms to ensure data integrity and system reliability
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

You will be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as providing development for other roles you may be interested in.

Required Qualifications
- Extensive hands-on experience in developing data pipelines that demonstrate a solid understanding of software engineering principles
- Proficiency in Python, in fulfilling multiple general-purpose use-cases, not limited to developing data APIs and pipelines
- Solid understanding of software engineering principles (micro-services applications and ecosystems)
- Fluent in SQL (Snowflake/SQL Server), with experience using window functions and more advanced features (see the sketch after this listing)
- Understanding of DevOps tools, Git workflow and building CI/CD pipelines
- Solid understanding of Airflow
- Proficiency in design and implementation of pipelines and stored procedures in SQL Server and Snowflake
- Demonstrated ability to work with business and technical audiences on business requirement meetings, technical whiteboarding exercises, and SQL coding or debugging sessions

Preferred Qualifications
- Bachelor's degree or higher in Database Management, Information Technology, Computer Science or similar
- Experience with Azure Data Factory or Apache Airflow
- Experience with Azure Databricks or Snowflake
- Experience working in projects with agile/scrum methodologies
- Experience with shell scripting languages
- Experience working with Apache Kafka, building appropriate producer or consumer apps
- Experience with production quality ML and/or AI model development and deployment
- Experience working with Kubernetes and Docker, and knowledgeable about cloud infrastructure automation and management (e.g., Terraform)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
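To make the window-function qualification concrete, here is a latest-record-per-key query run from Python via pyodbc against SQL Server; the connection string, table, and columns are placeholders, not details from the posting.

```python
# Hedged sketch: ROW_NUMBER() window function to keep each customer's most
# recent order. Connection string and schema are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;DATABASE=mydw;"
    "UID=etl_user;PWD=***"  # hypothetical credentials
)

query = """
    SELECT customer_id, order_id, order_date
    FROM (
        SELECT customer_id, order_id, order_date,
               ROW_NUMBER() OVER (
                   PARTITION BY customer_id ORDER BY order_date DESC
               ) AS rn
        FROM dbo.orders
    ) ranked
    WHERE rn = 1  -- keep only the newest row per customer
"""

for row in conn.cursor().execute(query):
    print(row.customer_id, row.order_id, row.order_date)
```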
Posted 2 months ago
3.0 - 8.0 years
8 - 13 Lacs
Noida
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities
- Cloud migration planning and execution: Assist in developing and implementing strategies for migrating ETL processes to cloud platforms like Azure. Participate in assessing the current infrastructure and creating a detailed migration roadmap
- ETL development and optimization: Design, develop, and optimize DataStage ETL jobs for cloud environments. Ensure data integrity and performance during the migration process
- Unix scripting and automation: Utilize Unix shell scripting to automate data processing tasks and manage ETL workflows. Implement and maintain scripts for data extraction, transformation, and loading
- Collaboration and coordination: Work closely with cloud architects, senior data engineers, and other stakeholders to ensure seamless integration and migration. Coordinate with IT security teams to ensure compliance with data privacy and security regulations
- Technical support and troubleshooting: Provide technical support during and after the migration to resolve any issues. Conduct testing and validation to ensure the accuracy and performance of migrated data
- Documentation and training: Maintain comprehensive documentation of the migration process, including data mappings, ETL workflows, and system configurations. Assist in training team members and end-users on new cloud-based ETL processes and tools
- Performance monitoring and optimization: Monitor the performance of ETL processes in the cloud and make necessary adjustments to optimize efficiency. Implement best practices for cloud resource management and cost optimization
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so

Required Qualifications
- Engineering graduate or equivalent experience
- 3+ years of relevant DataStage development experience
- 2+ years of experience in development/coding on Spark/Scala, Python, or PySpark
- 1+ years of experience working on Microsoft Azure Databricks
- Relevant experience with databases like Teradata and Snowflake
- Hands-on development experience in UNIX scripting
- Experience working on data warehousing projects
- Experience with Test Driven Development and Agile methodologies
- Sound knowledge of SQL programming and SQL query skills
- Proven ability to apply knowledge of principles and techniques to solve technical problems and write code based on technical design
- Proficient in learning and adopting new technologies and using them to execute use cases for business problem solving
- Exposure to job schedulers like Airflow and the ability to create and modify DAGs
- Proven solid communication skills (written and verbal)
- Proven ability to understand the existing application codebase, perform impact analysis and update the code when required based on the business logic or for optimization
- Proven exposure to DevOps methodology and creating CI/CD deployment pipelines
- Proven excellent analytical and communication skills (both verbal and written)

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
Posted 2 months ago
2.0 - 5.0 years
6 - 10 Lacs
Hyderabad
Work from Office
Primary Responsibility:
- Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
- Graduate degree or equivalent experience
- 8+ years of development, administration and migration experience in Azure Databricks and Snowflake
- 8+ years of experience with data design/patterns - data warehousing, dimensional modeling and Lakehouse Medallion architecture
- 5+ years of experience working with Azure Data Factory
- 5+ years of experience in setting up, maintaining and using Azure services such as Azure Data Factory, Azure Databricks, Azure Data Lake Storage, Azure SQL Database, etc.
- 5+ years of experience working with Python and PySpark
- 3+ years of experience with Kafka
- Excellent communication skills to effectively convey technical concepts to both technical and non-technical stakeholders
Posted 2 months ago
4.0 - 8.0 years
8 - 13 Lacs
Hyderabad
Work from Office
Optum is a global organization that delivers care, aided by technology to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together. Positions in this function are responsible for the management and manipulation of mostly structured data, with a focus on building business intelligence tools, conducting analysis to distinguish patterns and recognize trends, performing normalization operations and assuring data quality. Depending on the specific role and business line, example responsibilities in this function could include creating specifications to bring data into a common structure, creating product specifications and models, developing data solutions to support analyses, performing analysis, interpreting results, developing actionable insights and presenting recommendations for use across the company. Roles in this function could partner with stakeholders to understand data requirements and develop tools and models such as segmentation, dashboards, data visualizations, decision aids and business case analysis to support the organization. Other roles involved could include producing and managing the delivery of activity and value analytics to external stakeholders and clients. Team members will typically use business intelligence, data visualization, query, analytic and statistical software to build solutions, perform analysis and interpret data. Positions in this function work on predominately descriptive and regression-based analytics and tend to leverage subject matter expert views in the design of their analytics and algorithms. This function is not intended for employees performing the following workproduction of standard or self-service operational reporting, casual inference led (healthcare analytics) or data pattern recognition (data science) analysis; and/or image or unstructured data analysis using sophisticated theoretical frameworks. Generally work is self-directed and not prescribed. Primary Responsibilities Analyze business requirements & functional specifications Be able to determine the impact of changes in current functionality of the system Be able to handle SAS Algo dev assignments independently, end-to-end SDLC Interaction with diverse Business Partners and Technical Workgroups Drive Algo optimization and innovation in the team Be able to be flexible to collaborate with onshore business, during US business hours Be able to be flexible to support project releases, during US business hours Adherence to the defined delivery process/guidelines Drive project quality process compliance Works with less structured, more complex issues Serves as a resource to others Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regards to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). 
The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Bachelor's degree or equivalent experience
7+ years of working experience in Python, PySpark and Scala
5+ years of working experience in Azure Databricks
3+ years of experience working with MS SQL Server and NoSQL databases such as Cassandra
Hands-on experience with streaming applications (Kafka, Spark Streaming, etc.); a sketch of this pattern follows below
Solid healthcare domain knowledge
Exposure to DevOps methodology and creating CI/CD deployment pipelines
Exposure to Agile methodology, specifically using tools like Rally
Proven ability to understand the existing application codebase, perform impact analysis and update the code when required based on business logic or for optimization
Proven excellent analytical and communication skills (both verbal and written)
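For orientation, here is a minimal sketch of the streaming pattern this posting names: PySpark Structured Streaming consuming from Kafka into a Delta table on Azure Databricks. The broker address, topic name, schema and storage paths are hypothetical placeholders, not details from the role.

# Minimal sketch: consume Kafka events and land them in a Delta table.
# Broker, topic, schema and paths below are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json
from pyspark.sql.types import StructType, StructField, StringType, TimestampType

spark = SparkSession.builder.appName("member-events-stream").getOrCreate()

event_schema = StructType([
    StructField("member_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
       .option("subscribe", "member-events")               # placeholder topic
       .load())

# Kafka values arrive as bytes; parse the JSON payload into typed columns.
events = (raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
          .select("e.*"))

query = (events.writeStream
         .format("delta")
         .option("checkpointLocation", "/tmp/checkpoints/member-events")  # placeholder
         .outputMode("append")
         .start("/tmp/delta/member-events"))  # placeholder path

query.awaitTermination()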
Posted 2 months ago
3.0 - 7.0 years
11 - 15 Lacs
Gurugram
Work from Office
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data and resources they need to feel their best. Here, you will find a culture guided by inclusion, talented peers, comprehensive benefits and career development opportunities. Come make an impact on the communities we serve as you help us advance health optimization on a global scale. Join us to start Caring. Connecting. Growing together.

Primary Responsibilities:
Design, develop, and maintain scalable data/code pipelines using Azure Databricks, Apache Spark, and Scala
Collaborate with data engineers, data scientists, and business stakeholders to understand data requirements and deliver high-quality data solutions
Optimize and tune Spark applications for performance and scalability (a sketch of this kind of tuning follows below)
Implement data processing workflows, ETL processes, and data integration solutions
Ensure data quality, integrity, and security throughout the data lifecycle
Troubleshoot and resolve issues related to data processing and pipeline failures
Stay updated with the latest industry trends and best practices in big data technologies
Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

Required Qualifications:
Undergraduate degree or equivalent experience
6+ years of proven experience with Azure Databricks, Apache Spark, and Scala
6+ years of experience with Microsoft Azure
Experience with data warehousing solutions and ETL tools
Solid understanding of distributed computing principles and big data processing
Proficiency in writing complex SQL queries and working with relational databases
Proven excellent problem-solving skills and attention to detail
Solid communication and collaboration skills

At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes. We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes - an enterprise priority reflected in our mission.
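Since Spark tuning is a core responsibility here, below is a minimal illustrative sketch of two common techniques: broadcasting a small dimension table into a join (avoiding a shuffle of the large side) and repartitioning on the grouping key before a wide aggregation. All table and column names are hypothetical.

# Minimal PySpark tuning sketch; table and column names are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast, col, sum as spark_sum

spark = SparkSession.builder.appName("tuning-demo").getOrCreate()

claims = spark.read.table("claims_fact")      # hypothetical large fact table
providers = spark.read.table("provider_dim")  # hypothetical small dimension

# Broadcasting the small side ships it to every executor, so the large
# fact table is never shuffled for the join.
joined = claims.join(broadcast(providers), "provider_id")

# Repartitioning on the grouping key spreads the aggregation shuffle evenly.
totals = (joined.repartition(200, "provider_id")
          .groupBy("provider_id")
          .agg(spark_sum(col("paid_amount")).alias("total_paid")))

totals.write.mode("overwrite").saveAsTable("provider_paid_totals")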
Posted 2 months ago
4.0 - 7.0 years
8 - 15 Lacs
Hyderabad
Hybrid
We are seeking a highly motivated Senior Data Engineer or Data Engineer within Envoy Global's tech team to join us on a full-time, permanent basis. This role is responsible for designing, developing, and documenting data pipelines and ETL jobs to enable data migration, data integration and data warehousing; that includes ETL jobs, reports, dashboards and data pipelines. The person in this role will work closely with the Data Architect, BI & Analytics team and Engineering teams to deliver data assets for Data Security, DW and Analytics.

As our Senior Data Engineer or Data Engineer, you will be required to:
Design, build, test and maintain cloud-based data pipelines to acquire, profile, cleanse, consolidate, transform and integrate data
Design and develop ETL processes for the Data Warehouse lifecycle (staging of data, ODS data integration, EDW and data marts) and Data Security (data archival, data obfuscation, etc.); a sketch of the staging-to-ODS pattern follows below
Build complex SQL queries on large datasets and performance tune as needed
Design and develop data pipelines and ETL jobs using SSIS and Azure Data Factory
Maintain ETL packages and supporting data objects for our growing BI infrastructure
Carry out monitoring, tuning, and database performance analysis
Facilitate integration of our application with other systems by developing data pipelines
Prepare key documentation to support the technical design in technical specifications
Collaborate with other technical professionals (BI report developers, data analysts, architects)
Communicate clearly and effectively with stakeholders

To apply for this role, you should possess the following skills, experience and qualifications:
Design, develop, and document data pipelines and ETL jobs: create and maintain robust data pipelines and ETL (Extract, Transform, Load) processes to support data migration, integration, and warehousing
Data assets delivery: collaborate with Data Architects, BI & Analytics teams, and Engineering teams to deliver high-quality data assets for data security, data warehousing (DW), and analytics
ETL jobs, reports, dashboards, and data pipelines: develop and manage ETL jobs, generate reports, create dashboards, and ensure the smooth operation of data pipelines
3+ years of experience as an SSIS ETL developer, Data Engineer or a related role
2+ years of experience using Azure Data Factory
Knowledgeable in data modelling and data warehouse concepts
Experience working with the Azure stack
Demonstrated ability to write SQL/T-SQL queries to retrieve and modify data
Knowledge and know-how to troubleshoot potential issues, and experience with best practices around database operations
Ability to work in an Agile environment

Should you have a deep passion for technology and a desire to thrive in a rapidly evolving and creative environment, we would be delighted to receive your application. Please provide your updated resume, highlighting your relevant experience and the reasons you believe you would be a valuable member of our team. We look forward to reviewing your submission.
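The staging-to-ODS incremental load named above would normally live in SSIS or Azure Data Factory; purely as a language-neutral illustration, here is a minimal watermark-based sketch in Python with pyodbc driving T-SQL. The connection string, tables and columns are hypothetical placeholders, not Envoy's schema.

# Minimal watermark-based incremental load sketch (illustrative only).
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"  # placeholder
    "DATABASE=ods;UID=etl_user;PWD=***"                        # placeholder
)
cur = conn.cursor()

# Read the last successful watermark for this source table.
cur.execute("SELECT last_loaded_at FROM etl.watermarks WHERE source_table = ?",
            "staging.orders")
last_loaded_at = cur.fetchone()[0]

# Upsert only the rows changed since the watermark.
cur.execute("""
    MERGE ods.orders AS tgt
    USING (SELECT * FROM staging.orders WHERE modified_at > ?) AS src
        ON tgt.order_id = src.order_id
    WHEN MATCHED THEN UPDATE SET tgt.status = src.status,
                                 tgt.modified_at = src.modified_at
    WHEN NOT MATCHED THEN INSERT (order_id, status, modified_at)
         VALUES (src.order_id, src.status, src.modified_at);
""", last_loaded_at)

# Advance the watermark only after the merge succeeds.
cur.execute("""
    UPDATE etl.watermarks
    SET last_loaded_at = (SELECT MAX(modified_at) FROM staging.orders)
    WHERE source_table = ?""", "staging.orders")
conn.commit()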
Posted 2 months ago
7.0 - 9.0 years
25 - 35 Lacs
Chennai, Bengaluru
Hybrid
Warm greetings from Dataceria Software Solutions Pvt Ltd.

We are looking for: Senior Azure Data Engineer
Domain: BFSI
-------------------------------------------------------------------------------------------------------------------------------------------------
As a Senior Azure Data Engineer, you will play a pivotal role in bridging data engineering with front-end development. You will work closely with Data Scientists and UI Developers (React.js) to design, build, and secure data services that power a next-generation platform. This is a hands-on, collaborative role requiring deep experience across the Azure data ecosystem, API development, and modern DevOps practices.

Your responsibilities will include:
Building and maintaining scalable Azure data pipelines (ADF, Synapse, Databricks, DBT) to serve dynamic frontend interfaces
Creating API access layers to expose data to front-end applications and external services (see the sketch after this posting)
Collaborating with the Data Science team to operationalize models and insights
Working directly with React.js developers to support UI data integration
Ensuring data security, integrity, and monitoring across systems
Implementing and maintaining CI/CD pipelines for seamless deployment
Automating and managing cloud infrastructure using Terraform, Kubernetes, and Azure App Services
Supporting data migration initiatives from legacy infrastructure to modern platforms like Data Mesh
Refactoring legacy pipelines with code reuse, version control, and infrastructure-as-code best practices
Analyzing, mapping, and documenting financial data models across various systems

What we're looking for:
8+ years of experience in data engineering, with a strong focus on the Azure ecosystem (ADF, Synapse, Databricks, App Services)
Proven ability to develop and host secure, scalable REST APIs
Experience supporting cross-functional teams, especially front-end/UI and data science groups, is a plus
Hands-on experience with Terraform, Kubernetes (Azure AKS), CI/CD, and cloud automation
Strong expertise in ETL/ELT design, performance tuning, and pipeline monitoring
Solid command of Python and SQL, and optionally Scala, Java, or PowerShell
Knowledge of data security practices, governance, and compliance (e.g., GDPR)
Familiarity with big data tools (e.g., Spark, Kafka), version control (Git), and testing frameworks for data pipelines
Excellent communication skills and the ability to explain technical concepts to diverse stakeholders
----------------------------------------------------------------------------------------------------------------------------------------------
Joining: Immediate
Work location: Bangalore (hybrid), Chennai
Open positions: Senior Azure Data Engineer

If interested, please share your updated resume to careers@dataceria.com. We welcome applications from skilled candidates who are open to working in a hybrid model. Candidates with less experience but strong technical abilities are also encouraged to apply.
-----------------------------------------------------------------------------------------------------
Dataceria Software Solutions Pvt Ltd
Follow our LinkedIn for more job openings: https://www.linkedin.com/company/dataceria/
Email: careers@dataceria.com
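One responsibility above is building API access layers that expose warehouse data to a React frontend. Below is a minimal FastAPI sketch of such a layer; the endpoint path, helper function and demo rows are hypothetical stand-ins for a real Databricks or Synapse query.

# Minimal FastAPI sketch of a data access layer (illustrative only).
from typing import Optional
from fastapi import FastAPI, HTTPException

app = FastAPI(title="data-service-demo")

def fetch_account_summary(account_id: str) -> Optional[dict]:
    # Placeholder: a real implementation would query Databricks/Synapse here.
    demo = {"acct-1": {"account_id": "acct-1", "balance": 1250.75}}
    return demo.get(account_id)

@app.get("/accounts/{account_id}/summary")
def account_summary(account_id: str) -> dict:
    # Return a JSON summary for the frontend, or 404 if the account is unknown.
    row = fetch_account_summary(account_id)
    if row is None:
        raise HTTPException(status_code=404, detail="account not found")
    return row

Assuming the file is saved as app.py, it can be served locally with: uvicorn app:app --reload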
Posted 2 months ago
12.0 - 18.0 years
0 - 1 Lacs
Mumbai, Navi Mumbai, Mumbai (All Areas)
Work from Office
Greetings from 3i Infotech!!

PFB the JD for the Senior Technical Manager - Solutions Architect position - Navi Mumbai/Mumbai.

We are seeking a highly motivated and experienced Data & AI Leader to join our team. The ideal candidate will be responsible for leading and managing the delivery of multiple projects within the Data & AI domain. This role requires in-depth expertise in Azure data services, as well as the ability to effectively lead a team of data professionals.

Key Responsibilities:
Lead a team of data engineers, data scientists, and business analysts in the successful execution of Data & AI projects
Own the end-to-end delivery process, ensuring that projects are completed on time and within budget while maintaining high-quality standards
Collaborate with cross-functional teams, including business stakeholders, to gather requirements, define project scope, and set clear objectives
Design robust and scalable data solutions utilizing Power BI, Tableau, and Azure data services
Provide technical guidance and mentorship to team members, fostering a culture of continuous learning and development
Apply project management skills to plan, execute, and close projects, managing timelines, scope, and resources
Lead and coordinate cross-functional teams, facilitating communication and collaboration to achieve project goals
Client liaison: act as the primary point of contact for clients, addressing their needs and resolving any issues that arise
Ensure project deliverables meet quality standards and align with client requirements
Provide regular project updates and status reports to stakeholders and senior management
Stay up to date with industry trends and emerging technologies in the Data & AI space, and apply this knowledge to drive innovation within the team

Key Skills:
Bachelor's degree in Computer Science, Engineering, or a related field
Proven experience of 15+ years in total across Data, BI and Analytics, including 5+ years leading and managing Data & AI projects with a track record of successful project delivery
Expertise in Azure Data Fabric and Snowflake
Extensive experience with Azure data services, including but not limited to Azure Data Factory, Azure SQL Database, Azure Databricks, and Azure Synapse Analytics
Strong analytical and problem-solving skills, with the ability to design and implement complex data solutions
Excellent communication and leadership skills, with the ability to effectively collaborate with cross-functional teams
Proven ability to mentor and develop team members, fostering a culture of continuous improvement

Nice to Have: Microsoft Azure certifications

Please share your resume at silamkoti.saikiran@3i-infotech.com along with the below details:
C.CTC:
E.CTC:
Notice Period:

Note: We are looking for candidates who can join on short notice; if your profile is not suitable, we request you to share some references.

Regards,
Kiran
HRBP, 3i Infotech
Posted 2 months ago