18.0 - 23.0 years
15 - 19 Lacs
Hyderabad
Work from Office
About the Role
We are seeking a highly skilled and experienced Data Architect to join our team. The ideal candidate will have at least 18 years of experience in data engineering and analytics and a proven track record of designing and implementing complex data solutions. As a Senior Principal Data Architect, you will design, create, deploy, and manage Blackbaud's data architecture. This role carries considerable technical influence within the Data Platform and Data Engineering teams and the Data Intelligence Center of Excellence at Blackbaud. You will act as an evangelist for sound data strategy with other teams at Blackbaud and help set the technical direction, particularly around data, for other projects.

What you'll do
- Develop and direct the strategy for all aspects of Blackbaud's data and analytics platforms, products, and services
- Set, communicate, and facilitate technical direction for the AI Center of Excellence and, collaboratively, beyond it
- Design and develop breakthrough products, services, or technological advancements in the data intelligence space that expand our business
- Work alongside product management to craft technical solutions that solve customer business problems
- Own technical data governance practices, ensuring data sovereignty, privacy, security, and regulatory compliance
- Continuously challenge the status quo of how things have been done in the past
- Build a data access strategy to securely democratize data and enable research, modeling, machine learning, and artificial intelligence work
- Help define the tools and pipeline patterns our engineers and data engineers use to transform data and support our analytics practice
- Work in a cross-functional team to translate business needs into data architecture solutions
- Ensure data solutions are built for performance, scalability, and reliability
- Mentor junior data architects and team members
- Keep current on technology: distributed computing, big data concepts, and architecture
- Promote internally how data within Blackbaud can help change the world

What you'll bring
- 18+ years of experience in data and advanced analytics
- At least 8 years of experience working with data technologies in Azure/AWS
- Expertise in SQL and Python
- Expertise in SQL Server, Azure Data Services, and other Microsoft data technologies
- Expertise in Databricks and Microsoft Fabric
- Strong understanding of data modeling, data warehousing, data lakes, data mesh, and data products
- Experience with machine learning
- Excellent communication and leadership skills

Preferred Qualifications
- Experience working with .NET/Java and microservice architecture

Stay up to date on everything Blackbaud; follow us on LinkedIn, X, Instagram, Facebook, and YouTube. Blackbaud is a digital-first company that embraces a flexible remote or hybrid work culture. Blackbaud supports hiring and career development for all roles from the location you are in today!

Blackbaud is proud to be an equal opportunity employer and is committed to maintaining an inclusive work environment. All qualified applicants will receive consideration for employment without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, physical or mental disability, age, or veteran status, or any other basis protected by federal, state, or local law.
Posted 3 weeks ago
4.0 - 9.0 years
5 - 15 Lacs
Hyderabad, Chennai
Work from Office
Key skills (mandatory): Python, SQL, PySpark, Databricks, AWS
Added advantage: Life sciences/pharma experience

Roles and Responsibilities
1. Data Pipeline Development: Design, build, and maintain scalable data pipelines for ingesting, processing, and transforming large datasets from diverse sources into usable formats.
2. Data Integration and Transformation: Integrate data from multiple sources, ensuring data is accurately transformed and stored in optimal formats (e.g., Delta Lake, Redshift, S3).
3. Performance Optimization: Optimize data processing and storage systems for cost efficiency and high performance, including managing compute resources and cluster configurations.
4. Automation and Workflow Management: Automate data workflows using tools like Airflow, Databricks APIs, and other orchestration technologies to streamline data ingestion, processing, and reporting tasks.
5. Data Quality and Validation: Implement data quality checks, validation rules, and transformation logic to ensure the accuracy, consistency, and reliability of data.
6. Cloud Platform Management: Manage and optimize cloud infrastructure (AWS, Databricks) for data storage, processing, and compute resources, ensuring seamless data operations.
7. Migration and Upgrades: Lead migrations from legacy data systems to modern cloud-based platforms, ensuring smooth transitions and enhanced scalability.
8. Cost Optimization: Implement strategies for reducing cloud infrastructure costs, such as optimizing resource usage, setting up lifecycle policies, and automating cost alerts.
9. Data Security and Compliance: Ensure secure access to data by implementing IAM roles and policies, adhering to data security best practices, and enforcing compliance with organizational standards.
10. Collaboration and Support: Work closely with data scientists, analysts, and business teams to understand data requirements and provide support for data-related tasks.
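Point 5 above calls for data quality checks and validation rules. As a minimal sketch of that idea, here is a rule-based validator over plain Python dicts (a stand-in for a PySpark DataFrame; the rule names and pharma-flavored fields are invented for illustration):

```python
# Sketch only: rule-based data-quality validation on a list of dicts.
# In Databricks the same rules would typically become filter/when
# expressions or expectations; this shows the shape of the logic.

def validate_rows(rows, rules):
    """Split rows into (valid, rejected) according to named rules.

    rules: dict mapping rule name -> predicate over a row dict.
    Rejected rows are annotated with the names of the rules they failed.
    """
    valid, rejected = [], []
    for row in rows:
        failed = [name for name, check in rules.items() if not check(row)]
        if failed:
            rejected.append({**row, "_failed_rules": failed})
        else:
            valid.append(row)
    return valid, rejected

# Hypothetical rules for a batch of pharma trial records.
rules = {
    "non_null_id": lambda r: r.get("subject_id") is not None,
    "dose_positive": lambda r: isinstance(r.get("dose_mg"), (int, float))
                               and r["dose_mg"] > 0,
}

batch = [
    {"subject_id": "S001", "dose_mg": 50},
    {"subject_id": None, "dose_mg": 25},
    {"subject_id": "S003", "dose_mg": -5},
]
valid, rejected = validate_rows(batch, rules)
```

Keeping each rule as a small named predicate makes the checks easy to unit-test and to report on (the rejected rows carry the names of the rules they failed).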
Posted 3 weeks ago
10.0 - 16.0 years
40 - 70 Lacs
Bengaluru
Hybrid
Role & Responsibilities

Leadership & Strategy
- Lead and manage a team of 20 Data Scientists, ensuring high performance and continuous growth
- Define and implement the company's data science vision and strategy
- Work closely with global business leaders to identify opportunities where data science can drive business value
- Stay up to date with emerging trends in AI, ML, and advanced data analytics to drive innovation
- Ensure all agreed Key Performance Indicators (KPIs) are met in line with performance agreements, maintaining high standards of operational efficiency

Data Science
- Develop and deploy machine learning models, statistical algorithms, and predictive analytics solutions
- Ensure the scalability, reliability, and efficiency of data science models and pipelines
- Collaborate with engineering and data teams to improve data quality, availability, and infrastructure
- Participate in global meetings and workshops to review and develop strong processes and strategy for data science services in India
- Utilize advanced analytics to solve complex business problems and improve decision-making

Stakeholder Management
- Partner with cross-functional teams (engineering, operations) to translate business needs into data-driven solutions
- Communicate insights and recommendations to both technical and non-technical stakeholders
- Develop data-driven strategies to optimize customer experience, revenue, and operational efficiency
- Exhibit very strong local and global stakeholder management skills
- Support the development of proactive proposals and solutions for internal and external stakeholders

Team Development & Mentorship
- Recruit, train, and mentor Data Scientists, Data Engineers, and AI platform engineers, fostering a culture of innovation
- Support the HR partner in building a hiring strategy for niche skills
- Conduct performance evaluations, provide feedback, and set career development plans for team members
- Encourage best practices in coding, model deployment, and data governance
Desired Candidate Profile
- Bachelor's, Master's, or PhD in Data Science, Computer Science, Statistics, Mathematics, or a related field

Technical Skills
- 10-15 years of total experience, with at least 5 years in data science, analytics, AI/deep learning, NLP, and generative AI
- Strong proficiency in Python, R, SQL, and cloud platforms (AWS, GCP, or Azure)
- Expertise in machine learning frameworks (TensorFlow, PyTorch, scikit-learn)
- Experience with big data technologies (Spark, Hadoop, Databricks) and data engineering concepts
- Solid understanding of statistics, probability, and optimization techniques
- Experience with MLOps, model deployment, and productionizing ML models
- Industry experience in aviation, automotive, finance, healthcare, retail, or any IT/technology domain

Leadership & Business Acumen
- Proven experience in leading and managing data science teams
- Ability to translate business problems into data science solutions
- Strong communication and stakeholder management skills
- Experience in budget planning, resource allocation, and project management
- Proficiency in risk assessment and mitigation planning
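The statistics and optimization grounding asked for above can be shown in miniature: fitting y = a*x + b by ordinary least squares using the closed-form slope/intercept formulas, in pure Python (the data here is made up and chosen to lie exactly on a line):

```python
# Ordinary least squares for a single feature, closed form:
# slope = cov(x, y) / var(x), intercept = mean(y) - slope * mean(x).

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    b = my - a * mx
    return a, b

# These points lie exactly on y = 2x + 1, so the fit recovers a=2, b=1.
a, b = fit_line([1, 2, 3, 4], [3, 5, 7, 9])
```

In practice this is what `scikit-learn`'s `LinearRegression` does for one feature; the closed form makes the optimization objective (minimizing squared error) concrete.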
Posted 3 weeks ago
8.0 - 12.0 years
4 - 24 Lacs
Mumbai, Maharashtra, India
On-site
This role is for one of Weekday's clients
Min Experience: 8 years
Location: Mumbai
Job Type: Full-time

About the Role
We are looking for a highly skilled and results-driven Assistant Vice President - Data Engineering to join our Data & Analytics leadership team. In this role, you will be responsible for building, optimizing, and scaling data pipelines, architectures, and data processing systems. You will work closely with cross-functional teams to deliver robust, secure, and high-performing data solutions that support business intelligence, advanced analytics, and decision-making across the organization. This is a high-impact leadership position, ideal for someone who brings deep technical expertise in data engineering, especially around tools like Databricks, PySpark, and SQL, and can lead enterprise-grade data workflow development in a complex, fast-paced environment.

Key Responsibilities
- Team Leadership & Strategy: Lead a team of data engineers, ensuring best practices in data development, documentation, and collaboration. Contribute to strategic decisions related to data infrastructure, architecture, and tooling.
- Data Pipeline Development: Design and implement highly scalable and efficient data pipelines for both batch and real-time use cases using Databricks, PySpark, and other big data tools.
- ETL & Data Workflow Management: Build and manage robust ETL processes that support data transformation, cleansing, enrichment, and ingestion across multiple data sources and platforms.
- SQL Development: Write optimized, complex SQL queries for large datasets, ensuring data integrity and performance efficiency.
- Architecture & Integration: Collaborate with data architects to ensure scalable and secure integration of data systems across cloud and on-premise environments.
- Data Governance & Quality: Work closely with data governance and compliance teams to ensure adherence to data quality standards, security protocols, and regulatory guidelines.
- Collaboration & Stakeholder Engagement: Partner with business stakeholders, data scientists, and analysts to understand data requirements and deliver timely, accurate, and accessible data solutions.
- Continuous Improvement: Evaluate emerging technologies, recommend new tools, and lead proof-of-concept initiatives to continually improve data engineering capabilities.

Required Skills and Experience
- 8+ years of progressive experience in data engineering, including at least 2 years in a leadership or managerial role
- Proficiency in SQL with a deep understanding of query optimization and data modeling
- Hands-on experience with Databricks, PySpark, and big data processing frameworks
- Strong understanding of data workflows, orchestration tools, and data integration methodologies
- Experience working with cloud platforms such as Azure, AWS, or GCP
- Proven track record in developing scalable and maintainable data pipelines and workflows
- Strong communication skills and the ability to work effectively with both technical and non-technical stakeholders
- Familiarity with CI/CD for data pipelines, monitoring tools, and version control systems like Git

Preferred Qualifications
- Bachelor's or Master's degree in Computer Science, Engineering, or a related field
- Certification in Databricks, Azure Data Engineer, or similar platforms is a plus
- Exposure to data mesh or data lakehouse architecture is advantageous
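The ETL work described above benefits from keeping extract, transform, and load as separate, individually testable stages. A minimal, hypothetical skeleton (the field names and the 18% tax enrichment are invented; in production each stage would wrap a Databricks/PySpark job):

```python
# Sketch of an ETL pipeline with swappable stages, plain Python only.

def extract(source):
    # Stand-in for reading from a file, API, or table.
    return list(source)

def transform(rows):
    # Cleanse: drop rows with a missing amount.
    # Enrich: derive a tax column (illustrative 18% rate).
    return [{**r, "tax": round(r["amount"] * 0.18, 2)}
            for r in rows if r.get("amount") is not None]

def load(rows, sink):
    # Stand-in for writing to a warehouse table; returns rows loaded.
    sink.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract([
    {"id": 1, "amount": 100.0},
    {"id": 2, "amount": None},   # dropped by the cleansing step
])), warehouse)
```

Because each stage takes and returns plain data, the transform can be unit-tested without any I/O, which is the property CI/CD for data pipelines relies on.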
Posted 3 weeks ago
10.0 - 17.0 years
12 - 19 Lacs
Chennai, Bengaluru
Work from Office
Job Purpose:
We are seeking an experienced ADF Technical Architect with 10 to 17 years of proven expertise in data lakes, lakehouses, Synapse Analytics, Databricks, T-SQL, SQL Server, Synapse DB, and data warehousing. Prior experience as a technical architect, technical lead, senior data engineer, or similar is required, along with strong communication skills.

Key Responsibilities:
- Participate in data strategy and roadmap exercises, data architecture definition, business intelligence/data warehouse solution and platform selection, design and blueprinting, and implementation
- Lead other team members and provide technical leadership in all phases of a project, from discovery and planning through implementation and delivery
- Work on RFPs and RFQs
- Work through all stages of a data solution life cycle: analyze/profile data; create conceptual, logical, and physical data model designs; architect and design ETL, reporting, and analytics solutions
- Lead source-to-target mapping, define interface processes and standards, and implement those standards
- Perform root cause analysis and develop data remediation solutions
- Develop and implement proactive monitoring and alert mechanisms for data issues
- Collaborate with other workstream leads to ensure overall development stays in sync
- Identify risks and opportunities from potential logic and data issues within the data environment
- Guide, influence, and mentor junior members of the team
- Collaborate effectively with the onsite-offshore team and ensure day-to-day deliverables are met

Qualifications & Key Skills Required:
- Bachelor's degree and 10+ years of experience in the data and analytics area
- Demonstrated knowledge of modern data solutions such as Azure Data Fabric, Synapse Analytics, lakehouses, and data lakes
- Strong source-to-target mapping experience and ETL principles/knowledge
- Prior experience as a technical architect, technical lead, senior data engineer, or similar is required
- Excellent verbal and written communication skills
- Strong quantitative and analytical skills with accuracy and attention to detail
- Ability to work well independently with minimal supervision and manage multiple priorities
- Proven experience with Azure, AWS, GCP, OCI, and other modern technology platforms is required
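Source-to-target mapping, called out twice in this listing, is often implemented as a declarative mapping table that drives a rename/derive step. A small sketch with invented column names (`CUST_NO`, `CUST_NM`, `CTRY_CD` are hypothetical source fields):

```python
# Sketch: a declarative source-to-target mapping applied to one row.
# Target column -> (source column, transform to apply).
MAPPING = {
    "customer_id": ("CUST_NO", str.strip),
    "full_name":   ("CUST_NM", str.title),
    "country":     ("CTRY_CD", str.upper),
}

def map_row(source_row):
    """Produce a target row from a source row using MAPPING."""
    return {tgt: fn(source_row[src]) for tgt, (src, fn) in MAPPING.items()}

out = map_row({"CUST_NO": " 42 ", "CUST_NM": "asha rao", "CTRY_CD": "in"})
```

Keeping the mapping as data rather than code means the same table can be reviewed with business stakeholders, versioned, and used to generate documentation, which is the "define interface process and standards" part of the role.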
Posted 3 weeks ago
5.0 - 7.0 years
6 - 10 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
- More than 5 years of experience in data modeling: designing, implementing, and maintaining data models to support data quality, performance, and scalability
- Proven experience as a data modeler, working with data analysts, data architects, and business stakeholders to ensure data models align with business requirements
- Expertise in data modeling tools (ER/Studio, Erwin), IDM/ARDM models, and CPG/Manufacturing/Sales/Finance/Supplier/Customer domains
- Experience with at least one MPP database technology such as Databricks Lakehouse, Redshift, Synapse, Teradata, or Snowflake
- Experience with version control systems like GitHub and deployment & CI tools
- Experience with metadata management, data lineage, and data glossaries is a plus
- Working knowledge of agile development, including DevOps and DataOps concepts
- Working knowledge of SAP data models, particularly in the context of HANA and S/4HANA, and retail data such as IRI and Nielsen

Must have: Data modeling, data modeling tool experience, SQL
Nice to have: SAP HANA, data warehousing, Databricks, CPG
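The dimensional-modeling side of this role can be sketched in a few lines: splitting denormalized CPG sales rows into a product dimension with surrogate keys and a fact table that references it. The field names are invented for the example:

```python
# Sketch: derive a star schema (one dimension + one fact table) from
# denormalized rows, assigning surrogate keys as new products appear.

def build_star_schema(sales_rows):
    product_dim, fact_sales = {}, []
    for row in sales_rows:
        natural_key = row["product_code"]
        if natural_key not in product_dim:
            product_dim[natural_key] = {
                "product_sk": len(product_dim) + 1,  # surrogate key
                "product_code": natural_key,
                "product_name": row["product_name"],
            }
        fact_sales.append({
            "product_sk": product_dim[natural_key]["product_sk"],
            "qty": row["qty"],
            "amount": row["amount"],
        })
    return list(product_dim.values()), fact_sales

rows = [
    {"product_code": "P1", "product_name": "Soap", "qty": 2, "amount": 80},
    {"product_code": "P1", "product_name": "Soap", "qty": 1, "amount": 40},
    {"product_code": "P2", "product_name": "Shampoo", "qty": 3, "amount": 300},
]
dim, fact = build_star_schema(rows)
```

Tools like ER/Studio and Erwin capture exactly this separation (conformed dimensions, surrogate vs. natural keys) at the design level; the sketch shows what the resulting load logic looks like.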
Posted 3 weeks ago
5.0 - 10.0 years
15 - 30 Lacs
Noida, Hyderabad, Delhi / NCR
Work from Office
Job Role: Azure Data Engineer
Location: Greater Noida & Hyderabad
Experience: 5 to 10 years
Notice Period: Immediate to 30 days

Job Description:
- Bachelor's degree in Computer Science, Computer Engineering, Technology, Information Systems (CIS/MIS), Engineering, or a related technical discipline, or equivalent experience/training
- Several years of software solution development using agile and DevOps, operating in a product model that includes designing, developing, and implementing large-scale applications or data engineering solutions
- 3 years of data engineering experience using SQL
- 2 years of cloud development (Microsoft Azure preferred), including Azure Event Hubs, Azure Data Factory, Azure Databricks, Azure DevOps, Azure Blob Storage, Azure Power Apps, and Power BI
- A combination of development, administration, and support experience in several of the following tools/platforms is required:
  a. Scripting: Python, PySpark, Unix, SQL
  b. Data platforms: Teradata, SQL Server
  c. Azure Data Explorer (administration skills are a plus)
  d. Azure cloud technologies: Azure Data Factory, Azure Databricks, Azure Blob Storage, Azure Power Apps, and Azure Functions
  e. CI/CD: GitHub, Azure DevOps, Terraform
Posted 3 weeks ago
1.0 - 4.0 years
10 - 14 Lacs
Pune
Work from Office
Overview
Design, develop, and maintain data pipelines and ETL/ELT processes using PySpark, Databricks, BigQuery, and Airflow/Cloud Composer. Optimize performance for large datasets through techniques such as partitioning, indexing, and Spark optimization. Collaborate with cross-functional teams to resolve technical issues and gather requirements.

Responsibilities
- Ensure data quality and integrity through data validation and cleansing processes
- Analyze existing SQL queries, functions, and stored procedures for performance improvements
- Develop database routines such as procedures, functions, and views/materialized views
- Participate in data migration projects and understand technologies like Delta Lake, data warehouses, and BigQuery
- Debug and solve complex problems in data pipelines and processes

Qualifications
- Bachelor's degree in Computer Science, Engineering, or a related field
- Strong understanding of distributed data processing platforms like Databricks and BigQuery
- Proficiency in Python, PySpark, and SQL
- Experience with performance optimization for large datasets
- Strong debugging and problem-solving skills
- Fundamental knowledge of cloud services, preferably Azure or GCP
- Excellent communication and teamwork skills

Nice to have:
- Experience in data migration projects
- Understanding of technologies like Delta Lake and data warehouses

What we offer you
- Transparent compensation schemes and comprehensive employee benefits, tailored to your location, ensuring your financial security, health, and overall wellbeing
- Flexible working arrangements, advanced technology, and collaborative workspaces
- A culture of high performance and innovation where we experiment with new ideas and take responsibility for achieving results
- A global network of talented colleagues who inspire, support, and share their expertise to innovate and deliver for our clients
- Global Orientation program to kickstart your journey, followed by access to our Learning@MSCI platform, LinkedIn Learning Pro, and tailored learning opportunities for ongoing skills development
- Multi-directional career paths that offer professional growth and development through new challenges, internal mobility, and expanded roles
- An environment that actively nurtures a sense of inclusion, belonging, and connection, including eight Employee Resource Groups: All Abilities, Asian Support Network, Black Leadership Network, Climate Action Network, Hola! MSCI, Pride & Allies, Women in Tech, and Women's Leadership Forum

At MSCI we are passionate about what we do, and we are inspired by our purpose: to power better investment decisions. You'll be part of an industry-leading network of creative, curious, and entrepreneurial pioneers. This is a space where you can challenge yourself, set new standards, and perform beyond expectations for yourself, our clients, and our industry.

MSCI is a leading provider of critical decision support tools and services for the global investment community. With over 50 years of expertise in research, data, and technology, we power better investment decisions by enabling clients to understand and analyze key drivers of risk and return and confidently build more effective portfolios. We create industry-leading research-enhanced solutions that clients use to gain insight into and improve transparency across the investment process.

MSCI Inc. is an equal opportunity employer. It is the policy of the firm to ensure equal employment opportunity without discrimination or harassment on the basis of race, color, religion, creed, age, sex, gender, gender identity, sexual orientation, national origin, citizenship, disability, marital and civil partnership/union status, pregnancy (including unlawful discrimination on the basis of a legally protected parental leave), veteran status, or any other characteristic protected by law.
MSCI is also committed to working with and providing reasonable accommodations to individuals with disabilities. If you are an individual with a disability and would like to request a reasonable accommodation for any part of the application process, please email Disability.Assistance@msci.com and indicate the specifics of the assistance needed. Please note, this e-mail is intended only for individuals who are requesting a reasonable workplace accommodation; it is not intended for other inquiries.

To all recruitment agencies: MSCI does not accept unsolicited CVs/resumes. Please do not forward CVs/resumes to any MSCI employee, location, or website. MSCI is not responsible for any fees related to unsolicited CVs/resumes.

Note on recruitment scams: We are aware of recruitment scams where fraudsters impersonating MSCI personnel may try to elicit personal information from job seekers. Read our full note on careers.msci.com.
Posted 3 weeks ago
12.0 - 20.0 years
35 - 40 Lacs
Navi Mumbai
Work from Office
Job Title: Big Data Developer - Project Support & Mentorship

Position Overview:
We are seeking a skilled Big Data Developer to join our growing delivery team, with a dual focus on hands-on project support and mentoring junior engineers. This role is ideal for a developer who not only thrives in a technical, fast-paced environment but is also passionate about coaching and developing the next generation of talent. You will work on live client projects, provide technical support, contribute to solution delivery, and serve as a go-to technical mentor for less experienced team members.

Key Responsibilities:
- Perform hands-on Big Data development work, including coding, testing, troubleshooting, and deploying solutions
- Support ongoing client projects, addressing technical challenges and ensuring smooth delivery
- Collaborate with junior engineers to guide them on coding standards, best practices, debugging, and project execution
- Review code and provide feedback to junior engineers to maintain high-quality, scalable solutions
- Assist in designing and implementing solutions using Hadoop, Spark, Hive, HDFS, and Kafka
- Lead by example in object-oriented development, particularly using Scala and Java
- Translate complex requirements into clear, actionable technical tasks for the team
- Contribute to the development of ETL processes for integrating data from various sources
- Document technical approaches, best practices, and workflows for knowledge sharing within the team

Required Skills and Qualifications:
- 8+ years of professional experience in Big Data development and engineering
- Strong hands-on expertise with Hadoop, Hive, HDFS, Apache Spark, and Kafka
- Solid object-oriented development experience with Scala and Java
- Strong SQL skills with experience working with large data sets
- Practical experience designing, installing, configuring, and supporting Big Data clusters
- Deep understanding of ETL processes and data integration strategies
- Proven experience mentoring or supporting junior engineers in a team setting
- Strong problem-solving, troubleshooting, and analytical skills
- Excellent communication and interpersonal skills

Preferred Qualifications:
- Professional certifications in Big Data technologies (Cloudera, Databricks, AWS Big Data Specialty, etc.)
- Experience with cloud Big Data platforms (AWS EMR, Azure HDInsight, or GCP Dataproc)
- Exposure to Agile or DevOps practices in Big Data project environments

What We Offer:
- Opportunity to work on challenging, high-impact Big Data projects
- Leadership role in shaping and mentoring the next generation of engineers
- Supportive and collaborative team culture
- Flexible working environment
- Competitive compensation and professional growth opportunities
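A core idea behind the Hadoop/Spark stack named above is hash partitioning: routing each record to a bucket by a hash of its key so work can be distributed and co-located. A toy illustration in pure Python (not the Spark API, just the principle):

```python
# Toy hash partitioner: deterministic bucket assignment by key.
import zlib

def partition_by_key(records, key, num_partitions):
    """Assign each record to a bucket based on a stable hash of its key."""
    buckets = [[] for _ in range(num_partitions)]
    for rec in records:
        # crc32 is stable across runs and processes, unlike Python's
        # built-in hash() which is salted per interpreter.
        bucket = zlib.crc32(str(rec[key]).encode()) % num_partitions
        buckets[bucket].append(rec)
    return buckets

events = [{"user": f"u{i}", "value": i} for i in range(100)]
buckets = partition_by_key(events, "user", 4)
```

The determinism matters: all records for the same key always land in the same bucket, which is what makes per-key aggregations and joins possible without shuffling every record everywhere.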
Posted 3 weeks ago
10.0 - 17.0 years
9 - 19 Lacs
Bengaluru
Remote
Azure Data Engineer

Skills required: Azure Data Engineer, Big Data, Hadoop. Develop and maintain data pipelines using Azure services such as Data Factory, PySpark, Synapse, Databricks, and Spark/Scala.
Posted 3 weeks ago
7.0 - 12.0 years
18 - 33 Lacs
Navi Mumbai
Work from Office
About Us:
Celebal Technologies is a leading solution and services company in the fields of data science, big data, enterprise cloud, and automation. We are at the forefront of leveraging cutting-edge technologies to drive innovation and enhance our business processes. As part of our commitment to staying ahead in the industry, we are seeking a talented and experienced Data & AI Engineer with strong Azure cloud competencies to join our dynamic team.

Job Summary:
We are looking for a highly skilled Azure Data Engineer with a strong background in real-time and batch data ingestion and big data processing, particularly using Kafka and Databricks. The ideal candidate will have a deep understanding of streaming architectures, Medallion data models, and performance optimization techniques in cloud environments. This role requires hands-on technical expertise, including live coding during the interview process.

Key Responsibilities
- Design and implement streaming data pipelines integrating Kafka with Databricks using Structured Streaming
- Architect and maintain a Medallion Architecture with well-defined Bronze, Silver, and Gold layers
- Implement efficient ingestion using Databricks Auto Loader for high-throughput data loads
- Work with large volumes of structured and unstructured data, ensuring high availability and performance
- Apply performance tuning techniques such as partitioning, caching, and cluster resource optimization
- Collaborate with cross-functional teams (data scientists, analysts, business users) to build robust data solutions
- Establish best practices for code versioning, deployment automation, and data governance
Required Technical Skills:
- Strong expertise in Azure Databricks and Spark Structured Streaming, including output modes (append, update, complete), checkpointing, and state management
- Experience with Kafka integration for real-time data pipelines
- Deep understanding of the Medallion Architecture
- Proficiency with Databricks Auto Loader and schema evolution
- Deep understanding of Unity Catalog and foreign catalogs
- Strong knowledge of Spark SQL, Delta Lake, and DataFrames
- Expertise in performance tuning (query optimization, cluster configuration, caching strategies)
- Data management strategies, with strong governance and access management skills
- Strong data modeling and data warehousing concepts, and Databricks as a platform
- Solid understanding of window functions
- Proven experience with merge/upsert logic, implementing SCD Type 1 and Type 2, and handling CDC (Change Data Capture) scenarios
- Industry expertise in at least one of retail, telecom, or energy
- Real-time use-case execution and data modeling experience
Posted 3 weeks ago
10.0 - 14.0 years
30 - 45 Lacs
Noida, Hyderabad, Gurugram
Work from Office
Description
Optum is a global organization that delivers care, aided by technology, to help millions of people live healthier lives. The work you do with our team will directly improve health outcomes by connecting people with the care, pharmacy benefits, data, and resources they need to feel their best. Here, you will find a culture guided by diversity and inclusion, talented peers, comprehensive benefits, and career development opportunities. Come make an impact on the communities we serve as you help us advance health equity on a global scale. Join us to start Caring. Connecting. Growing together.

As a Senior Data Engineer at Optum, you'll help us streamline the flow of information and deliver insights to manage our various data analytics web applications, which serve internal and external customers. This team works on features such as OpenAI API integrations, working with customers to integrate disparate data sources into usable datasets, and configuring databases for our web application needs. Your work will contribute to lowering the overall cost of healthcare for our consumers and helping people live healthier lives.

Primary Responsibilities:
- Data Pipeline Development: Develop and maintain data pipelines that extract, transform, and load (ETL) data from various sources into a centralized data storage system, such as a data warehouse or data lake. Ensure the smooth flow of data from source systems to destination systems while adhering to data quality and integrity standards
- Data Integration: Integrate data from multiple sources and systems, including databases, APIs, log files, streaming platforms, and external data providers. Handle data ingestion, transformation, and consolidation to create a unified and reliable data foundation for analysis and reporting
- Data Transformation and Processing: Develop data transformation routines to clean, normalize, and aggregate data.
Apply data processing techniques to handle complex data structures, handle missing or inconsistent data, and prepare the data for analysis, reporting, or machine learning tasks
- Maintain and enhance existing application databases to support our many data analytics web applications, and work with our web developers on new requirements and applications
- Contribute to common frameworks and best practices in code development, deployment, and automation/orchestration of data pipelines
- Implement data governance in line with company standards
- Partner with Data Analytics and Product leaders to design best practices and standards for developing production analytic pipelines
- Partner with Infrastructure leaders on architecture approaches to advance the data and analytics platform, including exploring new tools and techniques that leverage the cloud environment (Azure, Snowflake, and others)
- Monitoring and Support: Monitor data pipelines and data systems to detect and resolve issues promptly. Develop monitoring tools, alerts, and automated error handling mechanisms to ensure data integrity and system reliability

Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies in regard to flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary, or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so.

You will be rewarded and recognized for your performance in an environment that will challenge you and give you clear direction on what it takes to succeed in your role, as well as providing development for other roles you may be interested in.
Qualifications
Required Qualifications:
Extensive hands-on experience in developing data pipelines that demonstrates a solid understanding of software engineering principles
Proficiency in Python across multiple general-purpose use cases, including but not limited to developing data APIs and pipelines
Solid understanding of software engineering principles (micro-services applications and ecosystems)
Fluency in SQL (Snowflake/SQL Server), with experience using window functions and more advanced features
Understanding of DevOps tools, Git workflow and building CI/CD pipelines
Solid understanding of Airflow
Proficiency in design and implementation of pipelines and stored procedures in SQL Server and Snowflake
Demonstrated ability to work with business and technical audiences in business requirement meetings, technical whiteboarding exercises, and SQL coding or debugging sessions
Preferred Qualifications:
Bachelor's Degree or higher in Database Management, Information Technology, Computer Science or similar
Experience with Azure Data Factory or Apache Airflow
Experience with Azure Databricks or Snowflake
Experience working in projects with agile/scrum methodologies
Experience with shell scripting languages
Experience working with Apache Kafka, building appropriate producer or consumer apps
Experience with production-quality ML and/or AI model development and deployment
Experience working with Kubernetes and Docker, and knowledge of cloud infrastructure automation and management (e.g., Terraform)
At UnitedHealth Group, our mission is to help people live healthier lives and make the health system work better for everyone. We believe everyone - of every race, gender, sexuality, age, location and income - deserves the opportunity to live their healthiest life. Today, however, there are still far too many barriers to good health which are disproportionately experienced by people of color, historically marginalized groups and those with lower incomes.
We are committed to mitigating our impact on the environment and enabling and delivering equitable care that addresses health disparities and improves health outcomes — an enterprise priority reflected in our mission.
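The ETL responsibilities this posting describes follow the classic extract-transform-load shape. As a minimal, purely illustrative sketch (in-memory lists stand in for source systems and the warehouse; all record and function names are hypothetical, not from any Optum system):

```python
# Toy ETL: extract raw rows, enforce a basic quality rule, normalize values,
# and load into a destination "table". Illustrative names only.

def extract(source_rows):
    """Pull raw rows from a source system (here, a plain list)."""
    return list(source_rows)

def transform(rows):
    """Clean and normalize: drop rows missing an id, standardize casing."""
    cleaned = []
    for row in rows:
        if row.get("member_id") is None:
            continue  # data-quality rule: reject rows without an id
        cleaned.append({
            "member_id": row["member_id"],
            "plan": str(row.get("plan", "unknown")).strip().lower(),
        })
    return cleaned

def load(rows, warehouse):
    """Append validated rows to the destination table; return row count."""
    warehouse.setdefault("members", []).extend(rows)
    return len(rows)

warehouse = {}
raw = [
    {"member_id": 1, "plan": "  Gold "},
    {"member_id": None, "plan": "Silver"},  # rejected by the quality check
    {"member_id": 2},                       # missing plan -> defaulted
]
loaded = load(transform(extract(raw)), warehouse)
print(loaded)  # 2
```

In a real pipeline the same three stages would be backed by connectors, a distributed engine, and a warehouse, but the quality-gate-then-normalize-then-load flow is the part the posting is asking candidates to have internalized.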
Posted 3 weeks ago
7.0 - 10.0 years
20 - 35 Lacs
Pune
Hybrid
At Medtronic you can begin a life-long career of exploration and innovation, while helping champion healthcare access and equity for all. You'll lead with purpose, breaking down barriers to innovation in a more connected, compassionate world. A Day in the Life Our Global Diabetes Capability Center in Pune is expanding to serve more people living with diabetes globally. Our state-of-the-art facility is dedicated to transforming diabetes management through innovative solutions and technologies that reduce the burden of living with diabetes. We're a mission-driven leader in medical technology and solutions with a legacy of integrity and innovation. Join our new Minimed India Hub as a Senior Digital Engineer. Responsibilities may include the following, and other duties may be assigned: Translate conceptual needs and business requirements into finalized architectural designs. Manage large projects or processes that span other collaborative teams, both within and beyond Digital Technology. Operate autonomously to define, describe, diagram and document the role and interaction of the high-level technological and human components that combine to provide cost-effective and innovative solutions to meet evolving business needs. Promote, guide and govern good architectural practice through the application of well-defined, proven technology and human interaction patterns and through architecture mentorship. Design, develop, and maintain scalable data pipelines, preferably using PySpark. Work with structured and unstructured data from various sources. Optimize and tune PySpark applications for performance and scalability. Support the full lifecycle management of the entire IT portfolio, including the selection, appropriate usage, enhancement and replacement of information technology applications, infrastructure and services. Implement data quality checks and ensure data integrity.
Monitor and troubleshoot data pipeline issues and ensure timely resolution. Document technical specifications and maintain comprehensive documentation for data pipelines. The ideal candidate is exposed to the fast-paced world of Big Data technology and has experience building ETL/ELT data solutions using new and emerging technologies while maintaining stability of the platform. Required Knowledge and Experience: Strong programming knowledge in Java, Scala, or Python/PySpark, plus SQL. 4-8 years of experience in data engineering, with a focus on PySpark. Proficiency in Python and Spark, with strong coding and debugging skills. Experience in designing and building enterprise data solutions on AWS, Azure, or Google Cloud Platform (GCP). Experience with big data technologies such as Hadoop, Hive, and Kafka. Strong knowledge of SQL and experience with relational databases (e.g., PostgreSQL, MySQL, SQL Server). Experience with data warehousing solutions like Redshift, Snowflake, Databricks or Google BigQuery. Familiarity with data lake architectures and data storage solutions. Knowledge of CI/CD pipelines and version control systems (e.g., Git). Excellent problem-solving skills and the ability to troubleshoot complex issues. Strong communication and collaboration skills, with the ability to work effectively in a team environment. Physical Job Requirements The above statements are intended to describe the general nature and level of work being performed by employees assigned to this position, but they are not an exhaustive list of all the required responsibilities and skills of this position. Regards, Ashwini Ukekar Sourcing Specialist
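The "implement data quality checks and ensure data integrity" responsibility above can be illustrated with a small rule-based validator. This is a hedged sketch: field names, rules, and the glucose example are invented for illustration, not taken from any Medtronic system.

```python
# Validate a batch of rows against simple rules and report each failure
# as (row_index, field, reason). Rules shown: required fields must be
# present and non-empty; numeric fields must be non-negative.

def check_quality(rows, required_fields, non_negative_fields=()):
    failures = []
    for i, row in enumerate(rows):
        for f in required_fields:
            if row.get(f) in (None, ""):
                failures.append((i, f, "missing"))
        for f in non_negative_fields:
            v = row.get(f)
            if isinstance(v, (int, float)) and v < 0:
                failures.append((i, f, "negative"))
    return failures

batch = [
    {"device_id": "A1", "glucose": 110},
    {"device_id": "", "glucose": 95},   # fails the required-field rule
    {"device_id": "B2", "glucose": -5}, # fails the non-negative rule
]
issues = check_quality(batch,
                       required_fields=["device_id"],
                       non_negative_fields=["glucose"])
print(issues)  # [(1, 'device_id', 'missing'), (2, 'glucose', 'negative')]
```

Returning structured failure records, rather than raising on the first bad row, is what lets a pipeline quarantine bad data and keep processing the rest.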
Posted 3 weeks ago
7.0 - 11.0 years
20 - 30 Lacs
Hyderabad
Hybrid
Primary Responsibilities: Design, code, test, document, and maintain high-quality and scalable data pipelines/solutions in the cloud Work in both dev and ops, and be open to working in ops with flexible timings Ingest and transform data using a variety of technologies from a variety of sources (APIs, streaming, files, databases) Develop reusable patterns and encourage innovation that will increase the team's velocity Design and develop applications in an agile environment, deploying using CI/CD Participate in prototypes as well as design and code reviews; own or assist with incident and problem management Self-starter who can learn things quickly, who is enthusiastic and actively engaged Comply with the terms and conditions of the employment contract, company policies and procedures, and any and all directives (such as, but not limited to, transfer and/or re-assignment to different work locations, change in teams and/or work shifts, policies regarding flexibility of work benefits and/or work environment, alternative work arrangements, and other decisions that may arise due to the changing business environment). The Company may adopt, vary or rescind these policies and directives in its absolute discretion and without any limitation (implied or otherwise) on its ability to do so Required Qualifications: Bachelor's degree in a technical domain. Required experience with the following: Databricks, Python, Spark, PySpark, SQL, Azure Data Factory Design and implementation of a data warehouse/data lake (Databricks/Snowflake) Data architecture, data modelling Operations processes, reporting from operations, incident resolutions GitHub Actions/Jenkins or a similar CI/CD tool, cloud CI/CD, GitHub NoSQL and relational databases Preferred Qualifications: Experience or knowledge in Apache Kafka Experience or knowledge in data ingestion from a variety of APIs Working in an Agile/Scrum environment
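"Develop reusable patterns" for ingesting from APIs, streams, files, and databases often amounts to a single dispatch point that maps a source kind to a reader. A toy sketch under that assumption; the reader names, config keys, and URL are hypothetical:

```python
# Registry-based ingestion: each source kind registers a reader function,
# and ingest() dispatches on the config's "kind" field. Adding a new
# source type means adding one decorated function, not editing ingest().

READERS = {}

def reader(kind):
    """Decorator that registers a reader under a source kind."""
    def register(fn):
        READERS[kind] = fn
        return fn
    return register

@reader("file")
def read_file(cfg):
    # A real reader would open cfg["path"]; here we just echo it.
    return [{"src": "file", "path": cfg["path"]}]

@reader("api")
def read_api(cfg):
    # A real reader would call cfg["url"]; the URL below is a placeholder.
    return [{"src": "api", "endpoint": cfg["url"]}]

def ingest(source_cfg):
    fn = READERS[source_cfg["kind"]]
    return fn(source_cfg)

rows = ingest({"kind": "api", "url": "https://example.invalid/v1/data"})
print(rows[0]["src"])  # api
```

The same shape scales to streaming and database sources by registering readers for those kinds; it is one common way teams keep ingestion code uniform across heterogeneous sources.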
Posted 3 weeks ago
6.0 - 11.0 years
16 - 30 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Job Description: Data Architect - Azure with MS Fabric Location: Pune/Bangalore/Hyderabad Experience: 6+ Years Job Summary: We're seeking an experienced Data Engineer Lead to architect, design, and implement data solutions using Microsoft Fabric. The successful candidate will lead a team of data engineers, collaborating with stakeholders to deliver scalable, efficient, and reliable data pipelines. Strong technical expertise in MS Fabric, data modeling, and data warehousing is required.
Key Responsibilities:
Design and implement data solutions using MS Fabric, including data pipelines, data warehouses, and data lakes
Lead and mentor a team of data engineers, providing technical guidance and oversight
Collaborate with stakeholders to understand data requirements and deliver data-driven solutions
Develop and maintain large-scale data systems, ensuring data quality, integrity, and security
Troubleshoot data pipeline issues and optimize data workflows for performance and scalability
Stay up-to-date with MS Fabric features and best practices, applying knowledge to improve data solutions
Requirements:
5+ years of experience in data engineering, with expertise in MS Fabric, Azure Data Factory, or similar technologies
Strong programming skills in languages like Python, SQL, or C#
Experience with data modeling, data warehousing, and data governance
Excellent problem-solving skills, with the ability to troubleshoot complex data pipeline issues
Strong communication and leadership skills, with experience leading teams
Posted 3 weeks ago
7.0 - 12.0 years
3 - 7 Lacs
Gurugram
Work from Office
AHEAD builds platforms for digital business. By weaving together advances in cloud infrastructure, automation and analytics, and software delivery, we help enterprises deliver on the promise of digital transformation. At AHEAD, we prioritize creating a culture of belonging, where all perspectives and voices are represented, valued, respected, and heard. We create spaces to empower everyone to speak up, make change, and drive the culture at AHEAD. We are an equal opportunity employer, and do not discriminate based on an individual's race, national origin, color, gender, gender identity, gender expression, sexual orientation, religion, age, disability, marital status, or any other protected characteristic under applicable law, whether actual or perceived. We embrace all candidates that will contribute to the diversification and enrichment of ideas and perspectives at AHEAD. AHEAD is looking for a Sr. Data Engineer (L3 support) to work closely with our dynamic project teams (both on-site and remotely). This Data Engineer will be responsible for hands-on engineering of data platforms that support our clients' advanced analytics, data science, and other data engineering initiatives. This consultant will build and support modern data environments that reside in the public cloud or multi-cloud enterprise architectures. The Data Engineer will have responsibility for working on a variety of data projects. This includes orchestrating pipelines using modern Data Engineering tools/architectures as well as design and integration of existing transactional processing systems. The appropriate candidate must be a subject matter expert in managing data platforms. Responsibilities: A Sr.
Data Engineer should be able to build, operationalize and monitor data processing systems Create robust and automated pipelines to ingest and process structured and unstructured data from various source systems into analytical platforms using batch and streaming mechanisms leveraging the cloud-native toolset Implement custom applications using tools such as Event Hubs, ADF and other cloud-native tools as required to address streaming use cases Engineer and maintain ELT processes for loading the data lake (Cloud Storage, Data Lake Gen2) Leverage the right tools for the right job to deliver testable, maintainable, and modern data solutions Respond to customer/team inquiries and escalations and assist in troubleshooting and resolving challenges Work with other scrum team members to estimate and deliver work inside of a sprint Research data questions, identify root causes, and interact closely with business users and technical resources Should possess ownership and leadership skills to collaborate effectively with Level 1 and Level 2 teams. Must have experience in raising tickets with Microsoft and engaging with them to address any service or tool outages in production. Qualifications: 7+ years of professional technical experience 5+ years of hands-on Data Architecture and Data Modelling at SME level 5+ years of experience building highly scalable data solutions using Azure Data Factory, Spark, Databricks, Python 5+ years of experience working in cloud environments (AWS and/or Azure) 3+ years of experience with programming languages such as Python, Spark and Spark SQL. Strong knowledge of ADF and Databricks architecture. Able to work with Level 1 and Level 2 teams to resolve platform outages in production environments.
Strong client-facing communication and facilitation skills Strong sense of urgency, ability to set priorities and perform the job with little guidance Excellent written and verbal interpersonal skills and the ability to build and maintain collaborative and positive working relationships at all levels Strong interpersonal and communication skills (written and oral) required Should be able to work in shifts Should have knowledge of the Azure DevOps process. Key Skills: Azure Data Factory, Azure Databricks, Python, ETL/ELT, Spark, Data Lake, Data Engineering, Event Hubs, Azure Delta, Spark streaming Why AHEAD: Through our daily work and internal groups like Moving Women AHEAD and RISE AHEAD, we value and benefit from diversity of people, ideas, experience, and everything in between. We fuel growth by stacking our office with top-notch technologies in a multi-million-dollar lab, by encouraging cross-department training and development, and by sponsoring certifications and credentials for continued learning. USA Employment Benefits include - Medical, Dental, and Vision Insurance - 401(k) - Paid company holidays - Paid time off - Paid parental and caregiver leave - Plus more! See https://www.aheadbenefits.com/ for additional details. The compensation range indicated in this posting reflects the On-Target Earnings (OTE) for this role, which includes a base salary and any applicable target bonus amount. This OTE range may vary based on the candidate's relevant experience, qualifications, and geographic location.
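L3-support work of the kind this posting describes leans on automated error handling for transient failures. A generic retry-with-exponential-backoff helper is one common building block; this sketch is illustrative and not tied to ADF, Databricks, or any specific Azure service:

```python
# Retry a callable up to `attempts` times, sleeping base_delay * 2**n
# between tries (exponential backoff). Re-raises the last error if all
# attempts fail. Illustrative only; production code would catch narrower
# exception types and add jitter and logging.
import time

def retry(fn, attempts=3, base_delay=0.01):
    last_err = None
    for n in range(attempts):
        try:
            return fn()
        except Exception as err:
            last_err = err
            time.sleep(base_delay * (2 ** n))  # 0.01s, 0.02s, 0.04s, ...
    raise last_err

# Simulate a source that fails twice with a transient error, then recovers.
calls = {"n": 0}
def flaky():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient outage")
    return "ok"

print(retry(flaky))  # ok
```

Wrapping ingestion and API calls this way is what turns a 2 a.m. transient outage into a log line instead of an escalation.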
Posted 3 weeks ago
4.0 - 7.0 years
7 - 11 Lacs
Bengaluru
Work from Office
Who We Are Applied Materials is the global leader in materials engineering solutions used to produce virtually every new chip and advanced display in the world. We design, build and service cutting-edge equipment that helps our customers manufacture display and semiconductor chips - the brains of devices we use every day. As the foundation of the global electronics industry, Applied enables the exciting technologies that literally connect our world - like AI and IoT. If you want to work beyond the cutting edge, continuously pushing the boundaries of science and engineering to make possible the next generations of technology, join us to Make Possible® a Better Future. What We Offer Location: Bangalore, IND At Applied, we prioritize the well-being of you and your family and encourage you to bring your best self to work. Your happiness, health, and resiliency are at the core of our benefits and wellness programs. Our robust total rewards package makes it easier to take care of your whole self and your whole family. We're committed to providing programs and support that encourage personal and professional growth and care for you at work, at home, or wherever you may go. Learn more about our benefits. You'll also benefit from a supportive work culture that encourages you to learn, develop and grow your career as you take on challenges and drive innovative solutions for our customers. We empower our team to push the boundaries of what is possible while learning every day in a supportive leading global company. Visit our Careers website to learn more about careers at Applied. Summary Requires in-depth knowledge and experience. Uses best practices and knowledge of internal or external business issues to improve products or services. Solves complex problems; takes a new perspective using existing solutions. Works independently, receives minimal guidance. Acts as a resource for colleagues with less experience.
Key Responsibilities Participates in the design, development and maintenance of ongoing metrics, reports, analyses, dashboards, etc. used to drive key business decisions. Works within controls to ensure the accuracy, timeliness and confidentiality of all managerial and business intelligence reports, views, dashboards, and user data. Implements new business reports within the established reporting framework, policies, and procedures. Trains and supports end users on accessing reports. Analyzes and interprets data and business intelligence reports. Works with others in the development of data warehouses and other data sources to support managerial and business intelligence reporting needs. Works with the business intelligence manager and other staff to assess various reporting needs. Analyzes reporting needs and requirements, assesses current reporting in the context of strategic goals and devises plans for delivering the most appropriate reporting solutions to users. Educates and trains the user community on the potential uses of the business intelligence system. Extracts financial, statistical, and other data from various information systems and other sources. Designs and develops recurring and ad-hoc business intelligence solutions such as reports, cubes, and dashboards, using industry best practices for presentation, efficiency, and user friendliness. Designs reports to aid in the efficient assimilation of the data by users by engaging exception reporting. Functional Knowledge Demonstrates conceptual and practical expertise in own discipline and basic knowledge of related disciplines. Business Expertise Has knowledge of best practices and how own area integrates with others; is aware of the competition and the factors that differentiate them in the market. Leadership Acts as a resource for colleagues with less experience; may lead small projects with manageable risks and resource requirements.
Problem Solving Solves complex problems; takes a new perspective on existing solutions; exercises judgment based on the analysis of multiple sources of information. Impact Impacts a range of customer, operational, project or service activities within own team and other related teams; works within broad guidelines and policies. Interpersonal Skills Explains difficult or sensitive information; works to build consensus. Key Responsibilities: Supports the design and development of program methods, processes, and systems to consolidate and analyze structured and unstructured, diverse "big data" sources. Interfaces with internal customers for requirements analysis and compiles data for scheduled or special reports and analysis. Supports project teams to develop analytical models, algorithms and automated processes, applying SQL understanding and Python programming, to cleanse, integrate and evaluate large datasets. Supports the timely development of products for manufacturing and process information by applying sophisticated data analytics. Azure, Databricks, Python, SQL Qualification: Bachelor's/Master's degree or relevant 4-7 years of experience as a data analyst Required technical skills in SQL, Azure, Python, Databricks, Tableau (good to have) Experience in the Supply Chain domain. Additional Information Time Type: Full time Employee Type: Assignee / Regular Travel: Yes, 10% of the Time Relocation Eligible: Yes Applied Materials is an Equal Opportunity Employer. Qualified applicants will receive consideration for employment without regard to race, color, national origin, citizenship, ancestry, religion, creed, sex, sexual orientation, gender identity, age, disability, veteran or military status, or any other basis prohibited by law.
Posted 3 weeks ago
5.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark Framework with Python or Scala on Hadoop and the Azure Cloud Data Platform. Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark Framework with Python or Scala and Big Data technologies, for various use cases built on the platform. Experience in developing streaming pipelines. Experience working with Hadoop / Azure ecosystem components to implement scalable solutions to meet the ever-increasing data volumes, using big data/cloud technologies such as Apache Spark, Kafka, and cloud computing. Required education Bachelor's Degree Preferred education Master's Degree Required technical and professional expertise Total 5-7+ years of experience in Data Management (DW, DL, Data Platform, Lakehouse) and Data Engineering skills Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala. Minimum 3 years of experience on Cloud Data Platforms on Azure Experience in Databricks / Azure HDInsight / Azure Data Factory, Synapse, SQL Server DB Exposure to streaming solutions and message brokers like Kafka Experience with Unix/Linux commands and basic shell scripting Preferred technical and professional experience Certification in Azure and Databricks, or Cloudera Spark certification
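The streaming-pipeline experience asked for above boils down to stateful processing over an unbounded sequence of events. As a toy stand-in for a Kafka or Spark stream, this sketch keeps a running count per key over a plain Python iterator; the event shape is hypothetical:

```python
# Stateful streaming aggregation in miniature: consume events from an
# iterator (standing in for a Kafka topic) and maintain a per-key count.
# Real engines checkpoint this state; here it is just a Counter.
from collections import Counter

def consume(stream, state=None):
    """Fold a stream of {'key': ...} events into per-key counts."""
    state = state if state is not None else Counter()
    for event in stream:
        state[event["key"]] += 1
    return state

events = iter([{"key": "login"}, {"key": "click"}, {"key": "login"}])
counts = consume(events)
print(counts["login"])  # 2
```

Passing `state` back in on the next micro-batch is the essence of what Spark Structured Streaming or a Kafka Streams app does at scale, with fault-tolerant state stores in place of the in-memory Counter.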
Posted 3 weeks ago
5.0 - 8.0 years
9 - 14 Lacs
Hyderabad
Work from Office
Role Purpose The purpose of the role is to support process delivery by ensuring daily performance of the Production Specialists, resolving technical escalations and developing technical capability within the Production Specialists. Do Oversee and support the process by reviewing daily transactions on performance parameters Review the performance dashboard and the scores for the team Support the team in improving performance parameters by providing technical support and process guidance Record, track, and document all queries received, problem-solving steps taken and total successful and unsuccessful resolutions Ensure standard processes and procedures are followed to resolve all client queries Resolve client queries as per the SLAs defined in the contract Develop understanding of the process/product for the team members to facilitate better client interaction and troubleshooting Document and analyze call logs to spot the most frequently occurring trends to prevent future problems Identify red flags and escalate serious client issues to the Team Leader in cases of untimely resolution Ensure all product information and disclosures are given to clients before and after the call/email requests Avoid legal challenges by monitoring compliance with service agreements Handle technical escalations through effective diagnosis and troubleshooting of client queries Manage and resolve technical roadblocks/escalations as per SLA and quality requirements If unable to resolve issues, escalate them to TA & SES in a timely manner Provide product support and resolution to clients by performing a question diagnosis while guiding users through step-by-step solutions Troubleshoot all client queries in a user-friendly, courteous and professional manner Offer alternative solutions to clients (where appropriate) with the objective of retaining customers' and clients' business Organize ideas and effectively communicate oral messages appropriate to listeners and situations Follow up and make scheduled callbacks to customers to
record feedback and ensure compliance with contract SLAs. Build people capability to ensure operational excellence and maintain superior customer service levels for the existing account/client. Mentor and guide Production Specialists on improving technical knowledge. Collate trainings to be conducted as triage to bridge the skill gaps identified through interviews with the Production Specialists. Develop and conduct trainings (triages) within products for Production Specialists as per target. Inform the client about the triages being conducted. Undertake product trainings to stay current with product features, changes and updates. Enroll in product-specific and any other trainings per client requirements/recommendations. Identify and document the most common problems and recommend appropriate resolutions to the team. Update job knowledge by participating in self-learning opportunities and maintaining personal networks.
Deliver (No. - Performance Parameter - Measure):
1. Process - No. of cases resolved per day, compliance to process and quality standards, meeting process-level SLAs, Pulse score, customer feedback, NSAT/ESAT
2. Team Management - Productivity, efficiency, absenteeism
3. Capability development - Triages completed, Technical Test performance
Mandatory Skills: Databricks - Data Engineering. Experience: 5-8 Years.
Posted 3 weeks ago
4.0 - 9.0 years
15 - 25 Lacs
Hyderabad, Chennai
Work from Office
Interested candidates can also apply at sanjeevan.natarajan@careernet.in Role & responsibilities:
Technical Leadership - Lead a team of data engineers and developers; define technical strategy, best practices, and architecture for data platforms.
End-to-End Solution Ownership - Architect, develop, and manage scalable, secure, and high-performing data solutions on AWS and Databricks.
Data Pipeline Strategy - Oversee the design and development of robust data pipelines for ingestion, transformation, and storage of large-scale datasets.
Data Governance & Quality - Enforce data validation, lineage, and quality checks across the data lifecycle. Define standards for metadata, cataloging, and governance.
Orchestration & Automation - Design automated workflows using Airflow, Databricks Jobs/APIs, and other orchestration tools for end-to-end data operations.
Cloud Cost & Performance Optimization - Implement performance tuning strategies, cost optimization best practices, and efficient cluster configurations on AWS/Databricks.
Security & Compliance - Define and enforce data security standards, IAM policies, and compliance with industry-specific regulatory frameworks.
Collaboration & Stakeholder Engagement - Work closely with business users, analysts, and data scientists to translate requirements into scalable technical solutions.
Migration Leadership - Drive strategic data migrations from on-prem/legacy systems to cloud-native platforms with minimal risk and downtime.
Mentorship & Growth - Mentor junior engineers, contribute to talent development, and ensure continuous learning within the team.
Preferred candidate profile: Python, SQL, PySpark, Databricks, AWS (mandatory); leadership experience in data engineering/architecture. Added advantage: experience in Life Sciences / Pharma.
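The orchestration responsibility mentioned here (Airflow, Databricks Jobs) rests on one core idea: tasks form a dependency graph and run in topological order. A minimal illustration using Python's standard library; the task names and edges are hypothetical, not an Airflow API:

```python
# A tiny task graph: each key's value is the set of tasks it depends on.
# graphlib.TopologicalSorter yields an order in which every task comes
# after all of its predecessors - exactly what an orchestrator enforces.
from graphlib import TopologicalSorter

deps = {
    "transform": {"ingest"},
    "validate": {"transform"},
    "publish": {"validate", "transform"},
}
order = list(TopologicalSorter(deps).static_order())
print(order[0], order[-1])  # ingest publish
```

Airflow's DAGs, Databricks Job task dependencies, and ADF pipeline activity chains all reduce to this ordering problem, with scheduling, retries, and parallelism layered on top.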
Posted 3 weeks ago
5.0 - 10.0 years
6 - 18 Lacs
Bengaluru
Work from Office
We are looking for a skilled and proactive Data Engineer with hands-on experience in Azure Data Services and Microsoft Fabric. In this role, you'll be responsible for building robust, scalable data pipelines and enabling enterprise-grade analytics solutions.
Posted 3 weeks ago
10.0 - 15.0 years
30 - 40 Lacs
Hyderabad, Pune, Greater Noida
Work from Office
Responsibilities:
* Design and build data architecture frameworks leveraging Azure services (Azure Data Factory, Azure Synapse Analytics, Azure Data Lake Storage, Azure SQL Database, ADLS Gen2, Synapse Engineering, Fabric Notebooks, PySpark, Scala, Python, etc.).
* Define and implement reference architectures and architecture blueprinting.
* Demonstrated experience with, and ability to speak to, a wide variety of data engineering tools and architectures across cloud providers, especially on the Azure platform.
* Experience in building data products, data processing frameworks, metadata-driven ETL pipelines, data security, data standardization, data quality and data reconciliation workflows.
* Vast experience building data products on the MS Azure / Fabric platform: Azure Managed Instance, Microsoft Fabric, Lakehouse, Synapse Engineering, MS OneLake.
Requirements:
* 10+ years of experience in Data Warehousing and Azure Cloud technologies.
* Strong hands-on experience with Microsoft Fabric, Synapse, ADF, SQL, Python/PySpark.
* Proven expertise in designing and implementing data architectures on Azure using Microsoft Fabric, Azure Synapse, ADF, and MS Fabric notebooks.
* Exposure to Azure DevOps and Business Intelligence.
* Solid understanding of data governance, data security, and compliance.
* Excellent communication and collaboration skills.
Posted 3 weeks ago
6.0 - 10.0 years
15 - 20 Lacs
Bengaluru
Work from Office
Strong proficiency in Microsoft Azure cloud platform services (Azure Data Factory, Azure Databricks, Azure SQL Database, Azure Data Lake Storage, Azure Synapse Analytics). Experience in designing and implementing scalable data architectures. Proficient in ETL processes and tools. Advanced SQL and programming skills (e.g., Python, Scala). Familiarity with data warehousing and data modeling concepts. Excellent problem-solving and troubleshooting skills. Strong communication and collaboration skills. Ability to work independently and as part of a team. Certifications in relevant Azure technologies are a plus.
Posted 3 weeks ago
7.0 - 8.0 years
0 - 1 Lacs
Hyderabad
Work from Office
Title: Databricks Platform Administrator
Location: Hyderabad
Duration: Full Time (C2H)
Experience: 7-8 Years
Environment: AWS/Azure
Description: We are seeking a Databricks Platform Administrator to manage and support the day-to-day operations, configuration, and performance of the Databricks Lakehouse Platform. The ideal candidate will be responsible for user provisioning, workspace and cluster management, job scheduling, monitoring, and ensuring platform stability and security. This role requires hands-on experience with Databricks on AWS or Azure, integration with data pipelines, and managing platform-level configurations, libraries, and access controls (Unity Catalog, SCIM, IAM, etc.). The candidate should be familiar with DevOps practices, automation scripting, and collaboration with data engineering and infrastructure teams to support enterprise data initiatives.
Posted 3 weeks ago
7.0 - 8.0 years
7 - 9 Lacs
Bengaluru
Work from Office
We are seeking an experienced Data Engineer to join our innovative data team and help build the scalable data infrastructure, software consultancy, and development services that power business intelligence, analytics, and machine learning initiatives. The ideal candidate will design, develop, and maintain robust, high-performance data pipelines and solutions while ensuring data quality, reliability, and accessibility across the organization, working with cutting-edge technologies like Python, Microsoft Fabric, Snowflake, Dataiku, SQL Server, Oracle, PostgreSQL, etc. Required Qualifications: 5+ years of experience in a data engineering role. Programming Languages: Proficiency in Python Cloud Platforms: Hands-on experience with Azure (Fabric, Synapse, Data Factory, Event Hubs) Databases: Strong SQL skills and experience with both relational (Microsoft SQL Server, PostgreSQL, MySQL) and NoSQL (MongoDB, Cassandra) databases Version Control: Proficiency with Git and collaborative development workflows Proven track record of building production-grade data pipelines handling large-scale data or solutions. Desired: Experience with containerization (Docker) and orchestration (Kubernetes) technologies Knowledge of machine learning workflows and MLOps practices Familiarity with data visualization tools (Tableau, Looker, Power BI) Experience with stream processing and real-time analytics Experience with data governance and compliance frameworks (GDPR, CCPA) Contributions to open-source data engineering projects Relevant cloud certifications (e.g., Microsoft Certified: Azure Data Engineer Associate, AWS Certified Data Engineer, Google Cloud Professional Data Engineer) Specific experience or certifications in Microsoft Fabric, Dataiku, or Snowflake.
Posted 3 weeks ago