Jobs
Interviews

101 Synapse Jobs

Set up a job alert
JobPe aggregates results for easy access, but you apply directly on the original job portal.

2.0 - 6.0 years

0 Lacs

Pune, Maharashtra

On-site

You should have at least 2 years of professional experience implementing data pipelines using Databricks and a data lake, and a minimum of 3 years of hands-on Python programming in a cloud environment (preferably AWS). Two years of professional experience with real-time streaming systems such as Event Grid and Event topics would be highly advantageous.

Expert-level SQL is required to write complex, highly optimized queries over large volumes of data. Experience developing conceptual, logical, and/or physical database designs using tools like ErWin, Visio, or Enterprise Architect is expected, along with a minimum of 2 years of hands-on experience with databases such as Snowflake, Redshift, Synapse, Oracle, SQL Server, Teradata, Netezza, Hadoop, MongoDB, or Cassandra. Knowledge of architectural best practices for building data lakes is a must for this position.

Strong problem-solving and troubleshooting skills are necessary, along with the ability to make sound judgments independently. You should be capable of working independently and providing guidance to junior data engineers. If you meet the above requirements and are ready to take on this challenging role, we look forward to your application.

Warm regards,
Rinka Bose, Talent Acquisition Executive, Nivasoft India Pvt. Ltd.
Mobile: +91-9632249758 (India) | 732-334-3491 (USA)
Email: rinka.bose@nivasoft.com | Web: https://nivasoft.com/

Posted 1 day ago

Apply

10.0 - 14.0 years

0 Lacs

Pune, Maharashtra

On-site

You are invited to join our team as a Microsoft Fabric Data Modelling Specialist at Persistent Ltd. The position requires a highly skilled and motivated individual with over 10 years of experience in the field. As an Architect - Data, MS Fabric, you will be responsible for designing and implementing data models using Microsoft Fabric, managing SQL databases, and working with Azure and Synapse for cloud data management. Your role will involve collaborating with cross-functional teams, following Scrum methodology, and analyzing complex data to provide insights and recommendations to stakeholders.

To excel in this role, you must possess proficiency in Microsoft Fabric data modelling, strong knowledge of SQL and database management, and experience with Azure and Synapse. A bachelor's degree in Computer Science, Information Systems, or a related field is required, while a master's degree is preferred. Additionally, familiarity with Spark, experience with Scrum methodology, knowledge of other cloud platforms like AWS or Google Cloud, and an understanding of data warehousing and ETL processes are valuable assets. Proficiency in Python or other scripting languages, experience with Power BI or other data visualization tools, knowledge of big data technologies like Hadoop or Hive, familiarity with machine learning algorithms and data science, and experience with Agile project management are also desired. Strong communication and team collaboration skills are essential for success in this role.

At Persistent Ltd., we offer a competitive salary and benefits package, a culture focused on talent development, and opportunities to work with cutting-edge technologies. Employee engagement initiatives, annual health check-ups, and insurance coverage for self, spouse, two children, and parents are some of the benefits you can enjoy by joining our team. We are committed to fostering diversity and inclusion in the workplace, providing hybrid work options and flexible working hours to accommodate various needs and preferences. If you are passionate about accelerating your growth professionally and personally, impacting the world positively with the latest technologies, and enjoying collaborative innovation in a values-driven and people-centric work environment, then Persistent Ltd. is the place for you. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. Join us in unlocking global opportunities and unleashing your full potential at Persistent.

Posted 2 days ago

Apply

5.0 - 10.0 years

20 - 30 Lacs

Pune, Gurugram, Jaipur

Hybrid

Job Title: Data Engineer (Azure Stack)
Locations: Chennai, Hyderabad, Bengaluru, Gurugram, Jaipur, Bhopal, Pune (hybrid, 3 days/week in office)
Experience: 5+ years
Type: Full-time
Apply: Share your resume with the details listed below to vijay.s@xebia.com
Availability: Immediate joiners or a maximum 2 weeks' notice period only

About the Role
Xebia is hiring a Data Engineer (Azure) to design and deliver scalable data pipelines using Microsoft's cloud ecosystem. In this role, you'll modernize enterprise data platforms with Azure services, build streaming and batch data pipelines, and contribute to end-to-end data transformations across varied data sources.

Key Responsibilities
- Design and develop robust data pipelines using Azure Data Factory, Event Hub, Cosmos DB, Synapse, SQL DB, and Databricks
- Build scalable distributed processing solutions using Python, Spark, and PySpark
- Work on SQL-based development and large-scale data integration projects
- Implement Lambda Architecture and contribute to Modern Data Warehouse practices
- Utilize Azure Data Lake Analytics, Azure SQL DW, and streaming technologies
- Collaborate with Agile teams and follow best practices in development and testing
- Write scripts using Shell or similar scripting languages
- Ensure data quality, scalability, and performance throughout data workflows

Must-Have Skills
- 5+ years of overall experience in data engineering
- 2+ years of hands-on experience with the Azure Data Engineering stack (ADF, Event Hub, Synapse, Databricks, Cosmos DB, SQL DB, Data Explorer)
- 3+ years of experience with Python/Spark/PySpark
- Strong SQL and coding capabilities
- Familiarity with Azure Data Lake Analytics, U-SQL, and Azure SQL DW
- Solid understanding of Lambda Architecture and data warehousing principles
- Experience working in Agile teams and modern DevOps environments
- Excellent communication and analytical skills

Good-to-Have Skills
- Azure Data Engineer certification
- Experience with streaming use cases
- Knowledge of software development best practices
- Strong organizational and independent problem-solving abilities

Why Xebia?
At Xebia, you'll work with innovative teams and global clients, solving high-impact challenges with modern cloud and data technologies. We foster continuous learning, open collaboration, and bold thinking to drive digital transformation.

To Apply
Please share your updated resume and include the following details in your email to vijay.s@xebia.com:
Full Name:
Total Experience:
Current CTC:
Expected CTC:
Current Location:
Preferred Xebia Location (Chennai / Hyderabad / Bengaluru / Gurugram / Jaipur / Bhopal / Pune):
Notice Period / Last Working Day (if serving):
Primary Skills:
LinkedIn Profile URL:

Note: Only candidates who can join immediately or within 2 weeks will be considered. Join Xebia and engineer modern cloud data solutions with cutting-edge Azure tools.
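The Lambda Architecture named in the responsibilities above pairs a batch layer (complete, periodic recomputation over all history) with a speed layer (incremental, low-latency updates), merged by a serving layer at query time. A minimal plain-Python sketch of that merge; all function names and data here are invented for illustration, not taken from any real pipeline:

```python
# Toy Lambda Architecture: a batch view over full history plus a speed
# view over recent events, merged at query time. Data is illustrative.

def batch_view(events):
    """Batch layer: recompute per-user totals from the full history."""
    totals = {}
    for user, amount in events:
        totals[user] = totals.get(user, 0) + amount
    return totals

def speed_view(recent_events):
    """Speed layer: incremental totals for events not yet batch-processed."""
    totals = {}
    for user, amount in recent_events:
        totals[user] = totals.get(user, 0) + amount
    return totals

def query(user, batch, speed):
    """Serving layer: merge both views to answer a query."""
    return batch.get(user, 0) + speed.get(user, 0)

history = [("alice", 10), ("bob", 5), ("alice", 7)]  # processed nightly
recent = [("alice", 3)]                              # arrived since last batch
batch = batch_view(history)
speed = speed_view(recent)
print(query("alice", batch, speed))  # 20
```

In a real Azure deployment the batch layer would run in Databricks or Synapse and the speed layer off Event Hub; the shape of the merge stays the same.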

Posted 2 days ago

Apply

3.0 - 8.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer specializing in ETL, you should possess a minimum of 7 to 8 years of relevant experience. The position is open across Pan India, and immediate joiners are highly preferred. You will be expected to demonstrate expertise across a range of mandatory skills: ETL development, Synapse, PySpark, ADF, SSIS, Databricks, SQL, Apache Airflow, and both Azure and AWS. Proficiency in all of the listed skills is a prerequisite for this role.

The selection process involves three rounds: L1 with an external panel, L2 with an internal panel, and L3 with the client.

Your experience should include 7+ years as an ETL developer, 5+ years with PySpark, 3 to 4+ years with SSIS, 4+ years with Databricks, 6+ years with SQL, 4+ years with Apache Airflow, 3 to 4 years with Azure and AWS, and 3 to 4 years with Synapse.

Posted 3 days ago

Apply

15.0 - 19.0 years

0 Lacs

Noida, Uttar Pradesh

On-site

As a Senior AI Architect at Dailoqa, you will play a pivotal role in shaping, designing, and delivering agentic AI solutions that drive real-world business value. You will collaborate with business and technology stakeholders, lead cross-functional teams, and ensure our AI architectures are robust, scalable, and aligned with our vision of combined intelligence and financial inclusion.

Agentic AI Solution Design: Collaborate with stakeholders to identify high-impact agentic AI use cases, define success metrics, and determine data requirements tailored to financial services clients. Architect and oversee the implementation of end-to-end agentic AI solutions aligned with Dailoqa's strategic objectives and client needs.

Leadership & Cross-Functional Collaboration: Lead and mentor cross-functional teams in the development and deployment of scalable agentic AI applications and infrastructures. Work closely with business stakeholders to translate complex requirements into actionable AI architecture and technical roadmaps.

Technology Evaluation & Governance: Evaluate, recommend, and integrate advanced AI/ML platforms, frameworks, and technologies that enable agentic AI capabilities. Develop and enforce AI governance frameworks, best practices, and ethical standards, ensuring compliance with industry regulations and responsible AI principles.

Performance Optimization & Continuous Improvement: Optimize AI models for performance, scalability, and efficiency, leveraging cloud-native and distributed computing resources. Stay ahead of emerging trends in agentic AI, machine learning, and data science, applying new insights to enhance solution quality and business impact.

Technical Leadership & Talent Development: Provide technical leadership, mentorship, and code review for junior and peer team members. Participate in the hiring, onboarding, and development of AI talent, fostering a culture of innovation and excellence. Lead sprint planning and technical assessments, and ensure high standards in code quality and solution delivery.

Required Qualifications:
- 15+ years of total experience, including 8+ years in machine learning and data science, with more recent experience (4-5 years) in generative AI models, applying AI to practical, comprehensive technology solutions and AI consultancy.
- Knowledge of basic algorithms, object-oriented and functional design principles, and best-practice patterns.
- Experience implementing GenAI, NLP, computer vision, or other AI frameworks/technologies.

Tools & Technology:
- LLMs and implementing RAG or different prompt strategies.
- Azure OpenAI and off-the-shelf, platform-native AI tools and models.
- Knowledge of ML pipeline orchestration tools.
- Experienced in Python, ideally with working knowledge of various supporting packages.
- Experience in REST API development, NoSQL database design, and RDBMS design and optimization.
- Strong experience in data engineering and aligned hyperscale platforms, e.g., Databricks, Synapse, Fivetran.

Education and Other Skills:
- Master's or Ph.D. in Computer Science, Data Science, or a related field.
- Extensive experience with modern AI frameworks, cloud platforms, and big data technologies.
- Strong background in designing and implementing AI solutions for enterprise-level applications.
- Proven ability to lead and mentor technical teams.
- Excellent communication skills, with the ability to explain complex AI concepts to both technical and non-technical audiences.
- Deep understanding of AI ethics and responsible AI practices.

Working at Dailoqa will give you the opportunity to be part of a dynamic and innovative team that values collaboration, innovation, and continuous learning. If you are proactive, adaptable, and passionate about leveraging AI to solve real-world challenges in the financial services industry, this role might be the perfect fit for you.
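The RAG (retrieval-augmented generation) strategies mentioned above follow a retrieve-then-prompt pattern: find the documents most relevant to a question, then ground the LLM's prompt in them. A minimal stdlib-only sketch of that pattern; the documents, the word-overlap scoring, and the prompt template are all invented for illustration (a production system would use embeddings and an LLM API such as the Azure OpenAI service named above):

```python
# Minimal RAG sketch: retrieve the best-matching documents by word
# overlap, then assemble a grounded prompt. Everything is illustrative.
import re

def tokens(text):
    """Lowercase word tokens; real systems would use embeddings instead."""
    return set(re.findall(r"[a-z]+", text.lower()))

def score(query, doc):
    """Crude relevance score: count of shared word tokens."""
    return len(tokens(query) & tokens(doc))

def retrieve(query, docs, k=2):
    """Return the k docs with the highest overlap with the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query, docs):
    """Assemble retrieved context and the user question into one prompt."""
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\nQuestion: {query}"

docs = [
    "KYC checks are mandatory for new retail banking customers.",
    "The cafeteria is open from nine to five.",
    "Retail loan approvals require a credit score above 700.",
]
prompt = build_prompt("What do retail banking customers need for KYC?", docs)
print(prompt)  # the irrelevant cafeteria doc is not retrieved
```

The final prompt would then be sent to the model; swapping the overlap score for cosine similarity over embeddings changes only `score`, not the overall shape.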

Posted 6 days ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Bengaluru

Hybrid

Role & responsibilities: C++ (must have), Python, Synapse

Posted 6 days ago

Apply

2.0 - 6.0 years

0 Lacs

Haryana

On-site

As a Data Scientist at KPMG in India, you will collaborate with business stakeholders and cross-functional subject matter experts to gain a deep understanding of the business context and key questions. You will be responsible for creating Proof of Concepts (POCs) and Minimum Viable Products (MVPs), and guiding them through production deployment and operationalization. Your role will involve influencing machine learning strategy for digital programs and projects, while making solution recommendations that balance speed to market and analytical soundness. In this position, you will explore design options to assess efficiency and impact, develop approaches to enhance robustness and rigor, and develop analytical and modeling solutions using various tools such as Python, R, and TensorFlow. You will be tasked with formulating model-based solutions by integrating machine learning algorithms with other techniques like simulations, as well as designing, adapting, and visualizing solutions based on evolving requirements. Your responsibilities will also include creating algorithms to extract information from large datasets, deploying algorithms to production to identify actionable insights, and comparing results from different methodologies to recommend optimal techniques. Moreover, you will work on multiple facets of AI including cognitive engineering, conversational bots, and data science, ensuring that solutions demonstrate high levels of performance, security, scalability, and maintainability upon deployment. As a part of your role, you will lead discussions at peer reviews and utilize interpersonal skills to positively influence decision-making processes. You will provide thought leadership and subject matter expertise in machine learning techniques, tools, and concepts, making significant contributions to internal discussions on emerging practices. Additionally, you will facilitate the sharing of new ideas, learnings, and best practices across different geographies. 
To qualify for this position, you must hold a Bachelor of Science or Bachelor of Engineering degree at a minimum, along with 2-4 years of work experience as a Data Scientist. You should possess a blend of business focus, strong analytical and problem-solving skills, and programming knowledge. Proficiency in statistical concepts, ML algorithms, statistical/programming software (e.g., R, Python), and data querying languages (e.g., SQL, Hadoop/Hive, Scala), along with experience with data management tools like Microsoft Azure or AWS, are essential requirements. You should also have hands-on experience in feature engineering, hyperparameter optimization, and producing high-quality code, tests, and documentation. Familiarity with Agile principles and processes, and the ability to lead, manage, and deliver business results through data scientists or professional services teams, are also crucial. Excellent communication skills, self-motivation, proactive problem-solving abilities, and the capability to work both independently and in teams are vital for this role. While a Bachelor's or Master's degree in a technology-related field is preferred, relevant experience in Software Engineering or Data Science is mandatory. Additionally, familiarity with AI frameworks, deep learning, computer vision, and cloud services, along with proficiency in Python, SQL, Docker, and versioning tools, is highly desirable. The ideal candidate will also have experience with Agent Frameworks, RAG frameworks, AI algorithms, and the other relevant technologies mentioned in the job description.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Pune, Maharashtra

On-site

As a PL/SQL Developer with 5 to 7 years of experience, you will be based in Pune (hybrid), with an immediate to 15-day notice period. You must possess expertise in languages such as SQL, T-SQL, PL/SQL, and Python, including libraries like PySpark, Pandas, NumPy, Matplotlib, and Seaborn, along with databases such as SQL Server and Synapse.

Your key responsibilities will include designing and maintaining efficient data pipelines and ETL processes using SQL and Python; writing optimized queries for data manipulation; using Python libraries for data processing and visualization; performing end-of-day (EOD) data aggregation and reporting; working on Azure Synapse Analytics for scalable data transformations; monitoring and managing database performance; collaborating with cross-functional teams; ensuring secure data handling and compliance with organizational policies; and debugging Unix-based scripts.

To be successful in this role, you should have a Bachelor's or Master's degree in Computer Science, IT, or a related field, along with 5-8 years of hands-on experience in data engineering and analytics. You must have a solid understanding of database architecture, experience with end-of-day reporting setups, and familiarity with cloud-based analytics platforms. This is a full-time, permanent position with a day-shift schedule and an in-person work location.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Senior Generative AI Engineer, your primary role will involve conducting original research on generative AI models. You will focus on exploring model architecture, training methodologies, fine-tuning techniques, and evaluation strategies. It is essential to maintain a strong publication record in esteemed conferences and journals, demonstrating your valuable contributions to the fields of Natural Language Processing (NLP), Deep Learning (DL), and Machine Learning (ML).

In addition, you will be responsible for designing and experimenting with multimodal generative models that incorporate various data types such as text, images, and other modalities to enhance AI capabilities. Your expertise will be crucial in developing autonomous AI systems that exhibit agentic behavior, enabling them to make independent decisions and adapt to dynamic environments.

Leading the design, development, and implementation of generative AI models and systems will be a key aspect of your role. This involves selecting suitable models, training them on extensive datasets, fine-tuning hyperparameters, and optimizing overall performance. A deep understanding of the problem domain is imperative for effective model development and implementation.

Furthermore, you will be tasked with optimizing generative AI algorithms to enhance their efficiency, scalability, and computational performance. Techniques such as parallelization, distributed computing, and hardware acceleration will be used to maximize the capabilities of modern computing architectures. Managing large datasets through data preprocessing and feature engineering to extract critical information for generative AI models will also be a crucial part of your responsibilities.

Your role will also involve evaluating the performance of generative AI models using relevant metrics and validation techniques. By conducting experiments, analyzing results, and iteratively refining models, you will work towards desired performance benchmarks. Providing technical leadership and mentorship to junior team members, guiding their development in generative AI, will also be part of your responsibilities.

Documenting research findings, model architectures, methodologies, and experimental results thoroughly is essential. You will prepare technical reports, presentations, and whitepapers to effectively communicate insights and findings to stakeholders. Additionally, staying updated on the latest advancements in generative AI by reading research papers, attending conferences, and engaging with relevant communities is crucial to fostering a culture of learning and innovation within the team.

Mandatory technical skills for this role include strong programming abilities in Python and familiarity with frameworks like PyTorch or TensorFlow. In-depth knowledge of deep learning concepts such as CNNs, RNNs, LSTMs, Transformers, and LLMs (BERT, GPT, etc.), as well as NLP algorithms, is required. Experience with frameworks like LangGraph, CrewAI, or AutoGen for developing, deploying, and evaluating AI agents is also essential.

Preferred technical skills include expertise in cloud computing, particularly with the Google, AWS, or Azure cloud platforms, and an understanding of the data analytics services offered by these platforms. Hands-on experience with ML platforms like GCP Vertex AI, Azure AI Foundry, or AWS SageMaker is desirable. Strong communication skills, the ability to work independently with minimal supervision, and a proactive approach to escalating when necessary are also key attributes for this role.

If you have a Master's or PhD degree in Computer Science and 6 to 8 years of experience with a strong record of publications in top-tier conferences and journals, this role could be a great fit for you. Preference will be given to research scholars from esteemed institutions like IITs, NITs, and IIITs.

Posted 1 week ago

Apply

7.0 - 9.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Our Client in India is one of the leading providers of risk, financial services and business advisory, internal audit, corporate governance, and tax and regulatory services. Our Client was established in India in September 1993 and has rapidly built a significant competitive presence in the country. The firm operates from its offices in Mumbai, Pune, Delhi, Kolkata, Chennai, Bangalore, Hyderabad, Kochi, Chandigarh and Ahmedabad, and offers its clients a full range of services, including financial and business advisory, tax and regulatory. Our client has a client base of over 2,700 companies. Their global approach to service delivery helps provide value-added services to clients. The firm serves leading information technology companies and has a strong presence in the financial services sector in India, while serving a number of market leaders in other industry segments.

Job Requirements

Mandatory Skills
- Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 7+ years of work experience).
- At least 6+ years of consulting or client service delivery experience in Azure Microsoft data engineering.
- At least 4+ years of experience developing data ingestion, data processing and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Synapse/Azure Databricks and Microsoft Fabric.
- Hands-on experience implementing data ingestion, ETL and data processing using Azure services: Fabric, OneLake, ADLS, Azure Data Factory, Azure Functions, services in Microsoft Fabric, etc.
- Minimum of 5+ years of hands-on experience in Azure and big data technologies such as Fabric, Databricks, Python, SQL, ADLS/Blob, PySpark/Spark SQL.
- Minimum of 3+ years of RDBMS experience.
- Experience using big data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, Git, etc.

Preferred Skills

Technical Leadership & Demo Delivery:
- Provide technical leadership to the data engineering team, guiding the design and implementation of data solutions.
- Deliver compelling and clear demonstrations of data engineering solutions to stakeholders and clients, showcasing functionality and business value.
- Communicate fluently in English with clients, translating complex technical concepts into business-friendly language during presentations, meetings, and consultations.

ETL Development & Deployment on Azure Cloud:
- Design, develop, and deploy robust ETL (Extract, Transform, Load) pipelines using Azure Data Factory (ADF), Azure Synapse Analytics, Azure Notebooks, Azure Functions, and other Azure services.
- Ensure scalable, efficient, and secure data integration workflows that meet business requirements.
- Design and develop data quality frameworks to validate, cleanse, and monitor data integrity.
- Perform advanced data transformations, including Slowly Changing Dimensions (SCD Type 1 and Type 2), using Fabric Notebooks or Databricks.
- Preferably have the following skills: Azure Document Intelligence, custom apps, Blob Storage.

Microsoft Certifications:
- Hold relevant role-based Microsoft certifications, such as DP-203 (Data Engineering on Microsoft Azure) and AI-900 (Microsoft Azure AI Fundamentals).
- Additional certifications in related areas (e.g., PL-300 for Power BI) are a plus.

Azure Security & Access Management:
- Strong knowledge of Azure Role-Based Access Control (RBAC) and Identity and Access Management (IAM).
- Implement and manage access controls, ensuring data security and compliance with organizational and regulatory standards on Azure Cloud.

Additional Responsibilities & Skills:
- Team Collaboration: Mentor junior engineers, fostering a culture of continuous learning and knowledge sharing within the team.
- Project Management: Oversee data engineering projects, ensuring timely delivery within scope and budget, while coordinating with cross-functional teams.
- Data Governance: Implement data governance practices, including data lineage, cataloging, and compliance with standards like GDPR or CCPA.
- Performance Optimization: Optimize ETL pipelines and data workflows for performance, cost-efficiency, and scalability on Azure platforms.
- Cross-Platform Knowledge: Familiarity with integrating Azure services with other cloud platforms (e.g., AWS, GCP) or hybrid environments is an added advantage.

Soft Skills & Client Engagement:
- Exceptional problem-solving skills with a proactive approach to addressing technical challenges.
- Strong interpersonal skills to build trusted relationships with clients and stakeholders.
- Ability to manage multiple priorities in a fast-paced environment, ensuring high-quality deliverables.
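The SCD Type 2 transformations listed in this role keep full dimension history: when a tracked attribute changes, the current row is expired and a new current row is inserted. A minimal pure-Python sketch of that merge logic; the column names, keys, and dates are invented for illustration (in the stack described here this would typically be a MERGE in Fabric Notebooks or Databricks rather than plain Python):

```python
# SCD Type 2 sketch: expire the current dimension row on change and
# insert a new current row. Schema and dates are illustrative only.

def scd2_apply(dim_rows, incoming, load_date):
    """dim_rows: dicts with id, city, valid_from, valid_to, is_current.
    incoming: mapping of business key -> latest observed city value."""
    out = list(dim_rows)
    current = {r["id"]: r for r in out if r["is_current"]}
    for key, city in incoming.items():
        row = current.get(key)
        if row is not None and row["city"] == city:
            continue  # attribute unchanged, keep the current row
        if row is not None:
            row["valid_to"] = load_date   # expire the old version
            row["is_current"] = False
        out.append({"id": key, "city": city, "valid_from": load_date,
                    "valid_to": None, "is_current": True})
    return out

dim = [{"id": 1, "city": "Pune", "valid_from": "2023-01-01",
        "valid_to": None, "is_current": True}]
dim = scd2_apply(dim, {1: "Mumbai", 2: "Chennai"}, "2024-06-01")
print(len(dim))  # 3 rows: the expired Pune row plus two current rows
```

SCD Type 1 is the simpler variant: overwrite the attribute in place and keep no history.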

Posted 1 week ago

Apply

6.0 - 11.0 years

8 - 12 Lacs

Chennai

Work from Office

Skills: Azure/AWS, Synapse, Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehousing
Experience: 6+ years
Locations: Chennai, Bangalore, Pune, Coimbatore
Work from Office

Posted 1 week ago

Apply

1.0 - 7.0 years

12 - 14 Lacs

Mumbai, Maharashtra, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Takes part in Proofs of Concept (POCs) and pilot solution preparation. Able to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

Posted 1 week ago

Apply

1.0 - 7.0 years

12 - 14 Lacs

Gurgaon, Haryana, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Takes part in Proofs of Concept (POCs) and pilot solution preparation. Able to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

Posted 1 week ago

Apply

1.0 - 7.0 years

12 - 14 Lacs

Hyderabad, Telangana, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Takes part in Proofs of Concept (POCs) and pilot solution preparation. Able to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

Posted 1 week ago

Apply

8.0 - 12.0 years

6 - 11 Lacs

Bengaluru, Karnataka, India

On-site

At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Takes part in Proofs of Concept (POCs) and pilot solution preparation. Able to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

Posted 1 week ago

Apply

6.0 - 7.0 years

6 - 7 Lacs

Bengaluru, Karnataka, India

On-site

Setting up data pipelines using Azure Synapse Pipelines or Azure Data Factory
Developing ETL jobs using stored procedures and Data Flow
Creating data validation and reconciliation jobs using Synapse Pipelines

Skills: Azure Data Factory, Azure Synapse, SQL
Good to have: PySpark
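Data validation and reconciliation jobs of the kind described above typically compare row counts and control totals between a source and its loaded target. A minimal illustrative sketch in plain Python; in this stack the same checks would usually run as a Synapse Pipeline step or stored procedure, and the table names, columns, and data here are invented:

```python
# Toy source-vs-target reconciliation: compare row counts and a control
# total (sum of an amount column). Names and data are illustrative.

def reconcile(source_rows, target_rows, amount_key="amount"):
    """Return a dict of check name -> (source value, target value, ok?)."""
    checks = {}
    s_count, t_count = len(source_rows), len(target_rows)
    checks["row_count"] = (s_count, t_count, s_count == t_count)
    s_sum = sum(r[amount_key] for r in source_rows)
    t_sum = sum(r[amount_key] for r in target_rows)
    checks["amount_total"] = (s_sum, t_sum, s_sum == t_sum)
    return checks

source = [{"id": 1, "amount": 100}, {"id": 2, "amount": 250}]
target = [{"id": 1, "amount": 100}, {"id": 2, "amount": 200}]
result = reconcile(source, target)
print(result["row_count"][2])     # True  -> counts match
print(result["amount_total"][2])  # False -> totals differ, flag for review
```

A failing check would typically halt the pipeline or route the discrepancy to an exceptions table for review.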

Posted 1 week ago

Apply

1.0 - 6.0 years

4 - 10 Lacs

Gurgaon, Haryana, India

On-site

Job Location: Hyderabad / Bangalore / Chennai / Kolkata / Noida / Gurgaon / Pune / Indore / Mumbai

At least 5+ years of relevant hands-on development experience in an Azure Data Engineering role. Proficient in Azure technologies like ADB, ADF, SQL (capable of writing complex SQL queries), PySpark, Python, Synapse, Delta Tables, and Unity Catalog. Hands-on in Python, PySpark or Spark SQL. Hands-on in Azure Analytics and DevOps. Takes part in Proofs of Concept (POCs) and pilot solution preparation. Able to conduct data profiling, cataloguing, and mapping for the technical design and construction of technical data flows. Experience in business process mapping of data and analytics solutions.

At DXC Technology, we believe strong connections and community are key to our success. Our work model prioritizes in-person collaboration while offering flexibility to support wellbeing, productivity, individual work styles, and life circumstances. We're committed to fostering an inclusive environment where everyone can thrive.

Recruitment fraud is a scheme in which fictitious job opportunities are offered to job seekers, typically through online services such as false websites, or through unsolicited emails claiming to be from the company. These emails may request recipients to provide personal information or to make payments as part of their illegitimate recruiting process. DXC does not make offers of employment via social media networks, and DXC never asks for any money or payments from applicants at any point in the recruitment process, nor asks a job seeker to purchase IT or other equipment on our behalf. More information on employment scams is available here.

Posted 1 week ago

Apply

8.0 - 12.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Work from Office

Role: Data Engineer Location: Chennai, Bangalore, Hyderabad Experience: 8+ Work Mode: 5 days, Work from Office only
Responsibilities: Ingest, cleanse, transform, and load data from varied data sources using the Azure services above. Strong knowledge of Medallion architecture. Consume data from sources in different file formats such as XML, CSV, Excel, Parquet, and JSON. Create Linked Services for different types of sources. Create automated pipeline flows that can consume data delivered via email or SharePoint. Strong problem-solving skills, such as backtracking datasets and data analysis. Strong knowledge of advanced SQL techniques for carrying out data analysis per client requirements.
Skills: Candidates need to understand different data architecture patterns and parallel data processing, and should be proficient in using the following services to create data processing solutions: Azure Data Factory, Azure Data Lake Storage, Azure Databricks. Strong knowledge of PySpark and SQL. Good programming skills in Python.
Desired Skills: Ability to query data from a serverless SQL pool in Azure Synapse Analytics. Knowledge of Azure DevOps. Knowledge of configuring datasets with VNet and subnet networks. Knowledge of Microsoft Entra ID, including creating app registrations for single- and multi-tenant security purposes.
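The multi-format ingestion requirement above can be illustrated with a minimal stdlib-only Python sketch; the payloads and field names are invented, and in the role itself this step would typically use PySpark readers (spark.read with csv/json/parquet sources) inside Azure Databricks as the bronze-to-silver stage of a Medallion architecture:

```python
import csv
import io
import json

def read_records(payload: str, fmt: str) -> list[dict]:
    """Normalise a raw CSV or JSON payload into a list of row dicts,
    a bronze-to-silver style step in a Medallion architecture."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(payload)))
    if fmt == "json":
        data = json.loads(payload)
        return data if isinstance(data, list) else [data]
    raise ValueError(f"unsupported format: {fmt}")

# Two hypothetical source payloads in different formats
csv_payload = "id,city\n1,Chennai\n2,Bengaluru\n"
json_payload = '[{"id": "3", "city": "Hyderabad"}]'

# Both land in the same normalised shape, ready for downstream transforms
rows = read_records(csv_payload, "csv") + read_records(json_payload, "json")
print(rows)
```

The point is the common target shape: once every source format is normalised, the cleanse/transform/load stages downstream need only one code path.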

Posted 1 week ago

Apply

12.0 - 15.0 years

20 - 32 Lacs

Hyderabad

Hybrid

JOB DESCRIPTION Position: Lead Software / Platform Engineer II (Magnet) Job Location: Pune, India Work Arrangement: Hybrid Line of Business: Sub-Line of Business: EMEA Technology Department: Tech Org: GOSC Operations 41165 Assignment Category Grade: Full-time GG12 Technical Skills: .Net and up, Java, Microservices, UI/ReactJS, Databricks & Synapse, AS 400 Hiring Manager: Rajdip Pal / Fernando Garcia-Monteavaro Job Description and Requirements Role Value Proposition The MetLife EMEA Technology organization is evolving to enable MetLife's New Frontier strategy. With a strong vision in place, we are a function focused on driving digital technology strategies for key technology functions within MetLife. In partnership with our business leaders, we develop and deliver seamless technology experiences to our employees across the entire employee lifecycle. Our vision and mission are to create innovative, transformative, and contemporary technology solutions to empower our leaders and employees so they can focus on what matters most: our customers. We are technologists with strong business acumen focused on developing our talent to continually transform and innovate. As part of the Tech Talent Transformation (T3) agenda, MetLife is establishing a Technology Center in India. This technology center will perform as an integrated organization between onshore, offshore, and strategic vendor partners in an Agile delivery model. We are seeking a leader who can spread the MetLife culture and develop a local team culture, partnering with HR Leaders to attract, develop, and retain talent across the organization. This role will also be involved in project delivery, bringing technical skills and execution experience. As a Magnet, you are the human anchor of your team, fostering a culture of trust, inclusion, and a sense of belonging at MetLife. This role is vital for the well-being of the team.
Task management and delivery outcomes of the team are managed within ADM teams aligned on specific capabilities or products. However, when delivery issues arise due to local challenges in the Technology Center, you are expected to step in, facilitate discussions to align with local stakeholders, and support escalation or resolution as needed. You maintain close contact with local leadership or operational teams in the countries, staying aware of local priorities. A strong technical background is also required, as you will be involved in technical work performed in the market covered by the role, integrated with related ADM teams. Key Relationships: Internal Stakeholders: EMEA ART Leader, ART Leadership team, India EMEA Technology AVP, and Business Process Owners for EMEA Technology. Key Responsibilities: Develop strong technology capabilities to support the EMEA agenda, adopting Agile ways of working in the software delivery lifecycle (Architecture, Design, Development, Testing & Production). Partner with internal business process owners, technical team members, and senior management throughout the project life cycle. Act as the first point of contact for team members on well-being, interpersonal dynamics, and professional development. Foster an inclusive team culture that reflects the company's values. Support conflict resolution and encourage open, honest communication. Identify and escalate human-related challenges to People & Culture teams when needed. In case of delivery issues tied to team dynamics, collaborate with relevant stakeholders to understand the context and contribute to resolution. Stay connected with local or operational teams to understand business priorities and team-specific realities. Help new joiners integrate into the team from a cultural and human perspective. Serve as a bridge between individuals and leadership, including in-country teams, regarding people-related matters. Education: Bachelor of Computer Science or equivalent.
Technical Stack: Competencies - Facilitation Level 3 - Working Experience Tech Stack (subject to role) - Development Frameworks and Languages: .Net and up, Java, Microservices, UI/ReactJS, Databricks & Synapse, AS 400 Data Management: Database (SQL Server), APIs (APIC, APIM), REST API Development & Delivery Methods: Agile (Scaled Agile Framework) DevOps and CI/CD: Containers (Azure Kubernetes), CI/CD (Azure DevOps, Git, SonarQube), Scheduling Tools (Azure Scheduler) Development Tools & Platforms: IDE (GitHub Copilot, VSCODE), Cloud Platform (Microsoft Azure) Security and Monitoring: Secure Coding (Veracode), Authentication/Authorization (CA SiteMinder, MS Entra, PingOne), Log & Monitoring (Azure AppInsights, Elastic) Writing and executing automated tests using Java and JavaScript, Selenium, and a test automation framework. Other Critical Requirements: Proficiency in multiple programming languages and frameworks. Strong problem-solving skills. Experience with agile methodologies and continuous integration/continuous deployment (CI/CD). Ability to work in a team and communicate effectively. Exposure to conflict resolution, mediation, or active listening practices. Understanding of psychological safety and team health dynamics. Familiarity with diversity, equity, and inclusion principles. Experience working across functions or cultures is a plus. Prior experience in mentoring, team facilitation, coaching, or peer support roles is a plus. Soft Skills: Excellent problem-solving, communication, and stakeholder management skills. Ability to balance technical innovation with business value delivery. Business acumen: a level-headed, clear communicator able to gain a detailed understanding of organizational business requirements and business dynamics. Self-motivated and able to work independently. Attention to detail. Collaborative team player. Decisive. Supportive. Passionate. Professional. Accountable.

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

hyderabad, telangana

On-site

As a member of the Providence team, you will be part of one of the largest not-for-profit healthcare systems in the US, dedicated to providing high-quality and compassionate healthcare to all individuals. Our vision, "Health for a better world," drives us to ensure that health is a fundamental human right. With a network of 51 hospitals, 1,000+ care clinics, senior services, supportive housing, and various health and educational services, we aim to offer affordable quality care and services to everyone. Providence India is spearheading a transformative shift in the healthcare ecosystem towards Health 2.0. With a focus on healthcare technology and innovation, our India center will play a crucial role in driving the digital transformation of health systems to enhance patient outcomes, caregiver efficiency, and the overall business operations of Providence on a large scale. Joining our Technology Engineering and Ops (TEO) team, you will contribute to the foundational infrastructure that enables our caregivers, patients, physicians, and community technology partners to fulfill our mission. Your role will involve working on Azure Services such as Azure Data Factory, Azure Databricks, and Log Analytics to integrate structured and unstructured data from multiple systems, develop reporting solutions for TEO platforms, build visualizations with enterprise-level datasets, and collaborate with data engineers and service line owners to manage report curation and data modeling. In addition to your responsibilities, you will contribute to the creation of a centralized enterprise warehouse for all infrastructure and networking data, implement API Governance and Standards for Power BI Reporting, and build SharePoint intake forms with power automate and bi-directional ADO integration. 
You will work closely with service teams to determine project engagement and priority, create scripts to support enhancements, collaborate with external vendors on API integrations, and ensure data analysis and integrity with the PBI DWH and Reporting teams. To excel in this role, you should have a minimum of 2+ years of experience in BI Development and Data Engineering, along with 1+ years of experience in Azure cloud technologies. Proficiency in Power BI, Azure Data Factory, Azure Databricks, Synapse, complex SQL code writing, cloud-native deployment with CI/CD pipelines, troubleshooting skills, effective communication, and a Bachelor's Degree in Computer Science, Business Management, or IS are essential. If you thrive in an Agile environment, possess excellent interpersonal skills, and are ready to contribute to the future of healthcare, we encourage you to apply and be a part of our mission towards "Health for a better world."

Posted 1 week ago

Apply

8.0 - 10.0 years

8 - 14 Lacs

Bengaluru, Karnataka, India

On-site

General Skills: Good interpersonal skills and the ability to manage multiple tasks with enthusiasm. Interact with clients to understand the requirements. 8 to 10 years of total IT experience, with a minimum of 3+ years in Power BI.
Technical Skills: Understand business requirements in the MSBI context and design data models to transform raw data into meaningful insights. Awareness of star and snowflake schemas in DWH, and of DWH concepts. Should be familiar and experienced in T-SQL. Good knowledge of and experience in SSIS (ETL). Creation of dashboards and visual interactive reports using Power BI. Extensive experience in both Power BI Service and Power BI On-Premises environments. Create relationships between data sources and develop data models accordingly. Experience in Tabular model implementation and row-level data security. Experience in writing and optimizing DAX queries. Experience in Power Automate flows. Performance tuning and optimization of Power BI reports. Good understanding of data warehouse concepts. Knowledge of Microsoft Azure analytics is a plus.
Good to have: Azure BI skills (ADF, ADB, Synapse). Good UI/UX experience/knowledge. Knowledge of Tabular Models. Knowledge of Paginated Reports.
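The star-schema modelling mentioned above reduces to joining a central fact table to its dimension tables and aggregating measures. A minimal sketch using Python's built-in sqlite3 (the table and column names are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    -- Dimension table: one row per product
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, name TEXT);
    -- Fact table: one row per sale, keyed to the dimension
    CREATE TABLE fact_sales (product_id INTEGER, amount REAL);
    INSERT INTO dim_product VALUES (1, 'Laptop'), (2, 'Monitor');
    INSERT INTO fact_sales VALUES (1, 1200.0), (1, 800.0), (2, 300.0);
""")
# Aggregate facts by a dimension attribute -- the same shape a DAX
# measure like SUM(fact_sales[amount]) sliced by product name would take.
totals = dict(conn.execute("""
    SELECT p.name, SUM(f.amount)
    FROM fact_sales f
    JOIN dim_product p ON p.product_id = f.product_id
    GROUP BY p.name
""").fetchall())
print(totals)  # {'Laptop': 2000.0, 'Monitor': 300.0}
```

In a Tabular model the relationship between fact and dimension replaces the explicit JOIN, but the underlying fact/dimension separation is the same.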

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

bihar

On-site

Microsoft Silicon, Cloud Hardware, and Infrastructure Engineering (SCHIE) is the team responsible for powering Microsoft's expanding Cloud Infrastructure and driving the Intelligent Cloud mission. SCHIE plays a crucial role in delivering core infrastructure and foundational technologies for over 200 online businesses globally, including Bing, MSN, Office 365, Xbox Live, Teams, OneDrive, and the Microsoft Azure platform. As part of our team, you will contribute to server and data center infrastructure, security and compliance, operations, globalization, and manageability solutions. We are committed to smart growth, high efficiency, and providing a trusted experience to our customers and partners worldwide. We are seeking passionate and high-energy engineers to join us on this mission. We are currently looking for a motivated software engineer with a strong interest in cloud-scale distributed systems to work on building and maintaining cloud services and software stacks. The primary focus will be on monitoring and managing cloud hardware, ensuring the health, performance, and availability of the cloud infrastructure. This role offers the opportunity to be part of a dynamic team at the forefront of innovation within Microsoft, contributing to the development of cutting-edge hardware solutions that power Azure and enhance our cloud infrastructure. Responsibilities: - Design, develop, and operate large-scale, low-latency, high-throughput cloud services. - Monitor, diagnose, and repair service health and performance in production environments. - Conduct A/B testing and analysis, establish baseline metrics, set incremental targets, and validate against those targets continuously. - Utilize AI/copilot tooling for development and operational efficiency, driving improvements to meet individual and team-level goals. - Perform data analysis using various analytical tools and interpret results to provide actionable recommendations. 
- Define and measure the impact of requested analytics and reporting features through quantitative measures. - Collaborate with internal peer teams and external partners to ensure highly available, secure, accurate, and actionable results based on hardware health signals, policies, and predictive analytics. Qualifications: - Academic qualifications: B.S., M.S., or Ph.D. in Computer Science or Electrical Engineering with at least 1 year of development experience. - Proficiency in programming languages such as C# or other object-oriented languages, C, Python, and scripting languages. - Strong understanding of Computer Science fundamentals including algorithms, data structures, object-oriented design, multi-threading, and distributed systems. - Excellent problem-solving and design skills with a focus on quality, performance, and service excellence. - Effective communication skills for collaboration and customer/partner correspondence. - Experience with Azure services and database query languages like SQL/Kusto is desired but optional. - Familiarity with AI copilot tooling and basic knowledge of LLM models and RAG is desired but optional. Join us in shaping the future of cloud infrastructure and be part of an exciting and innovative team at Microsoft SCHIE. #azurehwjobs #CHIE #HHS
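Monitoring hardware health, as described above, often comes down to aggregating per-node health signals into availability metrics and flagging nodes below a target. A hedged stdlib-only sketch — the signal format, node names, and SLO threshold are all invented for illustration:

```python
from statistics import mean

def availability(health_signals: list[int]) -> float:
    """Fraction of samples in which the node reported healthy (1) vs. unhealthy (0)."""
    return sum(health_signals) / len(health_signals)

def fleet_summary(fleet: dict[str, list[int]], slo: float = 0.99) -> dict:
    """Summarise per-node availability and flag nodes below the SLO."""
    avail = {node: availability(signals) for node, signals in fleet.items()}
    return {
        "per_node": avail,
        "fleet_mean": mean(avail.values()),
        "below_slo": sorted(node for node, a in avail.items() if a < slo),
    }

fleet = {
    "node-a": [1] * 100,           # fully healthy
    "node-b": [1] * 95 + [0] * 5,  # 95% available
}
summary = fleet_summary(fleet)
print(summary["below_slo"])  # ['node-b']
```

At cloud scale the same aggregation would typically run as a Kusto query over telemetry tables rather than in-process Python, but the metric definition is identical.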

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

chennai, tamil nadu

On-site

You are an experienced Data Engineering Manager responsible for leading a team of 10+ engineers in Chennai, Tamil Nadu, India. Your primary role is to build scalable data marts and Power BI dashboards to measure marketing campaign performance. Your deep expertise in Azure, Microsoft Fabric, and Power BI, combined with strong leadership skills, enables you to drive data initiatives that facilitate data-driven decision-making for the marketing team. Your key responsibilities include managing and mentoring the data engineering and BI developer team, overseeing the design and implementation of scalable data marts and pipelines, and leading the development of insightful Power BI dashboards. You collaborate closely with marketing and business stakeholders to gather requirements, align on metrics, and deliver actionable insights. Additionally, you lead project planning, prioritize analytics projects, and ensure timely and high-impact outcomes using Agile methodologies. You are accountable for ensuring data accuracy, lineage, and compliance through robust validation, monitoring, and governance practices. You promote the adoption of modern Azure/Microsoft Fabric capabilities and industry best practices in data engineering and BI. Cost and resource management are also part of your responsibilities, where you optimize infrastructure and licensing costs, as well as manage external vendors or contractors if needed. Your expertise in Microsoft Fabric, Power BI, Azure (Data Lake, Synapse, Data Factory, Azure Functions), data modeling, data pipeline development, SQL, and marketing analytics is crucial for success in this role. Proficiency in Agile project management, data governance, data quality monitoring, Git, stakeholder management, and performance optimization is also required. Your role involves leading a team that focuses on developing scalable data infrastructure and analytics solutions to empower the marketing team with campaign performance measurement and optimization. 
This permanent position requires 9 to 12 years of experience in the Data Engineering domain. If you are passionate about driving data initiatives, leading a team of engineers, and collaborating with stakeholders to deliver impactful analytics solutions, this role offers an exciting opportunity to make a significant impact in the marketing analytics space at the Chennai location.

Posted 1 week ago

Apply

8.0 - 12.0 years

0 Lacs

haryana

On-site

You will be responsible for developing applications using various Microsoft and web development technologies such as ASP.Net, C#, MVC, Web Forms, Angular, SQL Server, T-SQL, and Microservices. Your expertise in big data technologies like Hadoop, Spark, Hive, Python, Databricks, etc. will be crucial for this role. With a Bachelor's Degree in Computer Science or equivalent experience through higher education, you should have at least 8 years of experience in Data Engineering and/or Software Engineering. Your strong coding skills, along with knowledge of infrastructure as code and automating production data and ML pipelines, will be highly valued. You should be proficient in on-prem to cloud migration, particularly in Azure, and have hands-on experience with Azure PaaS offerings such as Synapse, ADLS, Databricks, Event Hubs, CosmosDB, Azure ML, etc. Experience in building, governing, and scaling data warehouses/lakes/lakehouses is essential for this role. Your expertise in developing and tuning stored procedures and T-SQL scripting in SQL Server, along with familiarity with various .Net development tools and products, will contribute significantly to the success of the projects. You should be adept with the agile software development lifecycle and DevOps principles to ensure efficient project delivery.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

karnataka

On-site

As a Developer (Japanese-speaking), you will be responsible for supporting projects in the Japan region by utilizing your technical expertise and Japanese language proficiency. This hands-on role will primarily focus on backend development, data engineering, and cloud technologies. Candidates with prior experience working in Japan or those looking to relocate from Japan are highly desired. Your key responsibilities will include designing and developing ETL/ELT pipelines using Azure or equivalent cloud platforms, collaborating with Japanese-speaking stakeholders and internal teams, working with Azure Data Factory, Synapse, Data Lake, and Power BI for data integration and reporting, as well as participating in technical discussions, requirement gathering, and solution design. You will also be expected to ensure timely delivery of project milestones while upholding code quality and documentation standards. To excel in this role, you should possess 3-5 years of experience in data engineering or backend development, proficiency in SQL, Python, and ETL/ELT processes, hands-on experience with Azure Data Factory, Synapse, Data Lake, and Power BI, a strong understanding of cloud architecture (preferably Azure, but AWS/GCP are acceptable), and at least JLPT N3 certification with a preference for N2 or N1 to effectively communicate in Japanese. A Bachelor's degree in Computer Science, Engineering, or a related field is also required. Preferred candidates for this position include individuals who have worked in Japan for at least 2 years and are now relocating to India, or those currently based in Japan and planning to relocate within a month. While Bangalore is the preferred location, Kochi is also acceptable for relocation. Candidates from other regions, such as Noida or Gurgaon, will not be considered unless relocation to the specified locations is confirmed. 
The interview process will consist of a technical evaluation conducted by Suresh Varghese in the first round, followed by a Japanese language proficiency assessment. At least one round of the interview must be conducted face-to-face for shortlisted candidates to assess their suitability for the role.

Posted 1 week ago

Apply