12.0 - 16.0 years
0 Lacs
karnataka
On-site
The Analytics Lead role within the Enterprise Data team requires an expert Power BI lead with deep data visualization experience and strong proficiency in DAX, SQL, and data modeling techniques. This position offers a unique opportunity to contribute to cutting-edge business analytics using advanced BI tools such as cloud-based databases and self-service analytics, aligning with the company's vision of digital transformation.

Responsibilities:
- Lead and oversee a team of Power BI Developers, providing guidance and support in their daily tasks.
- Design data visualization models and solutions within the Microsoft Azure ecosystem, including Power BI, Azure Synapse Analytics, MSFT Fabric, and Azure Machine Learning.
- Develop strategies for analytics, reporting, and governance to ensure scalability, reliability, and security.
- Collaborate with business stakeholders to define analytics and reporting strategies.
- Ensure solutions are aligned with organizational objectives, compliance requirements, and technological advancements.
- Act as a subject matter expert in Analytics services, mentoring senior and junior Power BI Developers.
- Evaluate emerging technologies and analytical capabilities.
- Provide guidance on cost optimization, performance tuning, and best practices in Azure cloud environments.

Stakeholder Collaboration:
- Work closely with business stakeholders, product managers, and data scientists to understand business goals and translate them into technical solutions.
- Collaborate with DevOps, engineering, and operations teams to implement CI/CD pipelines for smooth deployment of analytical solutions.

Governance and Security:
- Define and implement policies for data governance, quality, and security, ensuring compliance with relevant standards such as GDPR and HIPAA.
- Optimize solutions for data privacy, resilience, and disaster recovery.

Qualifications - Required Skills and Experience:
- Proficiency in Power BI and related technologies, including MSFT Fabric, Azure SQL Database, Azure Synapse, Databricks, and other visualization tools.
- Hands-on experience with Power BI, machine learning, and AI services in Azure.
- Strong data visualization skills and experience.
- 12+ years of Power BI development experience, with a track record of designing high-quality models and dashboards.
- 8+ years of experience using Power BI Desktop, DAX, Tabular Editor, and related tools.
- Comprehensive understanding of data modeling, administration, and visualization.
- Excellent leadership and communication skills.
- Relevant certifications in Power BI, machine learning, AI, or enterprise architecture preferred.

Key Competencies:
- Expertise in data visualization tools like Power BI or Tableau.
- Ability to create semantic models for reporting.
- Familiarity with Microsoft Fabric technologies.
- Strong understanding of data governance, compliance, and security frameworks.
- Experience with DevOps and Infrastructure as Code tools.
- Proven ability to drive innovation in data strategy and cloud solutions.
- In-depth knowledge of business intelligence workflows and database design.
- Experience in cloud-based data integration tools and agile development techniques.

Location: DGS India - Mumbai - Thane Ashar IT Park
Brand: Dentsu
Time Type: Full time
Contract Type: Permanent
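For the CI/CD and automation responsibilities mentioned in this posting, deployments of analytical solutions often end with programmatically refreshing a published dataset. The snippet below is a minimal, hypothetical sketch (not part of the posting) that triggers a Power BI dataset refresh through the Power BI REST API; the workspace ID, dataset ID, and the pre-acquired Azure AD access token are all placeholders.

```python
import requests

# Hypothetical identifiers; in practice these come from your deployment pipeline.
WORKSPACE_ID = "<workspace-guid>"
DATASET_ID = "<dataset-guid>"
ACCESS_TOKEN = "<azure-ad-token>"  # acquired via MSAL or a service principal

def trigger_refresh() -> None:
    """Ask the Power BI service to refresh a published dataset."""
    url = (
        f"https://api.powerbi.com/v1.0/myorg/groups/{WORKSPACE_ID}"
        f"/datasets/{DATASET_ID}/refreshes"
    )
    resp = requests.post(
        url,
        headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        json={"notifyOption": "MailOnFailure"},
    )
    # A 202 Accepted response means the refresh was queued.
    resp.raise_for_status()
    print("Refresh queued:", resp.status_code)

if __name__ == "__main__":
    trigger_refresh()
```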
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
Join us as a Principal Engineer. This challenging role will involve designing and engineering software with a primary focus on customer or user experience. You will actively contribute to our architecture, design, and engineering center of excellence, collaborating to enhance the bank's software engineering capability. This role, offered at the vice president level, provides valuable stakeholder exposure, the opportunity to build and leverage relationships, and a chance to refine your technical skills.

As a Principal Engineer, your responsibilities will include creating exceptional customer outcomes through innovative engineering solutions for both existing and new challenges. You will work with software engineers to produce and prototype innovative ideas, collaborate with domain and enterprise architects to validate and incorporate relevant architectures, and lead functional engineering teams. Your role will involve managing end-to-end product implementations, driving demos and stakeholder engagement across platforms, and focusing on automating build, test, and deployment activities. Additionally, you will play a key part in developing the discipline of software engineering within the organization.

You will also be responsible for defining, creating, and overseeing engineering and design solutions with a strong emphasis on end-to-end automation, simplification, resilience, security, performance, scalability, and reusability. Working within a platform or feature team, you will collaborate with software engineers to design and engineer complex software, scripts, and tools that enable the delivery of bank platforms, applications, and services. Your role will involve defining and developing architecture models and roadmaps for application and software components, ensuring they meet business and technical requirements, and driving consistent usability across products and domains. You will design, test, and implement working code while applying Agile methods and DevOps techniques to software development.

The skills required for this role include significant experience in software engineering, software or database design and architecture, and working within a DevOps and Agile framework. You should possess an expert understanding of the latest market trends, technologies, and tools, along with demonstrable experience in implementing programming best practices, particularly related to scalability, automation, virtualization, optimization, availability, and performance. Additionally, you should have strong experience in gathering business requirements, translating them into technical user stories, and leading functional solution design, especially within the banking domain and CRM (MS Dynamics). Proficiency in PowerApps, D365 (including Custom Pages), and frontend configuration, as well as familiarity with Power BI (SQL, DAX, Power Query, Data Modeling, RLS, Azure, Lakehouse, Python, Spark SQL), is required. A background in designing or implementing APIs and the ability to quickly understand and translate product and business requirements into technical solutions are also essential for this role.
Posted 2 days ago
4.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Power BI + Microsoft Fabric Lead with over 10 years of experience, you will play a key role in leading the strategy and architecture for BI initiatives. Your responsibilities will include designing and delivering end-to-end Power BI and Microsoft Fabric solutions, collaborating with stakeholders to define data and reporting goals, and driving the adoption of best practices and performance optimization. Your expertise in Power BI, including DAX, Power Query, and advanced visualizations, will be essential for the success of high-impact BI initiatives.

As a Power BI + Microsoft Fabric Developer with 4+ years of experience, you will be responsible for developing dashboards and interactive reports using Power BI, building robust data models, and implementing Microsoft Fabric components such as Lakehouse, OneLake, and Pipelines. Working closely with cross-functional teams, you will gather and refine requirements to ensure high performance and data accuracy across reporting solutions. Your hands-on experience with Microsoft Fabric tools such as Data Factory, OneLake, Lakehouse, and Pipelines will be crucial for delivering effective data solutions.

Key Skills Required:
- Strong expertise in Power BI (DAX, Power Query, advanced visualizations)
- Hands-on experience with Microsoft Fabric (Data Factory, OneLake, Lakehouse, Pipelines)
- Solid understanding of data modeling, ETL, and performance tuning
- Ability to collaborate effectively with business and technical teams

Joining our team will provide you with the opportunity to work with cutting-edge Microsoft technologies, lead high-impact BI initiatives, and thrive in a collaborative and innovation-driven environment. We offer a competitive salary and benefits package to reward your expertise and contributions. If you are passionate about leveraging Power BI and Microsoft Fabric tools to drive data-driven insights and solutions, we invite you to apply for this full-time position.

Application Questions:
- What is your current and expected CTC?
- What is your notice period? If you are serving your notice period, what is your Last Working Day (LWD)?

Experience Required:
- Power BI: 4 years (Required)
- Microsoft Fabric: 4 years (Required)

Work Location: In person
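As an illustration of the Fabric Lakehouse work described above (not code from the posting itself), the sketch below shows the kind of PySpark cell a developer might run in a Fabric notebook with a default lakehouse attached: it cleans a raw file and saves it as a Delta table that a Power BI semantic model can then consume. File paths, table names, and columns are hypothetical.

```python
from pyspark.sql import functions as F

# `spark` is pre-created in a Fabric (or Databricks) notebook session.
# Hypothetical raw file landed in the lakehouse Files area.
raw = spark.read.option("header", True).csv("Files/raw/sales_export.csv")

clean = (
    raw.withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount").isNotNull())
)

# Saving as a managed Delta table makes it visible to downstream
# Power BI / semantic-model consumers.
clean.write.format("delta").mode("overwrite").saveAsTable("sales_orders_clean")
```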
Posted 5 days ago
8.0 - 13.0 years
18 - 22 Lacs
Hyderabad, Bengaluru
Work from Office
To Apply - Mandatory to submit details via Google Form - https://forms.gle/cCa1WfCcidgiSTgh8

Position: Senior Data Engineer - Total 8+ years required; relevant 6+ years in Databricks, AWS, Apache Spark, and Informatica (required skills).

As a Senior Data Engineer in our team, you'll build and nurture positive working relationships with teams and clients with the intention to exceed client expectations. We are seeking an experienced Data Engineer to design, implement, and maintain robust data pipelines and analytics solutions using Databricks and AWS services. The ideal candidate will have a strong background in data services, big data technologies, and programming languages.

Role & responsibilities:
- Technical Leadership: Guide and mentor teams in designing and implementing Databricks solutions.
- Architecture & Design: Develop scalable data pipelines and architectures using Databricks Lakehouse.
- Data Engineering: Lead the ingestion and transformation of batch and streaming data.
- Performance Optimization: Ensure efficient resource utilization and troubleshoot performance bottlenecks.
- Security & Compliance: Implement best practices for data governance, access control, and compliance.
- Collaboration: Work closely with data engineers, analysts, and business stakeholders.
- Cloud Integration: Manage Databricks environments on Azure, AWS, or GCP.
- Monitoring & Automation: Set up monitoring tools and automate workflows for efficiency.

Qualifications:
- 6+ years of experience in Databricks and AWS, and 4+ years in Apache Spark and Informatica.
- Excellent problem-solving and leadership skills.

Good to have these skills:
1. Design and implement scalable, high-performance data pipelines using AWS services
2. Develop and optimize ETL processes using AWS Glue, EMR, and Lambda
3. Build and maintain data lakes using S3 and Delta Lake
4. Create and manage analytics solutions using Amazon Athena and Redshift
5. Design and implement database solutions using Aurora, RDS, and DynamoDB
6. Develop serverless workflows using AWS Step Functions
7. Write efficient and maintainable code using Python/PySpark and SQL/PostgreSQL
8. Ensure data quality, security, and compliance with industry standards
9. Collaborate with data scientists and analysts to support their data needs
10. Optimize data architecture for performance and cost-efficiency
11. Troubleshoot and resolve data pipeline and infrastructure issues

Preferred candidate profile (good to have):
1. Bachelor's degree in Computer Science, Information Technology, or a related field
2. Relevant years of experience as a Data Engineer, with at least 60% of experience focusing on AWS
3. Strong proficiency in AWS data services: Glue, EMR, Lambda, Athena, Redshift, S3
4. Experience with data lake technologies, particularly Delta Lake
5. Expertise in database systems: Aurora, RDS, DynamoDB, PostgreSQL
6. Proficiency in Python and PySpark programming
7. Strong SQL skills and experience with PostgreSQL
8. Experience with AWS Step Functions for workflow orchestration

Technical Skills (good to have):
- AWS Services: Glue, EMR, Lambda, Athena, Redshift, S3, Aurora, RDS, DynamoDB, Step Functions
- Big Data: Hadoop, Spark, Delta Lake
- Programming: Python, PySpark
- Databases: SQL, PostgreSQL, NoSQL
- Data Warehousing and Analytics
- ETL/ELT processes
- Data Lake architectures
- Version control: Git
- Agile methodologies
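To make the Glue/S3/PySpark points above more concrete, here is a minimal, hypothetical AWS Glue (PySpark) job skeleton, a sketch only, not the client's actual pipeline: it reads raw JSON from S3, applies a simple transformation, and writes partitioned Parquet back to S3. Bucket names and columns are placeholders.

```python
import sys
from awsglue.context import GlueContext
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from pyspark.sql import functions as F

# Standard Glue job bootstrap.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
spark = glue_context.spark_session
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Hypothetical locations.
raw = spark.read.json("s3://example-raw-bucket/orders/")

curated = (
    raw.filter(F.col("status") == "COMPLETED")
       .withColumn("order_date", F.to_date("order_ts"))
)

(curated.write
        .mode("overwrite")
        .partitionBy("order_date")
        .parquet("s3://example-curated-bucket/orders/"))

job.commit()
```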
Posted 1 week ago
2.0 - 5.0 years
8 - 15 Lacs
Gurugram
Remote
Job Description: We are looking for a talented and driven MS Fabric Developer / Data Analytics Engineer with expertise in the Microsoft Fabric ecosystem, data transformation, and analytics. The ideal candidate will be responsible for designing, developing, and optimizing data pipelines, working with real-time analytics, and implementing best practices in data modeling and reporting.

Key Responsibilities:
- Work with MS Fabric components, including: Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics
- Develop and maintain data transformation scripts using: Power Query, T-SQL, Python
- Build scalable and efficient data models and pipelines for analytics and reporting
- Collaborate with BI teams and business stakeholders to deliver data-driven insights
- Implement best practices for data governance, performance tuning, and storage optimization
- Support real-time and near real-time data streaming and transformation tasks

Required Skills:
- Hands-on experience with MS Fabric and associated data services
- Strong command of Power Query, T-SQL, and Python for data transformations
- Experience working in modern data lakehouse and real-time analytics environments

Good to Have:
- DevOps knowledge for automating deployments and managing environments
- Familiarity with Azure services and cloud data architecture
- Understanding of CI/CD pipelines for data projects
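Since the posting calls for Python transformation scripts alongside Power Query and T-SQL, the following is a small illustrative pandas sketch (purely an example, with made-up file and column names) of the cleaning and typing work such a script typically performs before data is modeled for reporting.

```python
import pandas as pd

def clean_orders(path: str) -> pd.DataFrame:
    """Load a raw extract, normalise types, and drop obvious bad rows."""
    df = pd.read_csv(path)

    # Normalise column names and types.
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df["amount"] = pd.to_numeric(df["amount"], errors="coerce")

    # Basic quality rules: drop rows missing keys or with non-positive amounts.
    df = df.dropna(subset=["order_id", "order_date"])
    df = df[df["amount"] > 0].drop_duplicates(subset=["order_id"])
    return df

if __name__ == "__main__":
    cleaned = clean_orders("raw_orders.csv")  # hypothetical input file
    cleaned.to_parquet("orders_clean.parquet", index=False)
```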
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your role will involve working on both batch and real-time ingestion and transformation, integrating with Azure Data Factory for smooth data flow, and collaborating with data architects to implement governed Lakehouse models in Microsoft Fabric.

You will be expected to monitor and optimize the performance of data pipelines and notebooks in Microsoft Fabric, applying tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery. Collaboration with cross-functional teams, including BI developers, analysts, and data scientists, is essential to gather requirements and build high-quality datasets. Additionally, you will need to document pipeline logic, lakehouse architecture, and semantic layers clearly, following development standards and contributing to internal best practices for Microsoft Fabric-based solutions.

To excel in this role, you should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric, Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required, along with a strong command of SQL, PySpark, and Python applied to data integration and analytical workloads. Experience in optimizing pipelines and managing compute resources for cost-effective data processing in Azure/Fabric is also crucial.

Preferred skills for this role include experience in the Microsoft Fabric ecosystem, familiarity with OneLake, Delta Lake, and Lakehouse principles, expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks, and understanding of Microsoft Purview, Unity Catalog, or Fabric-native tools for metadata, lineage, and access control. Exposure to DevOps practices for Fabric and Power BI, as well as knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be considered a plus.

If you are passionate about developing efficient data solutions in a collaborative environment and have a strong background in data engineering within the Azure ecosystem, this role as a Senior Data Engineer at Srijan Technologies PVT LTD could be the perfect fit for you. Apply now to be a part of a dynamic team driving innovation in data architecture and analytics.
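For the monitoring and tuning responsibilities mentioned above, a common optimization in Fabric or Databricks notebooks is broadcasting a small dimension in a join and writing the result partitioned by a date column. The sketch below is a generic, hypothetical example of that pattern, not code from this role; table and column names are invented.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()  # already provided in Fabric/Databricks notebooks

fact = spark.read.table("lakehouse_bronze.page_views")    # large fact table (hypothetical)
dim = spark.read.table("lakehouse_bronze.dim_customers")  # small dimension (hypothetical)

# Broadcasting the small side avoids a shuffle-heavy sort-merge join.
enriched = fact.join(F.broadcast(dim), on="customer_id", how="left")

# Partitioning the output by date keeps downstream reads selective and cheap.
(enriched.write
         .format("delta")
         .mode("overwrite")
         .partitionBy("view_date")
         .saveAsTable("lakehouse_silver.page_views_enriched"))
```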
Posted 1 week ago
6.0 - 11.0 years
8 - 12 Lacs
Chennai
Work from Office
Skills: Azure/AWS, Synapse, Fabric, PySpark, Databricks, ADF, Medallion Architecture, Lakehouse, Data Warehousing. Experience: 6+ years. Locations: Chennai, Bangalore, Pune, Coimbatore. Work from Office.
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
haryana
On-site
As a Senior Data Engineer (Azure MS Fabric) at Srijan Technologies PVT LTD, located in Gurugram, Haryana, India, you will be responsible for designing and developing scalable data pipelines using Microsoft Fabric. Your primary focus will be on developing and optimizing data pipelines, including Fabric Notebooks, Dataflows Gen2, and Lakehouse architecture, for both batch and real-time ingestion and transformation. You will collaborate with data architects and engineers to implement governed Lakehouse models in Microsoft Fabric, ensuring data solutions are performant, reusable, and aligned with business needs and compliance standards.

Monitoring and improving the performance of data pipelines and notebooks in Microsoft Fabric will be a key aspect of your role. You will apply tuning strategies to reduce costs, improve scalability, and ensure reliable data delivery across domains. Working closely with BI developers, analysts, and data scientists, you will gather requirements and build high-quality datasets to support self-service BI initiatives. Additionally, documenting pipeline logic, lakehouse architecture, and semantic layers clearly will be essential.

Your experience with Lakehouses, Notebooks, Data Pipelines, and Direct Lake in Microsoft Fabric will be crucial in delivering reliable, secure, and efficient data solutions that integrate with Power BI, Azure Synapse, and other Microsoft services. You should have at least 5 years of experience in data engineering within the Azure ecosystem, with hands-on experience in Microsoft Fabric components such as Lakehouse, Dataflows Gen2, and Data Pipelines. Proficiency in building and orchestrating pipelines with Azure Data Factory and/or Microsoft Fabric Dataflows Gen2 is required. A strong command of SQL, PySpark, and Python, and experience in optimizing pipelines for cost-effective data processing in Azure/Fabric, are necessary.

Preferred skills include experience in the Microsoft Fabric ecosystem, familiarity with OneLake, Delta Lake, and Lakehouse principles, expert knowledge of PySpark, strong SQL, and Python scripting within Microsoft Fabric or Databricks notebooks, as well as an understanding of Microsoft Purview or Unity Catalog. Exposure to DevOps practices for Fabric and Power BI, and knowledge of Azure Databricks for Spark-based transformations and Delta Lake pipelines, would be advantageous.
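One pattern behind the "reliable data delivery" and Direct Lake integration described above is an incremental upsert into a Delta table so Power BI always reads a consistent, current copy. Below is a hedged sketch of a Delta Lake MERGE in PySpark, assuming a Fabric or Databricks runtime where the delta Python package is available; table and column names are invented.

```python
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # pre-created in Fabric/Databricks notebooks

# Hypothetical incremental batch produced earlier in the pipeline.
updates = spark.read.table("staging.customer_updates")

target = DeltaTable.forName(spark, "silver.customers")

# Upsert: update existing customer rows, insert new ones.
(target.alias("t")
       .merge(updates.alias("s"), "t.customer_id = s.customer_id")
       .whenMatchedUpdateAll()
       .whenNotMatchedInsertAll()
       .execute())
```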
Posted 1 week ago
4.0 - 6.0 years
12 - 16 Lacs
Bangalore Rural, Bengaluru
Work from Office
Data Engineer (Microsoft Fabric & Lakehouse): PySpark, Data Lakehouse architectures, cloud platforms (Azure, AWS), on-prem databases, SaaS platforms (Salesforce, Workday), REST/OpenAPI-based APIs, data governance, lineage, RBAC principles, SQL.
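To illustrate the REST/OpenAPI ingestion from SaaS platforms listed above, here is a small, hypothetical Python sketch that pages through a generic REST endpoint and lands the raw JSON for later processing. The URL, auth header, and pagination scheme are placeholders; real SaaS APIs such as Salesforce or Workday each have their own specifics.

```python
import json
import requests

BASE_URL = "https://api.example-saas.com/v1/workers"  # hypothetical endpoint
TOKEN = "<oauth-access-token>"                         # placeholder credential

def fetch_all(page_size: int = 200) -> list:
    """Page through the endpoint until no records remain."""
    records, page = [], 1
    while True:
        resp = requests.get(
            BASE_URL,
            headers={"Authorization": f"Bearer {TOKEN}"},
            params={"page": page, "per_page": page_size},
            timeout=30,
        )
        resp.raise_for_status()
        batch = resp.json().get("data", [])
        if not batch:
            break
        records.extend(batch)
        page += 1
    return records

if __name__ == "__main__":
    rows = fetch_all()
    # Land the raw payload; a downstream PySpark job would pick this up.
    with open("workers_raw.json", "w") as f:
        json.dump(rows, f)
```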
Posted 2 weeks ago
5.0 - 15.0 years
0 Lacs
maharashtra
On-site
At Derevo we empower companies and people, unlocking the value of data within organizations. With more than 15 years of experience, we design end-to-end data and AI solutions, from integration into modern architectures to the implementation of intelligent models in key business processes. We are looking for your talent as a Data Engineer (MS Fabric)! It is important that you live in Mexico or Colombia.

As a Data Engineer at Derevo, your mission will be key to creating and implementing modern, high-quality data architectures, driving analytical solutions based on Big Data technologies. You will design, maintain, and optimize parallel multiprocessing systems, applying best practices for storage and management in data warehouses, data lakes, and lakehouses. You will be the person who collects, processes, cleans, and orchestrates large volumes of data, understanding structured and semi-structured models, to integrate and transform multiple sources effectively. You will define the optimal strategy according to business objectives and technical requirements, turning complex problems into achievable solutions that help our clients make data-driven decisions.

You will join the project and its sprints and carry out development activities, always applying data best practices and the technologies we implement. You will identify requirements and define scope, participating in sprint planning and engineering sessions with a consultant's mindset that adds extra value. You will collaborate proactively in workshops and meetings with the internal team and the client. You will classify and estimate activities under agile methodologies (epics, features, technical/user stories) and follow up daily to keep the sprint on track. You will meet committed delivery dates and manage risks by communicating deviations in time.

To join Derevo as a Data Engineer, you need an advanced command of English (technical and business conversations, B2+ or C1) and technical skills in:
- Query and programming languages: T-SQL / Spark SQL, Python (PySpark), JSON / REST APIs, Microsoft Fabric.

It is also important that you identify with soft and business skills such as close communication, working in squads, proactivity and collaboration, continuous learning, responsibility and organization, data consultancy, requirements management, client-aligned strategy, and client presentations.

Among the benefits you will have at Derevo are support for your overall well-being, the opportunity to specialize in different areas and technologies, freedom to create, participation in leading technology projects, and a flexible, structured remote work scheme. If you meet most of the requirements and the profile interests you, do not hesitate to apply to become a derevian and develop your superpower. Our Talent team will contact you!
Posted 2 weeks ago
5.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Remote
Build & manage infrastructure for data storage, processing & analysis. Experience in AWS Cloud Services (Glue, Lambda, Athena, Lakehouse) and AWS CDK for Infrastructure-as-Code (IaC) with TypeScript. Skills in Python, PySpark, Spark SQL, TypeScript.

Required Candidate Profile: 5 to 6 years of data pipeline development & orchestration using AWS Glue; leadership experience. UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.
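As a concrete illustration of the Glue/Athena/Lakehouse stack mentioned above (a generic sketch with placeholder names, not project code), the snippet below runs an Athena query with boto3 and polls until it finishes; results land in the S3 output location Athena requires.

```python
import time
import boto3

athena = boto3.client("athena", region_name="eu-west-2")  # hypothetical region

def run_query(sql: str, database: str, output_s3: str) -> str:
    """Start an Athena query and block until it succeeds, returning the execution id."""
    qid = athena.start_query_execution(
        QueryString=sql,
        QueryExecutionContext={"Database": database},
        ResultConfiguration={"OutputLocation": output_s3},
    )["QueryExecutionId"]

    while True:
        state = athena.get_query_execution(QueryExecutionId=qid)["QueryExecution"]["Status"]["State"]
        if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
            break
        time.sleep(2)

    if state != "SUCCEEDED":
        raise RuntimeError(f"Athena query ended in state {state}")
    return qid

if __name__ == "__main__":
    run_query(
        "SELECT event_date, count(*) AS events FROM raw_events GROUP BY event_date",
        database="analytics_db",                            # hypothetical database
        output_s3="s3://example-athena-results/queries/",   # hypothetical bucket
    )
```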
Posted 2 weeks ago
3.0 - 6.0 years
12 - 16 Lacs
Thiruvananthapuram
Work from Office
AWS Cloud Services (Glue, Lambda, Athena, Lakehouse); AWS CDK for Infrastructure-as-Code (IaC) with TypeScript; data pipeline development & orchestration using AWS Glue; strong programming skills in Python, PySpark, Spark SQL, TypeScript.

Required Candidate Profile: 3 to 5 years; client-facing and team leadership experience. Candidates have to work with UK clients; work timings will be aligned with the client's requirements and may follow UK time zones.
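A common orchestration pattern for the Glue/Lambda combination listed above is a small Lambda function that kicks off a Glue job when new data arrives. The following is a hedged, generic sketch; the job name and arguments are hypothetical, not the client's actual setup.

```python
import boto3

glue = boto3.client("glue")

def lambda_handler(event, context):
    """Triggered (e.g. by an S3 event) to start the downstream Glue ETL job."""
    run = glue.start_job_run(
        JobName="curate-orders-job",  # hypothetical Glue job
        Arguments={
            "--source_prefix": event.get("prefix", "raw/orders/"),
        },
    )
    return {"JobRunId": run["JobRunId"]}
```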
Posted 2 weeks ago
9.0 - 14.0 years
25 - 40 Lacs
Chennai
Work from Office
Role & responsibilities: We are seeking a Data Modeller with 12+ years of progressive experience in information technology, including a minimum of 4 years in data migration projects to the cloud (refactor, replatform, etc.) and 2 years of exposure to GCP.

Preferred candidate profile:
- In-depth knowledge of Data Warehousing/Lakehouse architectures, Master Data Management, Data Quality Management, Data Integration, and Data Warehouse architecture.
- Work with the business intelligence team to gather requirements for the database design and model.
- Understand the current on-premise DB model and refactor it to Google Cloud for better performance.
- Knowledge of ER modeling, big data, enterprise data, and physical data models; designs and implements data structures to support business processes and analytics, ensuring efficient data storage, retrieval, and management.
- Create a logical data model and validate it to ensure it meets the demands of the business application and its users.
- Experience in developing physical models for SQL, NoSQL, key-value pair, and document databases such as Oracle, BigQuery, Spanner, PostgreSQL, Firestore, MongoDB, etc.
- Understand the data needs of the company or client.
- Collaborate with the development team to design and build the database model for both application and data warehousing development.
- Classify the business needs and build both microservices and reporting database models.
- Strong hands-on experience in SQL and database procedures.
- Work with the development team to develop and implement a phase-wise migration plan and co-existence of on-prem and cloud DBs, and help determine and manage data cleaning requirements.
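To ground the point about refactoring an on-premise model to Google Cloud for better performance, here is a small, hypothetical sketch using the google-cloud-bigquery client to create a partitioned and clustered physical table from a logical order entity; the project, dataset, table, and column names are invented.

```python
from google.cloud import bigquery

client = bigquery.Client()  # uses application-default credentials

# Physical model tuned for analytics: partition by order date, cluster by customer.
ddl = """
CREATE TABLE IF NOT EXISTS `example_project.sales.orders` (
  order_id     STRING NOT NULL,
  customer_id  STRING NOT NULL,
  order_date   DATE   NOT NULL,
  amount       NUMERIC
)
PARTITION BY order_date
CLUSTER BY customer_id
"""

client.query(ddl).result()  # .result() waits for the DDL job to finish
print("Table created (or already present).")
```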
Posted 2 weeks ago
8.0 - 13.0 years
8 - 17 Lacs
Chennai
Remote
MS Fabric (Data Lake, OneLake, Lakehouse, Warehouse, Real-Time Analytics) and integration with Power BI, Synapse, and Azure Data Factory. DevOps knowledge. Team leading experience.
Posted 4 weeks ago
8.0 - 13.0 years
6 - 11 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities:
- Develop ingestion pipelines (batch & stream) to move data to S3.
- Convert HiveQL to SparkSQL/PySpark.
- Orchestrate workflows using MWAA (Airflow).
- Build and manage Iceberg tables with proper partitioning and metadata.
- Perform job validation and implement unit testing.

Required Skills:
- 3-5 years of data engineering experience, with strong AWS expertise.
- Proficient in EMR (Spark), S3, PySpark, and SQL.
- Familiar with Cloudera/HDFS and legacy Hadoop pipelines.
- Knowledge of data lake/lakehouse architectures is a plus.
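To make the Iceberg responsibility above concrete, here is a generic, hypothetical PySpark sketch that creates a partitioned Iceberg table and appends a batch to it. It assumes a Spark session already configured with the Iceberg runtime and a catalog named `glue_catalog`; the database, table, and columns are placeholders.

```python
from pyspark.sql import SparkSession

# Assumes the Iceberg Spark runtime jar and a catalog called `glue_catalog`
# are configured on the cluster (e.g. EMR with the AWS Glue Data Catalog).
spark = SparkSession.builder.appName("iceberg-ingest").getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS glue_catalog.analytics.events (
        event_id   STRING,
        user_id    STRING,
        event_ts   TIMESTAMP,
        payload    STRING
    )
    USING iceberg
    PARTITIONED BY (days(event_ts))
""")

# Hypothetical staged batch landed on S3 by an upstream ingestion job.
batch = spark.read.parquet("s3://example-staging/events/2024-06-01/")

# Appending through the table API keeps Iceberg metadata and snapshots consistent.
batch.writeTo("glue_catalog.analytics.events").append()
```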
Posted 1 month ago
8.0 - 13.0 years
6 - 11 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
- Develop ingestion pipelines (batch & stream) to move data to S3.
- Convert HiveQL to SparkSQL/PySpark.
- Orchestrate workflows using MWAA (Airflow).
- Build and manage Iceberg tables with proper partitioning and metadata.
- Perform job validation and implement unit testing.

Required Skills:
- 3-5 years of data engineering experience, with strong AWS expertise.
- Proficient in EMR (Spark), S3, PySpark, and SQL.
- Familiar with Cloudera/HDFS and legacy Hadoop pipelines.
- Knowledge of data lake/lakehouse architectures is a plus.
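For the MWAA (Airflow) orchestration responsibility above, the sketch below shows a minimal, hypothetical DAG that runs an ingestion step and then a validation step daily. Task logic is stubbed with placeholder Python callables; in MWAA you would typically call Glue, EMR, or Spark submit operators instead.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def ingest_to_s3(**_):
    # Placeholder: in practice, trigger a Glue job or EMR step via boto3/operators.
    print("ingesting batch to s3://example-lake/bronze/ ...")

def validate_load(**_):
    # Placeholder: row-count / schema checks against the freshly loaded partition.
    print("validating loaded partition ...")

with DAG(
    dag_id="daily_ingest_pipeline",  # hypothetical DAG name
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    ingest = PythonOperator(task_id="ingest_to_s3", python_callable=ingest_to_s3)
    validate = PythonOperator(task_id="validate_load", python_callable=validate_load)

    ingest >> validate
```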
Posted 1 month ago
8.0 - 13.0 years
6 - 11 Lacs
Pune, Maharashtra, India
On-site
Key Responsibilities:
- Develop ingestion pipelines (batch & stream) to move data to S3.
- Convert HiveQL to SparkSQL/PySpark.
- Orchestrate workflows using MWAA (Airflow).
- Build and manage Iceberg tables with proper partitioning and metadata.
- Perform job validation and implement unit testing.

Required Skills:
- 3-5 years of data engineering experience, with strong AWS expertise.
- Proficient in EMR (Spark), S3, PySpark, and SQL.
- Familiar with Cloudera/HDFS and legacy Hadoop pipelines.
- Knowledge of data lake/lakehouse architectures is a plus.
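For the "job validation and unit testing" item above, here is a small, hypothetical pytest example that spins up a local SparkSession and checks a simple transformation, the kind of test that guards a HiveQL-to-SparkSQL conversion. The function and column names are invented.

```python
import pytest
from pyspark.sql import SparkSession, functions as F

def add_revenue(df):
    """Transformation under test: revenue = quantity * unit_price."""
    return df.withColumn("revenue", F.col("quantity") * F.col("unit_price"))

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder
            .master("local[2]")
            .appName("unit-tests")
            .getOrCreate())

def test_add_revenue(spark):
    df = spark.createDataFrame(
        [("o1", 2, 10.0), ("o2", 3, 5.0)],
        ["order_id", "quantity", "unit_price"],
    )
    result = {r["order_id"]: r["revenue"] for r in add_revenue(df).collect()}
    assert result == {"o1": 20.0, "o2": 15.0}
```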
Posted 1 month ago
8.0 - 12.0 years
20 - 25 Lacs
Chennai
Remote
Databricks and AWS (S3, Glue, EMR, Kinesis, Lambda, IAM, CloudWatch). Primary language: Python; strong skills in Spark SQL.
Posted 1 month ago
5.0 - 10.0 years
10 - 20 Lacs
Pune
Work from Office
Dear Candidate,

We are excited to share an opportunity at Avigna.AI for the position of Data Engineer. We're looking for professionals with strong data engineering experience who can contribute to building scalable, intelligent data solutions and have a passion for solving complex problems.

Position Details:
- Role: Data Engineer
- Location: Pune, Baner (Work from Office)
- Experience: 7+ years
- Working Days: Monday to Friday (9:00 AM - 6:00 PM)
- Education: Bachelor's or Master's in Computer Science, Engineering, Mathematics, or related field
- Company Website: www.avigna.ai
- LinkedIn: Avigna.AI

Key Responsibilities:
- Design and develop robust data pipelines for large-scale data ingestion, transformation, and analytics.
- Implement scalable Lakehouse architectures using tools like Microsoft Fabric for structured and semi-structured data.
- Work with Python, PySpark, and Azure services to support data modelling, automation, and predictive insights.
- Develop custom KQL queries and manage data using Power BI, Azure Cosmos DB, or similar tools.
- Collaborate with cross-functional teams to integrate data-driven components with application backends and frontends.
- Ensure secure, efficient, and reliable CI/CD pipelines for automated deployments and data updates.

Skills & Experience Required:
- Strong proficiency in Python, PySpark, and cloud-native data tools
- Experience with Microsoft Azure services (e.g., App Services, Functions, Cosmos DB, Active Directory)
- Hands-on experience with Microsoft Fabric (preferred or good to have)
- Working knowledge of Power BI and building interactive dashboards for business insights
- Familiarity with CI/CD practices for automated deployments
- Exposure to machine learning integration into data workflows (nice to have)
- Strong analytical and problem-solving skills with attention to detail

Good to Have:
- Experience with KQL (Kusto Query Language)
- Background in simulation models or mathematical modeling
- Knowledge of Power Platform integration (Power Pages, Power Apps)

Benefits:
- Competitive salary.
- Health insurance coverage.
- Professional development opportunities.
- Dynamic and collaborative work environment.

Important Note: Kindly share your resume at talent@avigna.ai. When sharing your profile, please copy and paste the below content in the subject line:
Subject: Applying for Data Engineer role JOBID:ZR_14_JOB
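Because the role calls out custom KQL queries, here is a hedged sketch (with a placeholder cluster, database, and table) of running a KQL query from Python using the azure-kusto-data client; authentication and names would differ in the real environment.

```python
from azure.kusto.data import KustoClient, KustoConnectionStringBuilder

CLUSTER = "https://example-cluster.kusto.windows.net"  # hypothetical cluster URI
DATABASE = "telemetry"                                  # hypothetical database

kcsb = KustoConnectionStringBuilder.with_aad_device_authentication(CLUSTER)
client = KustoClient(kcsb)

# Aggregate device events per day over the last week (table/columns are invented).
query = """
DeviceEvents
| where Timestamp > ago(7d)
| summarize events = count() by bin(Timestamp, 1d)
| order by Timestamp asc
"""

response = client.execute(DATABASE, query)
for row in response.primary_results[0]:
    print(row["Timestamp"], row["events"])
```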
Posted 1 month ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Hands-on experience in test automation tools such as Playwright and Protractor; must have TypeScript and JavaScript knowledge. Experience in Playwright is a must. Lead testing efforts, mentor the team, and ensure quality assurance. Candidate should have good interpersonal and communication skills along with good testing knowledge. Domain: Banking. Skills: TypeScript.
Posted 1 month ago
3.0 - 7.0 years
2 - 11 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Databricks, Python, PySpark, B4HANA, SQL hands-on experience, Lakehouse knowledge, CI/CD.

Tasks: ingest data from a different internal source system via a Kafka connector (to be built by another team) into bronze, clean data, and implement data quality checks (a.o. reconciliation, business rules). Code business rules in an efficient and effective way with good coding principles that other developers in the team can easily understand and build upon. Make data available on a regular frequency without human intervention for a consumption layer according to business requirements and with 99% availability and trustworthiness. Drive functional and technical discussions independently with stakeholders. DevOps understanding. Should be flexible to work on both development and L2 support tasks.
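As a generic illustration of the Kafka-to-bronze ingestion with basic quality checks described above (a sketch with invented topic, schema, and paths, not the team's actual code), the snippet below uses Spark Structured Streaming on Databricks to parse Kafka messages, flag rows that fail a simple rule, and append everything to a bronze Delta table.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

spark = SparkSession.builder.getOrCreate()  # provided on Databricks

schema = StructType([
    StructField("txn_id", StringType()),
    StructField("account", StringType()),
    StructField("amount", DoubleType()),
])

raw = (spark.readStream
            .format("kafka")
            .option("kafka.bootstrap.servers", "broker.example:9092")  # placeholder
            .option("subscribe", "transactions")                        # placeholder topic
            .load())

bronze = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("r"),
                     F.col("timestamp").alias("ingest_ts"))
             .select("r.*", "ingest_ts")
             # Simple business-rule flag kept alongside the data for reconciliation.
             .withColumn("dq_amount_ok", F.col("amount").isNotNull() & (F.col("amount") > 0)))

(bronze.writeStream
       .format("delta")
       .option("checkpointLocation", "/mnt/lake/_checkpoints/transactions_bronze")
       .outputMode("append")
       .toTable("bronze.transactions"))
```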
Posted 1 month ago
8.0 - 13.0 years
25 - 40 Lacs
Chennai
Work from Office
Architect & Build Scalable Systems: Design and implement petabyte-scale lakehouse architectures to unify data lakes and warehouses. Real-Time Data Engineering: Develop and optimize streaming pipelines using Kafka, Pulsar, and Flink.

Required Candidate Profile: Data engineering experience with large-scale systems; expert proficiency in Java for data-intensive applications; hands-on experience with lakehouse architectures, stream processing, and event streaming.
Posted 1 month ago
4.0 - 6.0 years
15 - 25 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Warm Greetings from SP Staffing!!

Role: AWS Data Engineer
Experience Required: 4 to 6 yrs
Work Location: Bangalore/Pune/Hyderabad/Chennai
Required Skills: PySpark, AWS Glue

Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 month ago
8.0 - 13.0 years
18 - 33 Lacs
Bengaluru
Hybrid
Warm Greetings from SP Staffing!!

Role: AWS Data Engineer
Experience Required: 8 to 15 yrs
Work Location: Bangalore
Required Skills:
- Technical knowledge of data engineering solutions and practices.
- Implementation of data pipelines using tools like EMR, AWS Glue, AWS Lambda, AWS Step Functions, API Gateway, Athena.
- Proficient in Python and Spark, with a focus on ETL data processing and data engineering practices.

Interested candidates can send resumes to nandhini.spstaffing@gmail.com
Posted 1 month ago