6.0 - 10.0 years
0 Lacs
Karnataka
On-site
As a Data Engineer at our company, you will be responsible for designing and implementing Azure Synapse Analytics solutions for data processing and reporting. Your role will involve optimizing ETL pipelines, SQL pools, and Synapse Spark workloads to ensure efficient data processing. It will also be crucial for you to uphold data quality, security, and governance best practices while collaborating with business stakeholders to develop data-driven solutions. You will also mentor a team of data engineers.

To excel in this role, you should have 6-10 years of experience in Data Engineering, BI, or Cloud Analytics. Expertise in Azure Synapse, Azure Data Factory, SQL, and ETL processes is essential. Experience with Fabric is strongly desirable, and strong leadership, problem-solving, and stakeholder management skills are crucial. Knowledge of Power BI, Python, or Spark would be a plus. You should also have deep knowledge of data modelling techniques, design and development of ETL pipelines, Azure resources cost management, and proficiency in writing complex SQL queries.

Furthermore, you are expected to have knowledge and experience in master data/metadata management, including data governance, data quality, data catalog, and data security. Your ability to manage a complex and rapidly evolving business and to actively lead, develop, and support team members will be key. As an Agile practitioner and advocate, you must adapt to constant changes in risks and forecasts.

Your role will involve ensuring data integrity within the dimensional model by validating data and identifying inconsistencies. You will also work closely with Product Owners and data engineers to translate business needs into effective dimensional models.

This position offers the opportunity to lead AI-driven data integration projects in real estate technology, work in a collaborative and innovative environment with global teams, and receive competitive compensation, career growth opportunities, and exposure to cutting-edge technologies. Ideally, you should hold a Bachelor's or Master's degree in Software Engineering, Computer Science, or a related area.

Our company offers a range of benefits, including hybrid working arrangements, an annual performance-related bonus, Flexi any days, medical insurance coverage for extended family members, and an engaging, fun, and inclusive culture.

MRI Software is dedicated to delivering innovative applications and hosted solutions that empower real estate companies to elevate their business. With a strong focus on meeting the unique needs of real estate businesses globally, we have grown to include offices across various countries with over 4000 team members supporting our clients. MRI is proud to be an Equal Employment Opportunity employer.
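The dimensional-model validation work described above (checking data integrity and surfacing inconsistencies) is typically done with a referential-integrity check between a fact table and its dimensions. Below is a minimal PySpark sketch of such a check; the table and column names (fact_lease_transactions, dim_property, property_key) are hypothetical illustrations, not taken from the posting.

```python
# Minimal sketch of a dimensional-model integrity check: find fact rows whose
# dimension key has no matching row in the dimension table. Table and column
# names are hypothetical.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dim-integrity-check").getOrCreate()

fact = spark.table("fact_lease_transactions")   # hypothetical fact table
dim_property = spark.table("dim_property")      # hypothetical dimension table

# A left anti join keeps only the fact rows with no matching property_key,
# i.e. orphaned records that break referential integrity.
orphans = fact.join(dim_property, on="property_key", how="left_anti")

orphan_count = orphans.count()
if orphan_count > 0:
    # Show a sample of the offending keys so the inconsistency can be traced.
    orphans.select("property_key").distinct().show(20, truncate=False)
    raise ValueError(f"{orphan_count} fact rows reference an unknown property_key")
```

In practice a check like this would run as a step in the ETL pipeline after each dimension load, so bad keys are caught before they reach reporting.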
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Karnataka
On-site
The responsibilities of the role involve designing and implementing Azure Synapse Analytics solutions for data processing and reporting. You will be required to optimize ETL pipelines, SQL pools, and Synapse Spark workloads while ensuring data quality, security, and governance best practices are followed. Collaborating with business stakeholders to develop data-driven solutions and mentoring a team of data engineers are key aspects of this position.

To excel in this role, you should possess 6-10 years of experience in Data Engineering, BI, or Cloud Analytics. Expertise in Azure Synapse, Azure Data Factory, SQL, and ETL processes is essential. Experience with Fabric is strongly desirable. Strong leadership, problem-solving, and stakeholder management skills are required, and knowledge of Power BI, Python, or Spark is a plus. A deep understanding of data modelling techniques, design and development of ETL pipelines, Azure resources cost management, and writing complex SQL queries is important. Familiarity with best authorization and security practices for Azure components, master data/metadata management, and data governance is crucial. Being able to manage a complex and rapidly evolving business and to actively lead, develop, and support team members is vital. An Agile mindset and the ability to adapt to constant changes in risks and forecasts are expected.

Thorough knowledge of data warehouse architecture, principles, and best practices is necessary. Expertise in designing star and snowflake schemas, identifying facts and dimensions, and selecting appropriate granularity levels is also required. Ensuring data integrity within the dimensional model by validating data and identifying inconsistencies is part of the role. You will work closely with Product Owners and data engineers to translate business needs into effective dimensional models.

Joining MRI Software offers the opportunity to lead AI-driven data integration projects in real estate technology, work in a collaborative and innovative environment with global teams, and access competitive compensation, career growth opportunities, and exposure to cutting-edge technologies. The ideal candidate should hold a Bachelor's or Master's degree in Software Engineering, Computer Science, or a related area.

The benefits of this position include hybrid working arrangements, an annual performance-related bonus, 6x Flexi any days, medical insurance coverage for extended family members, and an engaging, fun, and inclusive culture at MRI Software.

MRI Software delivers innovative applications and hosted solutions that empower real estate companies to enhance their business. With a flexible technology platform and an open and connected ecosystem, we cater to the unique needs of real estate businesses globally. With offices across various countries and a diverse team, we provide expertise and insight to support our clients effectively. MRI Software is proud to be an Equal Employment Opportunity employer.
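The posting's emphasis on designing star schemas, identifying facts and dimensions, and choosing an appropriate granularity can be illustrated with a small schema sketch. The Spark SQL DDL below is only an assumed example: the names (fact_lease_payment, dim_property, dim_date) are hypothetical, and Delta as the table format is an assumption about a Synapse Spark or Databricks environment.

```python
# Minimal star-schema sketch (hypothetical names): one fact table at
# payment-line granularity plus two conformed dimensions. Assumes a
# Delta-capable Spark environment such as Synapse Spark or Databricks.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star-schema-sketch").getOrCreate()

spark.sql("""
CREATE TABLE IF NOT EXISTS dim_date (
    date_key       INT,            -- surrogate key, e.g. 20240131
    calendar_date  DATE,
    month          INT,
    year           INT
) USING DELTA
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS dim_property (
    property_key   BIGINT,         -- surrogate key
    property_id    STRING,         -- natural (business) key
    city           STRING,
    country        STRING
) USING DELTA
""")

spark.sql("""
CREATE TABLE IF NOT EXISTS fact_lease_payment (
    date_key       INT,            -- foreign key to dim_date
    property_key   BIGINT,         -- foreign key to dim_property
    payment_amount DECIMAL(18, 2),
    currency       STRING
) USING DELTA
""")
```

The granularity choice (one row per payment) determines which dimensions the fact references; a snowflake variant would further normalize dim_property into, for example, a separate geography dimension.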
Posted 1 week ago
6.0 - 10.0 years
30 - 35 Lacs
Hyderabad, Coimbatore, Bengaluru
Work from Office
Job Overview:
We are seeking an experienced Senior Developer with strong expertise in Python, Node.js, and Azure, and a proven track record of migrating cloud applications from AWS to Azure. This role requires hands-on experience in PySpark, Databricks, and Azure data services such as ADF and Synapse Spark. The ideal candidate will lead end-to-end modernization and migration initiatives, code remediation, and deployment of serverless and microservices-based applications in Azure.

Key Responsibilities:
- Lead the migration of Python and Node.js applications from AWS to Azure.
- Analyze legacy AWS architecture, source code, and cloud service dependencies to identify and implement code refactoring and remediation.
- Develop and modernize applications using PySpark (Python API), Databricks, ADF Mapping Data Flows, and Synapse Spark.
- Implement and deploy serverless solutions using Azure Functions, replacing AWS Lambda where applicable.
- Handle migration of storage and data connectors (e.g., S3 to Azure Blob, Confluent Kafka AWS S3 Sync Connector).
- Convert AWS SDK usage to the corresponding Azure SDK implementations (see the sketch after this listing).
- Design and implement CI/CD pipelines, deployment scripts, and configuration for containerized applications using Kubernetes, Helm charts, App Services, APIM, and AKS.
- Perform unit testing, application troubleshooting, and support within Azure environments.

Technical Skills:

Must-Have:
- Python and Node.js development (8+ years total experience)
- PySpark (Python API)
- Azure Functions, AKS, App Services, Azure Blob Storage
- AWS Lambda to Azure Functions migration (serverless architecture)
- AWS to Azure SDK conversion
- ADF (Azure Data Factory): Mapping Data Flows
- Synapse Spark, Azure Databricks
- Containerization: Docker, Kubernetes, Helm charts
- CI/CD pipelines and deployment scripting
- Unit testing and application debugging on Azure
- Proven AWS to Azure application migration experience

Nice-to-Have:
- Confluent Kafka AWS S3 Sync Connector
- APIM (Azure API Management)
- Experience working with both PaaS and serverless Azure infrastructures

Tech Stack Highlights:
- Programming: Python, Node.js, PySpark
- Cloud Platforms: AWS, Azure
- Data Services: Azure Blob Storage, ADF, Synapse Spark, Databricks
- Serverless: AWS Lambda, Azure Functions
- Migration Tools: AWS SDK to Azure SDK conversion
- DevOps: CI/CD, Azure DevOps, Helm, Kubernetes
- Other: App Services, APIM, Confluent Kafka

Location: Hyderabad / Bangalore / Coimbatore / Pune
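As an illustration of the AWS-to-Azure SDK conversion called out in the responsibilities above, the sketch below replaces a boto3 S3 object read with the equivalent azure-storage-blob call. This is a hedged example only: the function names (read_report_aws, read_report_azure) and the bucket, container, and blob parameters are hypothetical, and credentials are assumed to come from the environment (an IAM role on AWS; a managed identity or developer login resolved by DefaultAzureCredential on Azure).

```python
# Minimal sketch of one kind of SDK remediation done in an AWS-to-Azure
# migration: swap a boto3 S3 read for an azure-storage-blob read.
# All resource names are hypothetical placeholders.

# --- Before: AWS SDK (boto3) ---
import boto3

def read_report_aws(bucket: str, key: str) -> bytes:
    # Credentials come from the environment (e.g. an IAM role).
    s3 = boto3.client("s3")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["Body"].read()

# --- After: Azure SDK (azure-storage-blob) with managed identity ---
from azure.identity import DefaultAzureCredential
from azure.storage.blob import BlobServiceClient

def read_report_azure(account_url: str, container: str, blob_name: str) -> bytes:
    # DefaultAzureCredential resolves managed identity, environment
    # variables, or a developer login, mirroring boto3's credential chain.
    service = BlobServiceClient(account_url=account_url,
                                credential=DefaultAzureCredential())
    blob = service.get_blob_client(container=container, blob=blob_name)
    return blob.download_blob().readall()
```

A similar one-for-one mapping applies to the other migration items listed, for example moving the body of an AWS Lambda handler into an Azure Functions entry point while keeping the business logic unchanged.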
Posted 1 month ago