3.0 - 7.0 years
0 Lacs
Noida, Uttar Pradesh
On-site
As an Azure Data Engineer with strong Azure Synapse skills, you will be responsible for designing, developing, and maintaining data pipelines using Azure Data Factory (ADF) and Synapse. Your role will involve working with SQL databases to optimize queries and ensure efficient data processing. Additionally, you will develop and manage data warehousing solutions to support analytics and reporting, providing production support for data pipelines and reports. You will collaborate with stakeholders to understand business requirements and translate them into scalable data solutions. It will be crucial for you to ensure data quality, integrity, and governance across all data pipelines. Staying updated with industry best practices and emerging technologies in data engineering will also be part of your responsibilities.

To excel in this role, you should have 3-5 years of experience in Data Engineering with a focus on Azure technologies. Hands-on experience with Azure Data Factory (ADF), Synapse, and data warehousing is essential. Strong expertise in SQL development, query optimization, and database performance tuning is required. You should possess experience providing production support for data pipelines and reports, along with strong problem-solving skills and the ability to work independently. Preferred qualifications include experience with Power BI, Power Query, and report development; knowledge of data security best practices; exposure to Jet Analytics; and familiarity with CI/CD for data pipelines.

Joining us will offer you the opportunity to work on cutting-edge Azure Data Engineering projects in a collaborative work environment with a global team. You can expect potential for long-term engagement and career growth, along with competitive compensation based on experience.
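As a rough illustration of the query optimization this role calls for, the sketch below rewrites a non-sargable filter so SQL Server can use an index seek; the server, table, and columns are hypothetical, and this is only one common tuning pattern.

```python
import pyodbc

# Hypothetical connection string; replace server/database/auth as needed.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=sales;"
    "Authentication=ActiveDirectoryInteractive;"
)
cursor = conn.cursor()

# Slow: wrapping the indexed column in a function forces a scan:
#   SELECT order_id, amount FROM dbo.orders WHERE YEAR(order_date) = 2024;
# Faster, sargable rewrite: compare the raw column against a date range,
# so an index on order_date can be used for a seek instead of a scan.
cursor.execute(
    """
    SELECT order_id, amount
    FROM dbo.orders
    WHERE order_date >= ? AND order_date < ?
    """,
    ("2024-01-01", "2025-01-01"),
)
for row in cursor.fetchmany(10):
    print(row.order_id, row.amount)
```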
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra
On-site
As a Senior Data Engineer at our Pune location, you will play a critical role in designing, developing, and maintaining scalable data pipelines and architectures using Databricks on Azure/AWS cloud platforms. With 6 to 9 years of experience in the field, you will collaborate with stakeholders to integrate large datasets, optimize performance, implement ETL/ELT processes, ensure data governance, and work closely with cross-functional teams to deliver accurate solutions. Your responsibilities will include building, maintaining, and optimizing data workflows, integrating datasets from various sources, tuning pipelines for performance and scalability, implementing ETL/ELT processes using Spark and Databricks, ensuring data governance, collaborating with different teams, documenting data pipelines, and developing automated processes for continuous integration and deployment of data solutions.

To excel in this role, you should have 6 to 9 years of hands-on experience as a Data Engineer; expertise in Apache Spark, Delta Lake, and Azure/AWS Databricks; proficiency in Python, Scala, or Java; advanced SQL skills; and experience with cloud data platforms, data warehousing solutions, data modeling, ETL tools, version control systems, and automation tools. Additionally, soft skills such as problem-solving, attention to detail, and the ability to work in a fast-paced environment are essential. Nice-to-have skills include experience with Databricks SQL and Databricks Delta, knowledge of machine learning concepts, and experience in CI/CD pipelines for data engineering solutions.

Joining our team offers challenging work with international clients, growth opportunities, a collaborative culture, and global project involvement. We provide competitive salaries, flexible work schedules, health insurance, performance-based bonuses, and other standard benefits. If you are passionate about data engineering, possess the required skills and qualifications, and thrive in a dynamic and innovative environment, we welcome you to apply for this exciting opportunity.
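By way of illustration, an incremental upsert into a Delta table, a common ETL/ELT building block on Databricks, might look like the following minimal sketch; the paths, schema, and key column are hypothetical.

```python
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

# Hypothetical: a new batch of source records landed as Parquet.
updates = spark.read.parquet("/mnt/raw/customers/2024-06-01/")

target = DeltaTable.forPath(spark, "/mnt/curated/customers")

# Upsert: update existing customers, insert new ones, keyed on customer_id.
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```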
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As an Azure Data Engineer with expertise in Microsoft Fabric and modern data platform components, you will be responsible for designing, developing, and managing end-to-end data pipelines on Azure Cloud. Your primary focus will be on ensuring performance, scalability, and delivering business value through efficient data solutions. You will collaborate with various teams to define data requirements, and implement data ingestion, transformation, and modeling pipelines supporting structured and unstructured data. Additionally, you will work with Azure Synapse, Data Lake, Data Factory, Databricks, and Power BI for seamless data integration and reporting. Your role will involve optimizing data performance and cost through efficient architecture and coding practices, ensuring data security, privacy, and compliance with organizational policies. Monitoring, troubleshooting, and improving data workflows for reliability and performance will also be part of your responsibilities.

To excel in this role, you should have 5 to 7 years of experience as a Data Engineer, with 2+ years working on the Azure Data Stack. Hands-on experience with Microsoft Fabric, Azure Synapse Analytics, Data Factory, Data Lake, SQL Server, and Power BI integration is crucial. Strong skills in data modeling, ETL/ELT design, and performance tuning are required, along with proficiency in SQL and Python/PySpark scripting. Experience with CI/CD pipelines and DevOps practices for data solutions, an understanding of data governance, security, and compliance frameworks, and excellent communication, problem-solving, and stakeholder management skills are essential for success in this role. A Bachelor's or Master's degree in Computer Science, Data Engineering, or a related field is preferred.

Having the Microsoft Azure Data Engineer Certification (DP-203), experience in real-time streaming (e.g., Azure Stream Analytics or Event Hub), and exposure to Power BI semantic models and Direct Lake mode in Microsoft Fabric would be advantageous. Join us to work with the latest in Microsoft's modern data stack - Microsoft Fabric, collaborate with a team of passionate data professionals, work on enterprise-grade, large-scale data projects, experience a fast-paced, learning-focused work environment, and have immediate visibility and impact in key business decisions.
Posted 1 day ago
7.0 - 11.0 years
0 Lacs
Karnataka
On-site
Your Responsibilities
- Implement business and IT data requirements through new data strategies and designs across all data platforms (relational, dimensional, and NoSQL).
- Collaborate with solution teams and Data Architects to implement data strategies, build data flows, and develop logical/physical data models.
- Work with Data Architects to define and govern data modeling and design standards, tools, best practices, and related development for enterprise data models.
- Engage in hands-on modeling, design, configuration, installation, performance tuning, and sandbox POCs.
- Proactively and independently address project requirements and articulate issues/challenges to reduce project delivery risks.

Your Profile
- Bachelor's degree in computer/data science, or related technical experience.
- 7+ years of hands-on relational, dimensional, and/or analytic experience utilizing RDBMS, dimensional, and NoSQL data platform technologies, and ETL and data ingestion protocols.
- Demonstrated experience with data warehouses, Data Lakes, and enterprise big data platforms in multi-data-center contexts.
- Proficient in metadata management, data modeling, and related tools (e.g., Erwin, ER Studio).
- Preferred: experience with services in Azure/Azure Databricks (Azure Data Factory, Azure Data Lake Storage, Azure Synapse, and Azure Databricks); working experience with SAP Datasphere is a plus.
- Experience in team management, communication, and presentation.
- Understanding of agile delivery methodology and experience working in a scrum environment.
- Ability to translate business needs into data vault and dimensional data models supporting long-term solutions.
- Collaborate with the Application Development team to implement data strategies, and create logical and physical data models using best practices to ensure high data quality and reduced redundancy.
- Optimize and update logical and physical data models to support new and existing projects.
- Maintain logical and physical data models along with corresponding metadata.
- Develop best practices for standard naming conventions and coding practices to ensure data model consistency.
- Recommend opportunities for data model reuse in new environments.
- Perform reverse engineering of physical data models from databases and SQL scripts.
- Evaluate data models and physical databases for variances and discrepancies.
- Validate business data objects for accuracy and completeness.
- Analyze data-related system integration challenges and propose appropriate solutions.
- Develop data models according to company standards.
- Guide System Analysts, Engineers, Programmers, and others on project limitations and capabilities, performance requirements, and interfaces.
- Review modifications to existing data models to improve efficiency and performance.
- Examine new application designs and recommend corrections as needed.

#IncludingYou
Diversity, equity, inclusion, and belonging are cornerstones of ADM's efforts to continue innovating, driving growth, and delivering outstanding performance. ADM is committed to attracting and retaining a diverse workforce and creating welcoming, inclusive work environments that enable every ADM colleague to feel comfortable, make meaningful contributions, and grow their career. ADM values the unique backgrounds and experiences that each person brings to the organization, understanding that diversity of perspectives makes us stronger together. For more information regarding ADM's efforts to advance Diversity, Equity, Inclusion & Belonging, please visit the website: Diversity, Equity and Inclusion | ADM.
About ADM
At ADM, the power of nature is unlocked to provide access to nutrition worldwide. With industry-advancing innovations, a comprehensive portfolio of ingredients and solutions catering to diverse tastes, and a commitment to sustainability, ADM offers customers an edge in addressing nutritional challenges. As a global leader in human and animal nutrition and the premier agricultural origination and processing company worldwide, ADM's capabilities in insights, facilities, and logistical expertise are unparalleled. From ideation to solution, ADM enriches the quality of life globally. Learn more at www.adm.com.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
Karnataka
On-site
Do you have in-depth experience in Nat Cat models and tools? Do you enjoy being part of a distributed team of Cat Model specialists with diverse backgrounds, educations, and skills? Are you passionate about researching, debugging issues, and developing tools from scratch?

We are seeking a curious individual to join our NatCat infrastructure development team. As a Cat Model Specialist, you will collaborate with the Cat Perils Cat & Geo Modelling team to maintain models, tools, and applications used in the NatCat costing process. Your responsibilities will include supporting model developers in validating their models, building concepts and tools for exposure reporting, and assisting in model maintenance and validation. You will be part of the Cat & Geo Modelling team based in Zurich and Bangalore, which specializes in natural science, engineering, and statistics. The team is responsible for Swiss Re's global natural catastrophe risk assessment and focuses on advancing innovative probabilistic and proprietary modelling technology for earthquake, windstorm, and flood hazards.

Main Tasks/Activities/Responsibilities:
- Conceptualize and build NatCat applications using sophisticated analytical technologies
- Collaborate with model developers to implement and test models in the internal framework
- Develop and implement concepts to enhance the internal modelling framework
- Coordinate with various teams for successful model and tool releases
- Provide user support on model- and tools-related issues
- Install and maintain the Oasis setup and contribute to the development of new functionality
- Coordinate platform setup and maintenance with 3rd-party vendors

About You:
- Graduate or Post-Graduate degree in mathematics, engineering, computer science, or equivalent quantitative training
- Minimum 5 years of experience in the Cat Modelling domain
- Reliable, committed, hands-on, with experience in Nat Cat modelling
- Previous experience with catastrophe models or exposure reporting tools is a plus
- Strong programming skills in MATLAB, MS SQL, Python, PySpark, R
- Experience in consuming WCF/RESTful services
- Knowledge of Business Intelligence, reporting, and data analysis solutions
- Experience in an agile development environment is beneficial
- Familiarity with Azure services like Storage, Data Factory, Synapse, and Databricks
- Good interpersonal skills, self-driven, and ability to work in a global team
- Strong analytical and problem-solving skills

About Swiss Re:
Swiss Re is a leading provider of reinsurance, insurance, and insurance-based risk transfer solutions. With over 14,000 employees worldwide, we anticipate and manage various risks to make the world more resilient. We cover a wide range of risks from natural catastrophes to cybercrime, offering solutions in both Property & Casualty and Life & Health sectors. If you are an experienced professional returning to the workforce after a career break, we welcome you to apply for positions that match your skills and experience.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Telangana
On-site
We are looking for a Data Engineer to join our team in a remote position with a notice period of Immediate to 30 Days. As a Data Engineer, you will be responsible for designing and developing data pipelines and data products on the Azure cloud platform. Your role will involve utilizing Azure Data Factory, Azure Synapse, Azure SQL Database, Azure Data Lake Storage, and other Azure services as part of the client's Data Platform infrastructure. Collaboration with cross-functional teams will be essential to ensure that data solutions meet business requirements. You will also be expected to implement best practices for data management, quality, and security, as well as optimize and troubleshoot data workflows to ensure performance and reliability.

The ideal candidate should have expertise in Azure Data Factory, Azure Synapse, Azure SQL Database, and Azure Data Lake Storage, along with experience in data pipeline design and development. A strong understanding of data architecture and cloud-based data solutions, as well as proficiency in SQL and data modeling, is crucial for this role. Excellent problem-solving skills and attention to detail will be highly valued.

Qualifications for this position include a Bachelor's degree in Computer Science, Information Technology, or a related field, along with at least 3 years of experience as a Data Engineer or in a similar role. Strong communication and teamwork skills are essential, as well as the ability to manage multiple projects and meet deadlines effectively.
Posted 2 days ago
3.0 - 7.0 years
0 Lacs
Ahmedabad, Gujarat
On-site
As a Project Lead (Data) at our esteemed organization, you will play a crucial role in translating business requirements into technical specifications and leading the design, development, and deployment of Business Intelligence (BI) solutions. Your responsibilities will include maintaining and supporting data analytics platforms, collaborating with cross-functional teams, executing database queries and analyses, creating visualizations, and updating technical documentation.

To excel in this role, you should possess a minimum of 5 years of experience in designing and implementing reports, dashboards, ETL processes, and data warehouses. Additionally, you should have at least 3 years of direct management experience. A strong understanding of data warehousing and database concepts is essential, along with expertise in BI fundamentals. Proficiency in tools such as Microsoft SQL Server, SSIS, SSRS, Azure Data Factory, Azure Synapse, and Power BI will be highly advantageous.

Your role will involve defining software development aspects, communicating concepts and guidelines effectively to the team, providing technical guidance and coaching, and overseeing the progress of report/dashboard development to ensure alignment with data warehouse and RDBMS design principles. Engaging with stakeholders to identify key performance indicators (KPIs) and presenting actionable insights through reports and dashboards will be a key aspect of your responsibilities.

The ideal candidate for this position will exhibit proven analytical and problem-solving abilities, possess excellent interpersonal and written communication skills, and be adept at working in a collaborative environment. If you are passionate about leveraging data to drive business decisions and possess the requisite skills and experience, we invite you to join our dynamic team and contribute to our continued success. Join us in our journey of innovation and excellence as we continue to serve our global clientele with end-to-end IT and ICT solutions.
Posted 2 days ago
4.0 - 8.0 years
0 Lacs
Karnataka
On-site
As a Microsoft Fabric Consultant - Senior at EY, you will leverage your expertise to contribute to the design, development, and deployment of Business Intelligence (BI) solutions. With 4+ years of experience in Power BI, you will be responsible for translating business requirements into technical specifications and creating visualizations to support data-driven decision-making. Your primary responsibilities will include designing database architecture for dashboards, conducting unit testing, troubleshooting BI systems, and collaborating with global teams to integrate systems. Your proficiency in Power BI tools such as DAX, Power Query, and SQL will be essential in developing and executing database queries, conducting analyses, and creating reports for various projects.

In addition to your technical skills, your soft skills will play a crucial part in this role. Excellent communication skills; being a team player, self-starter, and highly motivated individual; and the ability to handle high-pressure and fast-paced situations will be key attributes for success. Your presentation skills and experience working with globally distributed teams will be valuable in effectively collaborating and delivering exceptional BI solutions.

While experience with Azure Data Factory, Azure Synapse, Python/R, PowerApps, and Power Automate, and design skills like Adobe XD/Figma, are considered beneficial, they are not mandatory. Your focus will be on designing, building, and deploying BI solutions, evaluating and improving existing systems, and developing and updating technical documentation to ensure the success of projects.

At EY, we are committed to building a better working world by providing trust through assurance and helping clients grow, transform, and operate in over 150 countries. By joining our diverse team, you will have the opportunity to contribute your unique voice and perspective to drive innovation and create a positive impact on clients, people, and society. Join us at EY and embark on a rewarding journey to become the best version of yourself while contributing to a better working world for all.
Posted 2 days ago
2.0 - 6.0 years
0 Lacs
Jaipur, Rajasthan
On-site
As a Databricks Engineer specializing in the Azure Data Platform, you will be responsible for designing, developing, and optimizing scalable data pipelines within the Azure ecosystem. You should have hands-on experience with Python-based ETL development, Lakehouse architecture, and building Databricks workflows utilizing the bronze-silver-gold data modeling approach.

Your key responsibilities will include developing and maintaining ETL pipelines using Python and Apache Spark in Azure Databricks, implementing and managing bronze-silver-gold data lake layers using Delta Lake, and working with various Azure services such as Azure Data Lake Storage (ADLS), Azure Data Factory (ADF), and Azure Synapse for end-to-end pipeline orchestration. It will be crucial to ensure data quality, integrity, and lineage across all layers of the data pipeline, optimize Spark performance, manage cluster configurations, and schedule jobs effectively in Databricks. Collaboration with data analysts, architects, and business stakeholders to deliver data-driven solutions will also be part of your role.

To be successful in this role, you should have 3+ years of experience with Python in a data engineering environment, 2+ years of hands-on experience with Azure Databricks and Apache Spark, and a strong background in building scalable data lake pipelines following the bronze-silver-gold architecture. In-depth knowledge of Delta Lake, Parquet, and data versioning, along with familiarity with Azure Data Factory, ADLS Gen2, and SQL, is required. Experience with CI/CD pipelines and job orchestration tools such as Azure DevOps or Airflow would be advantageous. Excellent communication skills, both verbal and written, are essential.

Nice-to-have qualifications include experience with data governance, security, and monitoring in Azure; exposure to real-time streaming or event-driven pipelines (Kafka, Event Hub); and an understanding of MLflow, Unity Catalog, or other data cataloging tools.

By joining our team, you will have the opportunity to be part of high-impact, cloud-native data initiatives, work in a collaborative and growth-oriented team focused on innovation, and contribute to modern data architecture standards using the latest Azure technologies. If you are ready to advance your career as a Databricks Engineer in the Azure Data Platform, please send your updated resume to hr@vidhema.com. We look forward to hearing from you and potentially welcoming you to our team.
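For context, a minimal sketch of the bronze-silver-gold flow described above might look like this on Azure Databricks; the ADLS paths, schema, and business rules are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: land raw source data as-is, stamped with ingestion metadata.
raw = (
    spark.read.json("abfss://landing@mystorageacct.dfs.core.windows.net/orders/")
    .withColumn("_ingested_at", F.current_timestamp())
)
raw.write.format("delta").mode("append").save("/mnt/lake/bronze/orders")

# Silver: clean and conform (deduplicate, enforce types, filter bad rows).
silver = (
    spark.read.format("delta").load("/mnt/lake/bronze/orders")
    .dropDuplicates(["order_id"])
    .withColumn("order_ts", F.to_timestamp("order_ts"))
    .filter(F.col("amount") > 0)
)
silver.write.format("delta").mode("overwrite").save("/mnt/lake/silver/orders")

# Gold: business-level aggregate ready for analytics and reporting.
gold = silver.groupBy("customer_id").agg(
    F.sum("amount").alias("lifetime_value"),
    F.count("order_id").alias("order_count"),
)
gold.write.format("delta").mode("overwrite").save("/mnt/lake/gold/customer_value")
```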
Posted 2 days ago
7.0 - 11.0 years
0 Lacs
Chandigarh
On-site
As a Senior Azure Data Engineer at iO Associates in Mohali, you will be responsible for building and optimizing data pipelines, supporting data integration across systems, and enhancing the Azure-based Enterprise Data Platform (EDP). The company leads the real estate sector, with headquarters in Mohali and offices in the US and over 17 other countries.

Your key responsibilities will include building and enhancing the Azure-based EDP using modern tools like Databricks, Synapse, ADF, and ADLS Gen2. You will develop and maintain ETL pipelines, collaborate with teams to deliver efficient data solutions, create data products for enterprise-wide use, mentor team members, promote code reusability, and contribute to documentation, reviews, and architecture planning.

To excel in this role, you should have at least 7 years of experience in data engineering, with expertise in Databricks, Python, Scala, Azure Synapse, and ADF. You should have a proven track record of building and managing ETL/data pipelines across various sources and formats, along with strong skills in data modeling, warehousing, and CI/CD practices.

This is an excellent opportunity to join a company that values your growth, emphasizes work-life balance, and recognizes your contributions. If you are interested in this position, please email [Email Address].
Posted 2 days ago
8.0 - 13.0 years
20 - 35 Lacs
Pune, Bengaluru, Mumbai (All Areas)
Hybrid
Datawarehouse Database Architect - Immediate hiring.

We are currently looking for a Datawarehouse Database Architect for our client, a Fintech solutions company. Please let us know your interest and availability.

Experience: 10 plus years of experience
Locations: Hybrid - any Accion offices in India preferred (Bangalore/Pune/Mumbai)
Notice Period: Immediate; 0-15 days joiners are preferred

Required skills - Tools & Technologies:
- Cloud Platform: Azure (Databricks, DevOps, Data Factory, Azure Synapse Analytics, Azure SQL, Blob Storage, Databricks Delta Lake)
- Languages: Python/PL/SQL/SQL/C/C++/Java
- Databases: Snowflake/MS SQL Server/Oracle
- Design Tools: Erwin & MS Visio
- Data warehouse tools: SSIS, SSRS, SSAS, Power BI, DBT, Talend Stitch, PowerApps, Informatica 9, Cognos 8, OBIEE
- Any cloud experience is good to have

Let's connect for more details. Please write to me at mary.priscilina@accionlabs.com along with your CV and the best contact details to get connected for a quick discussion.

Regards,
Mary Priscilina
Posted 2 days ago
8.0 - 12.0 years
30 - 35 Lacs
Chennai
Remote
Job Title: Sr. Python Data Engineer
Location: Chennai & Bangalore (REMOTE)
Job Type: Permanent Employee
Experience: 8 to 12 Years
Shift: 2-11 PM

Responsibilities
- Design and develop data pipelines and ETL processes.
- Collaborate with data scientists and analysts to understand data needs.
- Maintain and optimize data warehousing solutions.
- Ensure data quality and integrity throughout the data lifecycle.
- Develop and implement data validation and cleansing routines.
- Work with large datasets from various sources.
- Automate repetitive data tasks and processes.
- Monitor data systems and troubleshoot issues as they arise.

Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Proven experience as a Data Engineer or similar role (minimum 6+ years' experience as a Data Engineer).
- Strong proficiency in Python and PySpark.
- Excellent problem-solving abilities.
- Strong communication skills to collaborate with team members and stakeholders.
- Individual contributor.

Technical Skills Required
- Expert: Python, PySpark, and SQL/Snowflake
- Advanced Level: Data warehousing, data pipeline design
- Advanced Level: Data quality, data validation, data cleansing
- Intermediate/Basic: Microsoft Fabric, ADF, Databricks, Master Data Management/Data Governance, Data Mesh, Data Lake/Lakehouse Architecture
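As a small illustration of the validation and cleansing routines mentioned above, here is a hedged PySpark sketch that splits a batch into clean and quarantined records; the column names, rules, and paths are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

df = spark.read.parquet("/data/incoming/transactions/")

# Validation rules: required key present, positive amount, parseable date.
# coalesce() treats rows where a rule evaluates to NULL as failing.
checks = F.coalesce(
    F.col("txn_id").isNotNull()
    & (F.col("amount") > 0)
    & F.to_date("txn_date", "yyyy-MM-dd").isNotNull(),
    F.lit(False),
)

valid = df.filter(checks).dropDuplicates(["txn_id"])
rejected = df.filter(~checks).withColumn("_rejected_at", F.current_timestamp())

# Clean rows move downstream; bad rows are quarantined for investigation.
valid.write.mode("append").parquet("/data/clean/transactions/")
rejected.write.mode("append").parquet("/data/quarantine/transactions/")

print(f"valid={valid.count()}, rejected={rejected.count()}")
```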
Posted 2 days ago
12.0 - 16.0 years
10 - 14 Lacs
Pune
Work from Office
The IT MANAGER, DATA ENGINEERING AND ANALYTICS will lead a team of data engineers and analysts responsible for designing, developing, and maintaining robust data systems and integrations. This role is critical for ensuring the smooth collection, transformation, integration, and visualization of data, making it easily accessible for analytics and decision-making across the organization. The Manager will collaborate closely with analysts, developers, business leaders, and other stakeholders to ensure that the data infrastructure meets business needs and is scalable, reliable, and efficient.

What You'll Do:

Team Leadership:
- Manage, mentor, and guide a team of data engineers and analysts, ensuring their professional development and optimizing team performance.
- Foster a culture of collaboration, accountability, and continuous learning within the team.
- Lead performance reviews, provide career guidance, and handle resource planning.

Data Engineering & Analytics:
- Design and implement data pipelines, data models, and architectures that are robust, scalable, and efficient.
- Develop and enforce data quality frameworks to ensure accuracy, consistency, and reliability of data assets.
- Establish and maintain data lineage processes to track the flow and transformation of data across systems.
- Ensure the design and maintenance of robust data warehousing solutions to support analytics and reporting needs.

Collaboration and Stakeholder Management:
- Collaborate with stakeholders, including functional owners, analysts, and business leaders, to understand business needs and translate them into technical requirements.
- Work closely with these stakeholders to ensure the data infrastructure supports organizational goals and provides reliable data for business decisions.
- Build and foster relationships with major stakeholders to ensure management perspectives on Data Strategy and its alignment with business objectives.

Project Management:
- Drive end-to-end delivery of analytics projects, ensuring quality and timeliness.
- Manage project roadmaps, prioritize tasks, and allocate resources effectively.
- Manage project timelines and mitigate risks to ensure timely delivery of high-quality data engineering projects.

Technology and Infrastructure:
- Evaluate and implement new tools, technologies, and best practices to improve the efficiency of data engineering processes.
- Oversee the design, development, and maintenance of data pipelines, ensuring that data is collected, cleaned, and stored efficiently.
- Ensure there are no data pipeline leaks and monitor production pipelines to maintain their integrity.
- Familiarity with reporting tools such as Superset and Tableau is beneficial for creating intuitive data visualizations and reports.

Machine Learning and GenAI Integration:
- Machine Learning: Knowledge of machine learning concepts and integration with data pipelines is a plus. This includes understanding how machine learning models can be used to enhance data quality, predict data trends, and automate decision-making processes.
- GenAI: Familiarity with Generative AI (GenAI) concepts and exposure is advantageous, particularly in enabling GenAI features on new datasets. Leveraging GenAI with data pipelines to automate tasks, streamline workflows, and uncover deeper insights is beneficial.

What You'll Bring:
- 12+ years of experience in data engineering, with at least 3 years in a managerial role.
- Technical Expertise: Strong knowledge of data engineering concepts, including data warehousing, ETL processes, and data pipeline design. Proficiency in Azure Synapse or Data Factory, SQL, Python, and other data engineering tools.
- Data Modeling: Expertise in data modeling is essential, with the ability to design and implement robust, scalable data models that support complex analytics and reporting needs. Experience with data modeling frameworks and tools is highly valued.
- Leadership Skills: Proven ability to lead and motivate a team of engineers while managing cross-functional collaborations.
- Problem-Solving: Strong analytical and troubleshooting skills to address complex data-related challenges.
- Communication: Excellent verbal and written communication skills to effectively interact with technical and non-technical stakeholders. This includes the ability to motivate team members, provide regular constructive feedback, and facilitate open communication channels to ensure team alignment and success.
- Data Architecture: Experience with designing scalable, high-performance data systems and understanding of cloud platforms such as Azure and Databricks.
- Machine Learning and GenAI: Knowledge of machine learning concepts and integration with data pipelines, as well as familiarity with GenAI, is a plus.
- Data Governance: Experience with data governance best practices is desirable.
- Open Mindset: An open mindset with a willingness to learn new technologies, processes, and methodologies is essential. The ability to adapt quickly to evolving data engineering landscapes and embrace innovative solutions is highly valued.
Posted 2 days ago
8.0 - 12.0 years
14 - 20 Lacs
Bengaluru
Work from Office
Azure Data Engineer
- Experience in Azure Data Factory, Databricks, Azure Data Lake, and Azure SQL Server.
- Developed ETL/ELT processes using SSIS and/or Azure Data Factory.
- Build complex pipelines and dataflows using Azure Data Factory.
- Designing and implementing data pipelines in Azure Data Factory (ADF).
- Improve functionality/performance of existing data pipelines.
- Performance tuning of processes dealing with very large data sets.
- Configuration and deployment of ADF packages.
- Proficient in the usage of ARM Templates, Key Vault, and integration runtimes.
- Adaptable to working with ETL frameworks and standards.
- Strong analytical and troubleshooting skills to root-cause issues and find solutions.
- Propose innovative, feasible, and best solutions for business requirements.
- Knowledge of Azure technologies/services such as Blob Storage, ADLS, Logic Apps, Azure SQL, and WebJobs.
- Expert in ServiceNow, incident management, and JIRA.
- Should have exposure to agile methodology.
- Expert in understanding and building Power BI reports using the latest methodologies.
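Since the role highlights Key Vault integration, a minimal sketch of retrieving a pipeline secret from Azure Key Vault in Python follows, assuming the azure-identity and azure-keyvault-secrets packages; the vault URL and secret name are hypothetical.

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# DefaultAzureCredential resolves across local dev (az login), managed
# identity on Azure compute, environment variables, and more.
credential = DefaultAzureCredential()

# Hypothetical vault; replace with your Key Vault URL.
client = SecretClient(
    vault_url="https://my-etl-vault.vault.azure.net",
    credential=credential,
)

# Fetch a connection string stored as a secret, e.g. for an ADF linked
# service or a Databricks job, instead of hard-coding credentials.
secret = client.get_secret("sqldb-connection-string")
print(f"Retrieved secret '{secret.name}' (value not printed for safety).")
conn_str = secret.value
```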
Posted 2 days ago
14.0 - 19.0 years
30 - 45 Lacs
Bengaluru
Work from Office
Role & Responsibilities

Eligibility Criteria:
- Years of Experience: Minimum 14 years
- Experience with data analysis/data profiling and visualization tools (Power BI)
- Experience in database and data warehouse tech (Azure Synapse/SQL Server/SAP HANA/MS Fabric)
- Experience in stakeholder management/requirement gathering/delivery cycle
- Bachelor's Degree: Math/Statistics/Operations Research/Computer Science
- Master's Degree: Business Analytics (with a background in Computer Science)

Primary Responsibilities:
- Translate complex data analyses into clear, engaging narratives tailored to diverse audiences.
- Develop impactful data visualizations and dashboards using tools like Power BI or Tableau.
- Educate and mentor the team to develop insightful dashboards using multiple data storytelling methodologies.
- Collaborate with Data Analysts, Data Scientists, Business Analysts, and business stakeholders to uncover insights.
- Understand business goals and align analytics storytelling to drive strategic actions.
- Create presentations, reports, and visual content to communicate insights effectively.
- Maintain consistency in data communication and ensure data-driven storytelling best practices.

Mandatory Skills:
- Data analysis skills, with experience in extracting information from databases; Office 365 proficiency; proven data storyteller through BI.
- Experience in Agile/SCRUM process and development using any tools.
- Knowledge of SAP systems (SAP ECC T-Codes & Navigation).
- Proven ability to tell stories with data, combining analytical rigor with creativity.
- Strong skills in data visualization tools (e.g., Tableau, Power BI) and presentation tools (e.g., PowerPoint, Google Slides).
- Proficiency in SQL and a basic understanding of statistical methods; Python/R is a plus.
- Excellent communication and collaboration skills.
- Ability to distill complex information into easy-to-understand formats.

Desirable Skills:
- Background in journalism, design, UX, or marketing alongside analytics.
- Experience working in fast-paced, cross-functional teams.
- Familiarity with data storytelling frameworks or narrative design.

Expected Outcomes:
1. Provide on-the-job training for leads on actionable insights.
2. Educate business partners on data literacy and actionable insights.
3. Lead change management initiatives (related to data storytelling and data literacy) in the organization.
4. Implement processes based on data storytelling concepts and establish a governance model to ensure dashboards are released with the appropriate insights.
5. Standardize dashboards and reports to provide actionable insights.
6. Utilize the most suitable data representation techniques.
Posted 2 days ago
8.0 - 13.0 years
25 - 40 Lacs
Mumbai, Hyderabad
Work from Office
Essential Services: Role & Location Fungibility

At ICICI Bank, we believe in serving our customers beyond our role definition, product boundaries, and domain limitations through our philosophy of customer 360-degree. In essence, this captures our belief in serving the entire banking needs of our customers as One Bank, One Team. To achieve this, employees at ICICI Bank are expected to be role- and location-fungible, with the understanding that banking is an essential service. The role descriptions give you an overview of the responsibilities; they are only directional and guiding in nature.

About the Role:
As a Data Warehouse Architect, you will be responsible for managing and enhancing a data warehouse that manages large volumes of customer-lifecycle data flowing in from various applications, within guardrails of risk and compliance. You will be managing the day-to-day operations of the data warehouse, i.e., Vertica. In this role, you will manage a team of data warehouse engineers to develop data modelling, design ETL data pipelines, and handle issue management, upgrades, performance fine-tuning, migration, governance, and the security framework of the data warehouse. This role enables the Bank to maintain huge data sets in a structured manner that is amenable to data intelligence. The data warehouse supports numerous information systems used by various business groups to derive insights. As a natural progression, the data warehouse will be gradually migrated to a Data Lake, enabling better analytical advantage. The role holder will also be responsible for guiding the team towards this migration.

Key Responsibilities:
- Data Pipeline Design: Responsible for designing and developing ETL data pipelines that can help in organising large volumes of data. Use of data warehousing technologies to ensure that the data warehouse is efficient, scalable, and secure.
- Issue Management: Responsible for ensuring that the data warehouse is running smoothly. Monitor system performance, diagnose and troubleshoot issues, and make necessary changes to optimize system performance.
- Collaboration: Collaborate with cross-functional teams to implement upgrades, migrations, and continuous improvements.
- Data Integration and Processing: Responsible for processing, cleaning, and integrating large data sets from various sources to ensure that the data is accurate, complete, and consistent.
- Data Modelling: Responsible for designing and implementing data modelling solutions to ensure that the organization's data is properly structured and organized for analysis.

Key Qualifications & Skills:
- Education Qualification: B.E./B.Tech. in Computer Science, Information Technology, or an equivalent domain, with 10 to 12 years of experience and at least 5 years of relevant work experience in data warehouse/mining/BI/MIS.
- Experience in Data Warehousing: Knowledge of ETL and data technologies and the ability to outline a future vision in OLTP and OLAP (Oracle/MS SQL). Data modelling, data analysis, and visualization experience (analytical tools such as Power BI, SAS, QlikView, or Tableau).
- Good to have exposure to Azure Cloud Data platform services like COSMOS, Azure Data Lake, Azure Synapse, and Azure Data Factory.
- Synergize with the Team: Regular interaction with business/product/functional teams to create mobility solutions.
- Certification: Azure-certified DP-900, PL-300, DP-203, or any other data platform/data analyst certifications.
About the Business Group The Technology Group at ICICI Bank is at the forefront of our operations and offerings, which are focused on leveraging state-of-the-art technology to provide customer-centric solutions. This group plays a pivotal role in our vision of the transition from Bank to Bank Tech. Further, the group offers round-the-clock support to our entire banking ecosystem. In our persistent efforts to provide products and solutions that genuinely touch customers, unlocking the potential of technology in every single engagement would go a long way in creating customer delight. In this endeavor, we also tirelessly ensure all our processes, systems, and infrastructure are very well within the guardrails of data security, privacy, and relevant regulations.
Posted 2 days ago
6.0 - 11.0 years
12 - 17 Lacs
Pune
Work from Office
Roles and Responsibilities

The Senior Tech Lead - Databricks leads the design, development, and implementation of advanced data solutions, and must have extensive experience in Databricks, cloud platforms, and data engineering, with a proven ability to lead teams and deliver complex projects.

Responsibilities:
- Lead the design and implementation of Databricks-based data solutions.
- Architect and optimize data pipelines for batch and streaming data.
- Provide technical leadership and mentorship to a team of data engineers.
- Collaborate with stakeholders to define project requirements and deliverables.
- Ensure best practices in data security, governance, and compliance.
- Troubleshoot and resolve complex technical issues in Databricks environments.
- Stay updated on the latest Databricks features and industry trends.

Key Technical Skills & Responsibilities:
- Experience in data engineering using Databricks or Apache Spark-based platforms.
- Proven track record of building and optimizing ETL/ELT pipelines for batch and streaming data ingestion.
- Hands-on experience with Azure services such as Azure Data Factory, Azure Data Lake Storage, Azure Databricks, Azure Synapse Analytics, or Azure SQL Data Warehouse.
- Proficiency in programming languages such as Python, Scala, and SQL for data processing and transformation.
- Expertise in Spark (PySpark, Spark SQL, or Scala) and Databricks notebooks for large-scale data processing.
- Familiarity with Delta Lake, Delta Live Tables, and the medallion architecture for data lakehouse implementations.
- Experience with orchestration tools like Azure Data Factory or Databricks Jobs for scheduling and automation.
- Design and implementation of Azure Key Vault and scoped credentials.
- Knowledge of Git for source control and CI/CD integration for Databricks workflows, cost optimization, and performance tuning.
- Familiarity with Unity Catalog, RBAC, or enterprise-level Databricks setups.
- Ability to create reusable components, templates, and documentation to standardize data engineering workflows is a plus.
- Ability to define best practices, support multiple projects, and sometimes mentor junior engineers is a plus.
- Must have experience working with streaming data sources and Kafka (preferred).

Eligibility Criteria:
- Bachelor's degree in Computer Science, Data Engineering, or a related field
- Extensive experience with Databricks, Delta Lake, PySpark, and SQL
- Databricks certification (e.g., Certified Data Engineer Professional)
- Experience with machine learning and AI integration in Databricks
- Strong understanding of cloud platforms (AWS, Azure, or GCP)
- Proven leadership experience in managing technical teams
- Excellent problem-solving and communication skills

Our Offering:
- Global cutting-edge IT projects that shape the future of digital and have a positive impact on the environment
- Wellbeing programs & work-life balance - integration and passion-sharing events
- Attractive salary and company initiative benefits
- Courses and conferences
- Hybrid work culture
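To illustrate the streaming ingestion this role emphasizes, here is a hedged sketch of a Kafka-to-Delta Structured Streaming job on Databricks; the broker, topic, and paths are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Read a stream of events from a (hypothetical) Kafka topic.
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")
    .option("subscribe", "orders")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; decode the payload for downstream use.
decoded = events.select(
    F.col("key").cast("string").alias("order_key"),
    F.col("value").cast("string").alias("payload"),
    "timestamp",
)

# Continuously append into a bronze Delta table; the checkpoint gives
# fault tolerance and exactly-once semantics for the sink.
query = (
    decoded.writeStream.format("delta")
    .option("checkpointLocation", "/mnt/lake/_checkpoints/orders_bronze")
    .outputMode("append")
    .start("/mnt/lake/bronze/orders_stream")
)
query.awaitTermination()
```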
Posted 2 days ago
8.0 - 10.0 years
20 - 35 Lacs
Ahmedabad
Remote
We are seeking a talented and experienced Senior Data Engineer to join our team and contribute to building a robust data platform on Azure Cloud. The ideal candidate will have hands-on experience designing and managing data pipelines, ensuring data quality, and leveraging cloud technologies for scalable and efficient data processing. The Data Engineer will design, develop, and maintain scalable data pipelines and systems to support the ingestion, transformation, and analysis of large datasets. The role requires a deep understanding of data workflows and cloud platforms (Azure), and strong problem-solving skills to ensure efficient and reliable data delivery.

Key Responsibilities
- Data Ingestion and Integration: Develop and maintain data ingestion pipelines using tools like Azure Data Factory, Databricks, and Azure Event Hubs. Integrate data from various sources, including APIs, databases, file systems, and streaming data.
- ETL/ELT Development: Design and implement ETL/ELT workflows to transform and prepare data for analysis and storage in the data lake or data warehouse. Automate and optimize data processing workflows for performance and scalability.
- Data Modeling and Storage: Design data models for efficient storage and retrieval in Azure Data Lake Storage and Azure Synapse Analytics. Implement best practices for partitioning, indexing, and versioning in data lakes and warehouses.
- Quality Assurance: Implement data validation, monitoring, and reconciliation processes to ensure data accuracy and consistency. Troubleshoot and resolve issues in data pipelines to ensure seamless operation.
- Collaboration and Documentation: Work closely with data architects, analysts, and other stakeholders to understand requirements and translate them into technical solutions. Document processes, workflows, and system configurations for maintenance and onboarding purposes.
- Cloud Services and Infrastructure: Leverage Azure services like Azure Data Factory, Databricks, Azure Functions, and Logic Apps to create scalable and cost-effective solutions. Monitor and optimize Azure resources for performance and cost management.
- Security and Governance: Ensure data pipelines comply with organizational security and governance policies. Implement security protocols using Azure IAM, encryption, and Azure Key Vault.
- Continuous Improvement: Monitor existing pipelines and suggest improvements for better efficiency, reliability, and scalability. Stay updated on emerging technologies and recommend enhancements to the data platform.

Skills
- Strong experience with Azure Data Factory, Databricks, and Azure Synapse Analytics.
- Proficiency in Python, SQL, and Spark.
- Hands-on experience with ETL/ELT processes and frameworks.
- Knowledge of data modeling, data warehousing, and data lake architectures.
- Familiarity with REST APIs, streaming data (Kafka, Event Hubs), and batch processing.

Good To Have:
- Experience with tools like Azure Purview, Delta Lake, or similar governance frameworks.
- Understanding of CI/CD pipelines and DevOps tools like Azure DevOps or Terraform.
- Familiarity with data visualization tools like Power BI.

Competencies: Analytical thinking, clear and effective communication, time management, team collaboration, technical proficiency, supervising others, problem solving, risk management, organizing and task management, creativity/innovation, honesty/integrity.

Education: Bachelor's degree in Computer Science, Data Science, or a related field. 8+ years of experience in a data engineering or similar role.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
noida, uttar pradesh
On-site
Genpact is a global professional services and solutions firm committed to delivering outcomes that help shape the future. With a team of over 125,000 individuals across 30+ countries, we are driven by curiosity, entrepreneurial agility, and a desire to create lasting value for our clients. Our purpose, the relentless pursuit of a world that works better for people, empowers us to serve and transform leading enterprises, including the Fortune Global 500, utilizing our deep business and industry knowledge, digital operations services, and expertise in data, technology, and AI.

We are currently looking for a Principal Consultant - Data Scientist specializing in Azure Generative AI & Advanced Analytics. As a highly skilled and experienced professional, you will be responsible for developing and optimizing AI/ML models, analyzing complex datasets, and providing strategic recommendations for embedding models and Generative AI applications. Your role will be crucial in driving AI-driven insights and automation within our business.

Responsibilities:
- Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets for actionable insights and data-driven decision-making.
- Design, develop, and implement Generative AI solutions leveraging various platforms, including AWS Bedrock, Azure OpenAI, Azure Machine Learning, and Cognitive Services.
- Utilize Azure Document Intelligence to extract and process structured and unstructured data from diverse document sources.
- Build and optimize data pipelines to efficiently process and analyze large-scale datasets.
- Implement Agentic AI techniques to develop intelligent, autonomous systems capable of making decisions and taking actions.
- Research, evaluate, and recommend embedding models, language models, and generative models for diverse business use cases.
- Continuously monitor and assess the performance of AI models and data-driven solutions, refining and optimizing them as necessary.
- Stay updated with the latest industry trends, tools, and technologies in data science, AI, and generative models to enhance existing solutions and develop new ones.
- Mentor and guide junior team members to aid in their professional growth and skill development.
- Ensure model explainability, fairness, and compliance with responsible AI principles.
- Keep abreast of advancements in AI, ML, and data science and apply best practices to enhance business operations.

Minimum Qualifications / Skills:
- Bachelor's or Master's degree in Computer Science, Data Science, AI, Machine Learning, or a related field.
- Experience in data science, machine learning, AI applications, generative AI prompt engineering, and creating custom models.
- Proficiency in Python, TensorFlow, PyTorch, PySpark, Scikit-learn, and MLflow.
- Hands-on experience with Azure AI services (Azure OpenAI, Azure Document Intelligence, Azure Machine Learning, Azure Synapse, Azure Data Factory, Databricks, RAG pipelines).
- Expertise in LLMs, transformer architectures, and embeddings.
- Experience in building and optimizing end-to-end data pipelines.
- Familiarity with vector databases, FAISS, Pinecone, and knowledge retrieval techniques.
- Knowledge of Reinforcement Learning (RLHF), fine-tuning LLMs, and prompt engineering.
- Strong analytical skills with the ability to translate business requirements into AI/ML solutions.
- Excellent problem-solving, critical thinking, and communication skills.
- Experience with cloud-native AI deployment, containerization (Docker, Kubernetes), and MLOps practices is advantageous.

Preferred Qualifications / Skills:
- Experience with multi-modal AI models and computer vision applications.
- Exposure to LangChain, Semantic Kernel, RAG (Retrieval-Augmented Generation), and knowledge graphs.
- Certifications in Microsoft Azure AI, Data Science, or ML Engineering.

Job Title: Principal Consultant
Location: India-Noida
Schedule: Full-time
Education Level: Bachelor's / Graduation / Equivalent
Job Posting: Apr 11, 2025, 9:36:00 AM
Unposting Date: May 11, 2025, 1:29:00 PM
Master Skills List: Digital
Job Category: Full Time
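As a toy illustration of the embedding-and-retrieval work described above, here is a hedged sketch of building a small FAISS index for semantic search; the model name and documents are placeholders, and a production RAG pipeline would add chunking, persistence, and an LLM answering step.

```python
import faiss
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Azure Data Factory orchestrates data movement and transformation.",
    "FAISS performs fast similarity search over dense vectors.",
    "Delta Lake adds ACID transactions to data lakes.",
]

# Encode documents into dense vectors (model choice is illustrative).
model = SentenceTransformer("all-MiniLM-L6-v2")
vectors = model.encode(docs, normalize_embeddings=True).astype("float32")

# Inner product on normalized vectors is equivalent to cosine similarity.
index = faiss.IndexFlatIP(vectors.shape[1])
index.add(vectors)

# Retrieve the most relevant document for a query, the "R" in RAG.
query = model.encode(
    ["What gives a data lake ACID guarantees?"],
    normalize_embeddings=True,
).astype("float32")
scores, ids = index.search(query, 1)
print(docs[ids[0][0]], f"(score={scores[0][0]:.3f})")
```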
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
haryana
On-site
You will play a crucial role in meeting the requirements of key business functions by developing SQL code, Azure data pipelines, ETL processes, and data models. Your responsibilities will include crafting MS-SQL queries and procedures, generating customized reports, and aggregating data to the desired level for client consumption. Additionally, you will be tasked with database design, data extraction from diverse sources, data integration, and ensuring data stability, reliability, and performance.

Your typical day will involve:
- Demonstrating 2-3 years of experience as a SQL Developer or in a similar capacity
- Possessing a strong grasp of SQL Server and SQL programming, with at least 2 years of hands-on SQL programming experience
- Familiarity with SQL Server Integration Services (SSIS)
- Preferred experience in implementing Data Factory pipelines for on-cloud ETL processing
- Proficiency in Azure Data Factory, Azure Synapse, and ADLS, with the capability to configure and manage all aspects of SQL Server at a Consultant level
- Showing a sense of ownership and pride in your work, understanding its impact on the company's success
- Exhibiting excellent interpersonal and communication skills (both verbal and written), enabling clear and precise communication at various organizational levels
- Demonstrating critical thinking and problem-solving abilities
- Being a team player with good time-management skills
- Experience in analytics projects within the pharma sector, focusing on deriving actionable insights and their implementation
- Expertise in longitudinal data, retail/CPG, customer-level datasets, pharma data, patient data, forecasting, and performance reporting
- Intermediate to strong proficiency in MS Excel and PowerPoint
- Previous exposure to SQL Server and SSIS
- Ability to efficiently handle large datasets (multi-million record complex relational databases)
- Self-directed approach in supporting the data requirements of multiple teams, systems, and products
- Effective communication in challenging situations with structured thinking and a solution-focused mindset, leading interactions with internal and external stakeholders with minimal supervision
- Proactive identification of potential risks and implementation of mitigation strategies to prevent downstream issues
- Familiarity with project management principles, including breaking down approaches into smaller tasks and planning resource allocation accordingly
- Quick learning ability in a dynamic environment
- Advantageous if you have successfully worked in a global environment
- Prior experience in healthcare analytics is a bonus

IQVIA is a prominent global provider of clinical research services, commercial insights, and healthcare intelligence to the life sciences and healthcare sectors. The company facilitates intelligent connections to expedite the development and commercialization of innovative medical treatments, ultimately enhancing patient outcomes and global population health. For further insights, visit https://jobs.iqvia.com.
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
As a Data Engineer (Power BI) at Acronotics Limited, you will play a crucial role in designing and managing data pipelines that integrate Power BI, OLAP cubes, documents such as PDFs and presentations, and external data sources with Azure AI. Your primary responsibility will be to ensure that both structured and unstructured financial data is properly indexed and made accessible for semantic search and LLM applications.

Your key responsibilities in this full-time, on-site role based in Bengaluru will include extracting data from Power BI datasets, semantic models, and OLAP cubes. You will connect and transform data using Azure Synapse, Data Factory, and Lakehouse architecture. Additionally, you will preprocess PDFs, PPTs, and Excel files utilizing tools like Azure Form Recognizer or Python-based solutions. Your role will also involve designing data ingestion pipelines for external web sources, such as commodity prices, and collaborating with AI engineers to provide cleaned and contextual data for vector indexes.

To be successful in this role, you should have a strong background in utilizing Power BI REST/XMLA APIs, along with expertise in OLAP systems (such as SSAS and SAP BW), data modeling, and ETL design. Hands-on experience with Azure Data Factory, Synapse, or Data Lake is essential, along with familiarity with JSON, DAX, and M queries.

Join Acronotics Limited in revolutionizing businesses with cutting-edge robotic automation and artificial intelligence solutions. Let your expertise in data engineering contribute to the advancement of automated solutions that redefine how products are manufactured, marketed, and consumed. Discover the possibilities with Radium AI, our innovative product automating bot monitoring and support activities, on our website today.
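For a sense of the document preprocessing this role mentions, a minimal hedged sketch using the Azure Form Recognizer (Document Intelligence) Python SDK follows; the endpoint, key, and file name are placeholders.

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential

# Hypothetical endpoint and key; prefer Key Vault or managed identity in practice.
client = DocumentAnalysisClient(
    endpoint="https://my-docintel.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<api-key>"),
)

# Analyze a PDF with the general-purpose prebuilt model.
with open("quarterly_report.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-document", document=f)
result = poller.result()

# Pull out raw text lines and any detected tables for downstream indexing.
for page in result.pages:
    for line in page.lines:
        print(line.content)

for table in result.tables:
    print(f"Table with {table.row_count} rows x {table.column_count} columns")
```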
Posted 3 days ago
13.0 - 17.0 years
0 Lacs
maharashtra
On-site
Birlasoft is a powerhouse that brings together domain expertise, enterprise solutions, and digital technologies to redefine business processes. With a consultative and design thinking approach, we drive societal progress by enabling our customers to run businesses with efficiency and innovation. As part of the CK Birla Group, a multibillion-dollar enterprise, we have a team of 12,500+ professionals dedicated to upholding the Group's 162-year legacy. Our core values prioritize Diversity, Equity, and Inclusion (DEI) initiatives, along with Corporate Sustainable Responsibility (CSR) activities, demonstrating our commitment to building inclusive and sustainable communities. Join us in shaping a future where technology seamlessly aligns with purpose.

As an Azure Tech PM at Birlasoft, you will be responsible for leading and delivering complex data analytics projects. With 13-15 years of experience, you will play a critical role in overseeing the planning, execution, and successful delivery of data analytics initiatives, while managing a team of 15+ skilled resources. You should have exceptional communication skills, a deep understanding of Agile methodologies, and a strong background in managing cross-functional teams in data analytics projects.

Key Responsibilities:
- Lead end-to-end planning, coordination, and execution of data analytics projects, ensuring adherence to project scope, timelines, and quality standards.
- Guide the team in defining project requirements, objectives, and success criteria using your extensive experience in data analytics.
- Apply Agile methodologies to create and maintain detailed project plans, sprint schedules, and resource allocation for efficient project delivery.
- Manage a team of 15+ technical resources, fostering collaboration and a culture of continuous improvement.
- Collaborate closely with cross-functional stakeholders to align project goals with business objectives.
- Monitor project progress, identify risks, issues, and bottlenecks, and implement mitigation strategies.
- Provide regular project updates to executive leadership, stakeholders, and project teams using excellent communication skills.
- Facilitate daily stand-ups, sprint planning, backlog grooming, and retrospective meetings to promote transparency and efficiency.
- Drive the implementation of best practices for data analytics, ensuring data quality, accuracy, and compliance with industry standards.
- Act as a point of escalation for project-related challenges and work with the team to resolve issues promptly.
- Collaborate with cross-functional teams to ensure successful project delivery, including testing, deployment, and documentation.
- Provide input to project estimation, resource planning, and risk management activities.

Mandatory Experience:
- Technical Project Manager experience of minimum 5+ years in Data Lake and Data Warehousing (DW).
- Strong understanding of DW process execution, from acquiring data to visualization.
- Exposure to Azure skills such as Azure ADF, Azure Databricks, Synapse, SQL, and Power BI for a minimum of 3+ years, or experience in managing at least 2 end-to-end Azure Cloud projects.

Other Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Systems, or a related field.
- 13-15 years of progressive experience in technical project management focusing on data analytics and data-driven initiatives.
- In-depth knowledge of data analytics concepts, tools, and technologies.
- Exceptional leadership, team management, interpersonal, and communication skills.
- Demonstrated success in delivering data analytics projects on time, within scope, and meeting quality expectations.
- Strong problem-solving skills and a proactive attitude towards identifying challenges.
- Project management certifications such as PMP, PMI-ACP, or CSM would be an added advantage.
- Ability to thrive in a dynamic and fast-paced environment, managing multiple projects simultaneously.
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
As a Data Architect at Beinex, located in Kochi, Kerala, you will collaborate with the Sales team on RFPs and pre-sales activities and take ownership of project delivery and support. Your role will involve delivering on-site technical engagements with customers, participating in pre-sales visits, understanding customer requirements, defining project timelines, and implementing solutions. Additionally, you will work on both on-site and off-site projects to help customers migrate from their existing data warehouses to Snowflake and other databases.

You should have at least 8 years of experience in IT platform implementation, development, DBA work, and data migration in relational database management systems (RDBMS), including 5+ years of hands-on experience implementing and performance-tuning MPP databases. Proficiency in Snowflake, Redshift, Databricks, or Azure Synapse is essential, along with the ability to prioritize projects effectively. Experience analyzing data warehouses such as Teradata, Netezza, Oracle, and SAP will be valuable in this role.

Your responsibilities will also include designing database environments, analyzing production deployments, optimizing performance, writing SQL and stored procedures, conducting data validation and data quality tests, and planning migrations to Snowflake. You should bring strong communication skills, problem-solving abilities, and the capacity to work effectively both independently and as part of a team.

At Beinex, you will have access to perks including comprehensive health plans, learning and development opportunities, workations and outdoor training, a hybrid working environment, and on-site travel opportunities. Join us to be part of a dynamic team and advance your career in a supportive and engaging work environment.
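Since the posting emphasizes data validation and data quality tests during Snowflake migrations, a minimal illustration of one such check, a row-count reconciliation between the legacy warehouse and the Snowflake target, is sketched below using the snowflake-connector-python package. All connection values, the SALES_FACT table, and the expected count are illustrative assumptions, not details from the role.

```python
# A minimal sketch, assuming placeholder credentials and an assumed
# SALES_FACT table: reconcile row counts after a migration to Snowflake.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",   # placeholder
    user="<user>",
    password="<password>",
    warehouse="COMPUTE_WH",           # assumed warehouse
    database="ANALYTICS",             # assumed database
    schema="PUBLIC",
)

# Row count captured from the legacy warehouse beforehand (illustrative value).
expected = 1_250_000

cur = conn.cursor()
try:
    cur.execute("SELECT COUNT(*) FROM SALES_FACT")  # assumed target table
    actual = cur.fetchone()[0]
finally:
    cur.close()
    conn.close()

assert actual == expected, f"Row-count mismatch: expected {expected}, got {actual}"
print("Row counts match; proceed with further data-quality checks.")
```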
Posted 3 days ago
10.0 - 14.0 years
0 Lacs
chennai, tamil nadu
On-site
We are searching for a Senior Data Engineer with significant experience developing ETL processes using PySpark notebooks and Microsoft Fabric, as well as supporting existing legacy SQL Server environments. The ideal candidate will have a solid foundation in Spark-based development, advanced SQL skills, and excellent communication abilities, and will be comfortable working autonomously, collaborating within a team, or guiding other developers when necessary. Expertise with Azure Data Services (such as Azure Data Factory and Azure Synapse), familiarity with creating DAGs, implementing activities, and running Apache Airflow, and knowledge of DevOps practices, CI/CD pipelines, and Azure DevOps are also expected.

Key Responsibilities:
- Design, develop, and manage ETL notebook orchestration pipelines using PySpark and Microsoft Fabric.
- Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver effective data solutions.
- Migrate and integrate data from legacy SQL Server environments into modern data platforms.
- Optimize data pipelines and workflows for scalability, efficiency, and reliability.
- Provide technical leadership and mentorship to junior developers and team members.
- Troubleshoot and resolve complex data engineering issues related to performance, data quality, and system scalability.
- Develop, maintain, and uphold data engineering best practices, coding standards, and documentation.
- Conduct code reviews and offer constructive feedback to enhance team productivity and code quality.
- Support data-driven decision-making by ensuring data integrity, availability, and consistency across platforms.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Data Science, Engineering, or a related field.
- 10+ years of experience in data engineering, focusing on ETL development using PySpark or other Spark-based tools.
- Proficiency in SQL, with extensive experience in complex queries, performance tuning, and data modeling.
- Experience with Microsoft Fabric or similar cloud-based data integration platforms is advantageous.
- Strong understanding of data warehousing concepts, ETL frameworks, and big data processing.
- Familiarity with other data processing technologies (e.g., Hadoop, Hive, Kafka) is a plus.
- Experience with both structured and unstructured data sources.
- Excellent problem-solving skills and the ability to troubleshoot complex data engineering issues.
- Experience with Azure Data Services, including Azure Data Factory, Azure Synapse, or similar tools.
- Experience creating DAGs, implementing activities, and running Apache Airflow.
- Familiarity with DevOps practices, CI/CD pipelines, and Azure DevOps.

Aspire Systems is a global technology services firm that acts as a trusted technology partner for over 275 clients worldwide. Aspire collaborates with leading enterprises in Banking, Insurance, Retail, and ISVs to help them leverage technology for business transformation in the current digital era. The company's dedication to "Attention. Always." reflects its commitment to providing care and attention to both its customers and employees. With over 4,900 employees globally and a CMMI Level 3 certification, Aspire Systems operates in North America, LATAM, Europe, the Middle East, and Asia Pacific. Aspire Systems has been recognized as one of the Top 100 Best Companies to Work For by the Great Place to Work Institute for the 12th consecutive time. For more information, please visit https://www.aspiresys.com/.
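The core skill this posting asks for, Spark-based ETL against a legacy SQL Server, tends to follow a read-transform-write shape. Below is a minimal PySpark sketch of that shape under stated assumptions: the server, database, table, and output path are invented placeholders, and the Microsoft SQL Server JDBC driver is assumed to be on the cluster classpath.

```python
# A minimal read-transform-write sketch, assuming an invented legacy host,
# database, and table, with the MSSQL JDBC driver available on the cluster.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("legacy-sqlserver-etl").getOrCreate()

# Read a table from the legacy SQL Server over JDBC.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://legacy-host:1433;databaseName=ERP")  # assumed
    .option("dbtable", "dbo.Orders")                                      # assumed
    .option("user", "<user>")
    .option("password", "<password>")
    .load()
)

# Light cleanup plus an audit column before landing the data.
cleaned = (
    orders
    .dropDuplicates(["OrderID"])                        # assumed business key
    .withColumn("ingested_at", F.current_timestamp())
)

# Write as Delta; 'Tables/orders' mimics a Fabric Lakehouse-style path.
cleaned.write.format("delta").mode("overwrite").save("Tables/orders")
```

In a Fabric or Airflow setting, a pipeline activity or DAG task would typically wrap a notebook containing exactly this kind of logic.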
Posted 3 days ago
6.0 - 10.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are an experienced Data Engineer with at least 6 years of relevant experience. In this role, you will work as part of a team developing Data and Analytics solutions. Your responsibilities will include participating in the development of cloud data warehouses, data-as-a-service, and business intelligence solutions. You should be able to provide forward-thinking solutions in data integration and ensure the delivery of a quality product. Experience developing Modern Data Warehouse solutions on the Azure or AWS stack is required.

To be successful in this role, you should have a Bachelor's degree in computer science and engineering or equivalent demonstrable experience. Cloud certifications in the Data, Analytics, or Ops/Architect space are desirable.

Your primary skills should include:
- 6+ years of experience as a Data Engineer, with a key/lead role in implementing large data solutions
- Programming experience in Scala or Python, and SQL
- Minimum of 1 year of experience in MDM/PIM solution implementation with tools such as Ataccama, Syndigo, or Informatica
- Minimum of 2 years of experience implementing data engineering pipelines and solutions in Snowflake
- Minimum of 2 years of experience implementing data engineering pipelines and solutions in Databricks
- Working knowledge of AWS and Azure services such as S3, ADLS Gen2, AWS Redshift, AWS Glue, Azure Data Factory, and Azure Synapse
- Demonstrated analytical and problem-solving skills
- Excellent written and verbal communication skills in English

Your secondary skills should include familiarity with Agile practices, version control platforms such as Git and CodeCommit, strong problem-solving skills, an ownership mentality, and a proactive rather than reactive approach.

This is a permanent position based in Trivandrum/Bangalore. If you meet the requirements and are looking for a challenging opportunity in the field of Data Engineering, we encourage you to apply before the close date on 11-10-2024.
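As a rough illustration of the "working knowledge of AWS and Azure services" bullet above, the sketch below configures a single Spark session to read parquet from both ADLS Gen2 (abfss) and S3 (s3a). The account, container, and bucket names are placeholders, inline keys are used only for brevity, and the hadoop-azure and hadoop-aws connectors are assumed to be available; real deployments would prefer managed identities or instance roles.

```python
# A sketch under stated assumptions: 'mydatalake' and 'my-raw-bucket' are
# invented names, keys are inlined only for brevity, and the hadoop-azure
# and hadoop-aws connectors must be on the classpath.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("multi-cloud-read")
    # ADLS Gen2 account-key auth (managed identity is preferable in practice).
    .config(
        "spark.hadoop.fs.azure.account.key.mydatalake.dfs.core.windows.net",
        "<storage-account-key>",
    )
    # S3 credentials for the s3a connector (instance roles are preferable).
    .config("spark.hadoop.fs.s3a.access.key", "<aws-access-key>")
    .config("spark.hadoop.fs.s3a.secret.key", "<aws-secret-key>")
    .getOrCreate()
)

adls_df = spark.read.parquet("abfss://raw@mydatalake.dfs.core.windows.net/sales/")
s3_df = spark.read.parquet("s3a://my-raw-bucket/sales/")

print(adls_df.count(), s3_df.count())
```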
Posted 3 days ago
The Azure Synapse job market in India is currently experiencing a surge in demand as organizations increasingly adopt cloud solutions for their data analytics and business intelligence needs. With the growing reliance on data-driven decision-making, professionals with expertise in Azure Synapse are highly sought after in the job market.
The average salary range for Azure Synapse professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15-20 lakhs per annum.
A typical career path in Azure Synapse may include roles such as Junior Developer, Senior Developer, Tech Lead, and Architect. As professionals gain experience and expertise in the platform, they can progress to higher-level roles with more responsibilities and leadership opportunities.
In addition to expertise in Azure Synapse, professionals in this field are often expected to have knowledge of SQL, data warehousing concepts, ETL processes, data modeling, and cloud computing principles. Strong analytical and problem-solving skills are also essential for success in Azure Synapse roles.
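To make the skills list above concrete, here is a hedged example of the kind of task an Azure Synapse role often involves: querying parquet files in a data lake through a Synapse serverless SQL pool, driven from Python via pyodbc. The workspace endpoint, credentials, and storage path are placeholder assumptions.

```python
# A hedged sketch: the workspace endpoint, credentials, and storage path
# below are placeholders, not a real environment.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myworkspace-ondemand.sql.azuresynapse.net;"  # assumed serverless endpoint
    "DATABASE=master;UID=<user>;PWD=<password>;Encrypt=yes;"
)

# Serverless SQL pools can query lake files in place with OPENROWSET.
query = """
SELECT TOP 10 *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/raw/sales/*.parquet',
    FORMAT = 'PARQUET'
) AS sales
"""

for row in conn.cursor().execute(query):
    print(row)
```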
As the demand for Azure Synapse professionals continues to rise in India, now is the perfect time to upskill and prepare for exciting career opportunities in this field. By honing your expertise in Azure Synapse and related skills, you can position yourself as a valuable asset in the job market and embark on a rewarding career journey. Prepare diligently, showcase your skills confidently, and seize the numerous job opportunities waiting for you in the Azure Synapse domain. Good luck!