
996 ADF Jobs - Page 5

JobPe aggregates listings for easy access; you apply directly on the original job portal.

3.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Azure Data Engineer – Senior

As part of our EY GDS Data and Analytics (D&A) team, we help our clients solve complex business challenges with data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions such as Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We’re looking for candidates with a strong understanding of technology and data in the big data engineering space and a proven delivery capability. This is a fantastic opportunity to join a leading firm and a growing Data and Analytics team.

Your key responsibilities
• Develop and deploy big data pipelines in a cloud environment using Azure cloud services
• Design ETL processes and migrate existing on-premises ETL routines to cloud services
• Interact with senior leaders, understand their business goals, and contribute to the delivery of workstreams
• Design and optimize model code for faster execution

Skills and attributes for success
• 3+ years of overall IT experience, with 2+ years of relevant experience in Azure Data Factory (ADF) and hands-on exposure to the latest ADF version
• Hands-on experience with Azure Functions and Azure Synapse (formerly SQL Data Warehouse)
• Project experience with Azure Data Lake / Blob storage
• Basic understanding of Batch account configuration and the various control options
• Sound knowledge of Databricks and Logic Apps
• Ability to coordinate independently with business stakeholders, understand business requirements, and implement them using ADF

To qualify for the role, you must
• Be a computer science graduate or equivalent with 3–7 years of industry experience
• Have working experience in an Agile-based delivery methodology (preferable)
• Have a flexible, proactive, self-motivated working style with strong personal ownership of problem resolution
• Be an excellent communicator (written and verbal, formal and informal)
• Participate in all aspects of the big data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support

Ideally, you’ll also have
• Client management skills

What we look for
People with technical experience and the enthusiasm to learn new things in this fast-moving environment.

What working at EY offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:
• Support, coaching and feedback from some of the most engaging colleagues around
• Opportunities to develop new skills and progress your career
• The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
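The on-premises-to-cloud ETL migration this role centers on usually means incremental ("watermark") loads: each run extracts only rows modified since the last recorded watermark, then advances it. A minimal sketch of that pattern in plain Python (the function and field names here are hypothetical illustrations, not ADF APIs):

```python
# Hypothetical sketch of the incremental-load ("watermark") pattern that an
# ADF copy pipeline typically implements: extract only rows newer than the
# stored watermark, then advance the watermark to the newest row seen.

def extract_increment(rows, watermark):
    """Return rows modified after `watermark`, plus the new watermark."""
    fresh = [r for r in rows if r["modified"] > watermark]
    new_watermark = max((r["modified"] for r in fresh), default=watermark)
    return fresh, new_watermark

source = [
    {"id": 1, "modified": "2024-01-01"},
    {"id": 2, "modified": "2024-02-15"},
    {"id": 3, "modified": "2024-03-10"},
]

# ISO-8601 date strings compare correctly as plain strings.
batch, wm = extract_increment(source, "2024-01-31")
```

In a real ADF pipeline the watermark would live in a control table and the extract would be a Copy activity query; the logic, however, is this simple.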

Posted 4 days ago

Apply

3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


(Same role description as the EY GDS – Azure Data Engineer – Senior listing above.)

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

Remote


Role: Senior Data Engineer with Databricks
Experience: 5+ years
Job type: Contract
Contract duration: 6 months
Budget: 1.0 lakh per month
Location: Remote

Job description:
We are looking for a dynamic and experienced Senior Data Engineer – Databricks to design, build, and optimize robust data pipelines using the Databricks Lakehouse platform. The ideal candidate should have strong hands-on skills in Apache Spark, PySpark, and cloud data services, and a good grasp of Python and Java. This role involves close collaboration with architects, analysts, and developers to deliver scalable and high-performing data solutions across AWS, Azure, and GCP.

Essential job functions
1. Data pipeline development
• Build scalable and efficient ETL/ELT workflows using Databricks and Spark for both batch and streaming data.
• Leverage Delta Lake and Unity Catalog for structured data management and governance.
• Optimize Spark jobs by tuning configurations, caching, partitioning, and serialization techniques.
2. Cloud-based implementation
• Develop and deploy data workflows on AWS (S3, EMR, Glue), Azure (ADLS, ADF, Synapse), and/or GCP (GCS, Dataflow, BigQuery).
• Manage and optimize data storage, access control, and pipeline orchestration using native cloud tools.
• Use tools like Databricks Auto Loader and SQL warehouses for efficient data ingestion and querying.
3. Programming & automation
• Write clean, reusable, and production-grade code in Python and Java.
• Automate workflows using orchestration tools (e.g., Airflow, ADF, or Cloud Composer).
• Implement robust testing, logging, and monitoring mechanisms for data pipelines.
4. Collaboration & support
• Collaborate with data analysts, data scientists, and business users to meet evolving data needs.
• Support production workflows, troubleshoot failures, and resolve performance bottlenecks.
• Document solutions, maintain version control, and follow Agile/Scrum processes.

Required skills
Technical skills:
• Databricks: hands-on experience with notebooks, cluster management, Delta Lake, Unity Catalog, and job orchestration.
• Spark: expertise in Spark transformations, joins, window functions, and performance tuning.
• Programming: strong in PySpark and Java, with experience in data validation and error handling.
• Cloud services: good understanding of AWS, Azure, or GCP data services and security models.
• DevOps/tools: familiarity with Git, CI/CD, Docker (preferred), and data monitoring tools.

Experience:
• 5–8 years of data engineering or backend development experience.
• Minimum 1–2 years of hands-on work in Databricks with Spark.
• Exposure to large-scale data migration, processing, or analytics projects.

Certifications (nice to have): Databricks Certified Data Engineer Associate

Working conditions
• Hours of work: full-time; flexibility for remote work while ensuring availability during US hours.
• Overtime expectations: overtime should not be required as long as commitments are met.
• Work environment: primarily remote; occasional on-site work may be needed, only during client visits.
• Travel requirements: no travel required.
• On-call responsibilities: on-call duties during deployment phases.
• Special conditions or requirements: not applicable.

Workplace policies and agreements
• Confidentiality agreement: required, to safeguard sensitive client data.
• Non-compete agreement: must be signed, to ensure proprietary model security.
• Non-disclosure agreement: must be signed, to ensure client confidentiality and security.
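The "partitioning" item under Spark job tuning refers to distributing records by key so all rows for a key land in one partition and can be aggregated without a shuffle. A plain-Python illustration of that idea (this is not Spark code; names are illustrative):

```python
# Illustrative sketch (plain Python, not Spark) of hash partitioning:
# rows are bucketed by hashing a key column, so every record for a given
# key ends up in exactly one partition.
from collections import defaultdict

def hash_partition(rows, key, num_partitions):
    """Distribute rows into num_partitions buckets by hashing row[key]."""
    partitions = defaultdict(list)
    for row in rows:
        partitions[hash(row[key]) % num_partitions].append(row)
    return partitions

rows = [{"user": u, "amount": a} for u, a in
        [("a", 10), ("b", 5), ("a", 7), ("c", 3), ("b", 1)]]
parts = hash_partition(rows, "user", 4)
# All "a" rows share one partition, all "b" rows another, and so on,
# so a per-user sum needs no cross-partition data movement.
```

Spark's `repartition`/`partitionBy` apply the same principle at cluster scale.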

Posted 4 days ago

Apply

3.0 years

0 Lacs

Pune, Maharashtra, India

On-site


(Same role description as the EY GDS – Azure Data Engineer – Senior listing above.)

Posted 4 days ago

Apply

3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


(Same role description as the EY GDS – Azure Data Engineer – Senior listing above.)

Posted 4 days ago

Apply

8.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site


Technical skills:
• 8+ years of hands-on experience in SQL development, query optimization, and performance tuning.
• Expertise in ETL tools (SSIS, Azure ADF, Databricks, Snowflake, or similar) and relational databases (SQL Server, PostgreSQL, MySQL, Oracle).
• Strong understanding of data warehousing concepts, data modeling, indexing strategies, and query execution plans.
• Proficiency in writing efficient stored procedures, views, triggers, and functions for large datasets.
• Experience working with structured and semi-structured data (CSV, JSON, XML, Parquet).
• Hands-on experience in data validation, cleansing, and reconciliation to maintain high data quality.
• Exposure to real-time and batch data processing techniques.
• Strong problem-solving skills and the ability to troubleshoot ETL failures and performance issues.
• Ability to collaborate with business and analytics teams to understand and implement data requirements.
Nice to have: experience with Azure or other data engineering stacks (ADF, Azure SQL, Synapse, Databricks, Snowflake), Python, Spark, NoSQL databases, and reporting tools like Power BI or Tableau.
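The reconciliation item above typically means comparing per-key row counts between source and target after a load and flagging any drift. A small, hedged sketch (function and key names are illustrative, not from any particular tool):

```python
# Hypothetical post-load reconciliation check: count rows per key on both
# sides and report every key whose counts differ.
from collections import Counter

def reconcile(source_keys, target_keys):
    """Return {key: (source_count, target_count)} for mismatched keys."""
    src, tgt = Counter(source_keys), Counter(target_keys)
    return {k: (src[k], tgt[k])
            for k in src.keys() | tgt.keys()
            if src[k] != tgt[k]}

# "A" was loaded once too few, "B" once too many.
mismatches = reconcile(["A", "A", "B", "C"], ["A", "B", "B", "C"])
```

In practice the key lists would come from `SELECT key, COUNT(*) ... GROUP BY key` on each database; the comparison logic stays the same.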

Posted 4 days ago

Apply

0 years

0 Lacs

Bengaluru East, Karnataka, India

On-site


Technology -> Cloud Integration -> Azure Data Factory (ADF)
Technology -> Cloud Platform -> Azure Analytics Services -> Azure Data Lake
Technology -> Cloud Platform -> Power Platform
Technology -> IoT Platform -> AWS IoT

A day in the life of an Infoscion: as part of the Infosys delivery team, your primary role would be to interface with the client for quality assurance and issue resolution, and to ensure high customer satisfaction. You will understand requirements, create and review designs, validate the architecture, and ensure high levels of service offerings to clients in the technology domain. You will participate in project estimation, provide inputs for solution delivery, conduct technical risk planning, and perform code reviews and unit test plan reviews. You will lead and guide your teams towards developing optimized, high-quality code deliverables, continual knowledge management, and adherence to organizational guidelines and processes. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you!

• Knowledge of more than one technology
• Basics of architecture and design fundamentals
• Knowledge of testing tools
• Knowledge of agile methodologies
• Understanding of project life cycle activities on development and maintenance projects
• Understanding of one or more estimation methodologies; knowledge of quality processes
• Basics of the business domain, to understand the business requirements
• Analytical abilities, strong technical skills, and good communication skills
• Good understanding of the technology and domain
• Ability to demonstrate a sound understanding of software quality assurance principles, SOLID design principles, and modelling methods
• Awareness of the latest technologies and trends
• Excellent problem-solving, analytical, and debugging skills

Posted 4 days ago

Apply

3.0 years

3 - 7 Lacs

Hyderābād

On-site

Engineer – Oracle Fusion Tech/OIC
• Should have at least 3 years of experience implementing Oracle ERP and OIC projects.
• Proven experience with Oracle Integration Cloud (OIC) and expertise in Oracle Cloud services.
• Strong knowledge of integration technologies, protocols, and standards (REST, SOAP, JSON, XML, etc.).
• Must work with PL/SQL and web services (SOAP and REST).
• Development experience with SOAP and REST services using JSON- or XML-based integrations.
• Experience working with OIC adapters, including the ERP adapter, SOAP, Oracle DB, FTP, and REST.
• Being hands-on is a critical requirement.
• Should have experience developing BI and OTBI reports.
• Should be capable of providing optimal solutions and building both inbound and outbound integrations: SOAP and REST web services, FBDI, event-based, and ADF DI.
• Should be able to develop reports and OIC services based on the requirements/design documents.
• Should have experience building integrations with Oracle SaaS applications.
• Follow the project through to the successful adoption of the solution.
• Good understanding of Oracle PaaS architecture and security concepts, with working experience in ATP and object storage.
• Understanding of Finance module flows is necessary.
• Comfortable with XML and XSLT processing.
• Hands-on experience with web service testing tools such as SoapUI and Postman.
• Bachelor's/Master's degree in computer science or software engineering.
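The JSON-based REST integrations listed above boil down to serializing a request payload, calling the endpoint, and parsing the response. A stdlib-only round-trip of that shape (the payload structure here is a hypothetical example, not an Oracle-defined schema):

```python
# Illustrative JSON round-trip for a REST-style integration, using only the
# standard library. In OIC, the REST adapter would perform the HTTP call;
# here we just show the serialize/parse steps around it.
import json

invoice = {
    "invoiceNumber": "INV-1001",
    "currency": "USD",
    "lines": [{"item": "Widget", "qty": 3, "unitPrice": 25.0}],
}

payload = json.dumps(invoice)    # serialize for the outbound request body
received = json.loads(payload)   # parse the (echoed) inbound response

# Downstream logic works on the parsed structure, e.g. totalling the lines.
total = sum(l["qty"] * l["unitPrice"] for l in received["lines"])
```

SOAP/XML integrations follow the same pattern with an XML envelope in place of the JSON body.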

Posted 4 days ago

Apply

2.0 years

0 Lacs

Hyderābād

On-site

Overview: the Data Science team develops Machine Learning (ML) and Artificial Intelligence (AI) projects. The specific scope of this role is to develop ML solutions in support of ML/AI projects using big analytics toolsets in a CI/CD environment. Analytics toolsets may include DS tools, Spark, Databricks, and other technologies offered by Microsoft Azure or open-source toolsets. This role will also help automate the end-to-end cycle with Azure Pipelines. You will be part of a collaborative, interdisciplinary team around data, where you will be responsible for our continuous delivery of statistical/ML models. You will work closely with process owners, product owners, and final business users, which will give you the right visibility into, and understanding of, the criticality of your developments.

Responsibilities:
• Delivery of key Advanced Analytics/Data Science projects within time and budget, particularly around DevOps/MLOps and the Machine Learning models in scope
• Active contributor to code and development in projects and services
• Partner with data engineers to ensure data access for discovery and that proper data is prepared for model consumption
• Partner with ML engineers working on industrialization
• Communicate with business stakeholders during service design, training, and knowledge transfer
• Support large-scale experimentation and build data-driven models
• Refine requirements into modelling problems
• Influence product teams through data-based recommendations
• Research state-of-the-art methodologies
• Create documentation for learnings and knowledge transfer
• Create reusable packages or libraries
• Ensure on-time and on-budget delivery that satisfies project requirements, while adhering to enterprise architecture standards
• Leverage big data technologies to help process data and build scaled data pipelines (batch to real time)
• Implement the end-to-end ML lifecycle with Azure Databricks and Azure Pipelines
• Automate ML model deployments

Qualifications:
• BE/B.Tech in Computer Science, Maths, or related technical fields
• Overall 2–4 years of experience working as a Data Scientist
• 2+ years' experience building solutions in the commercial or supply chain space
• 2+ years working in a team to deliver production-level analytic solutions
• Fluent in Git (version control); understanding of Jenkins and Docker is a plus
• Fluent in SQL syntax
• 2+ years' experience in statistical/ML techniques to solve supervised (regression, classification) and unsupervised problems
• 2+ years' experience developing statistical/ML models for business problems with industry tools, with a primary focus on Python or PySpark development
• Data science: hands-on experience and strong knowledge of building supervised and unsupervised machine learning models; knowledge of time series/demand forecast models is a plus
• Programming skills: hands-on experience in statistical programming languages like Python and PySpark, and database query languages like SQL
• Statistics: good applied statistical skills, including knowledge of statistical tests, distributions, regression, and maximum likelihood estimators
• Cloud (Azure): experience in Databricks and ADF is desirable; familiarity with Spark, Hive, and Pig is an added advantage
• Business storytelling and communicating data insights in a business-consumable format; fluent in one visualization tool
• Strong communication and organizational skills, with the ability to deal with ambiguity while juggling multiple priorities
• Experience with Agile methodology for teamwork and analytics 'product' creation
• Experience in reinforcement learning is a plus
• Experience in simulation and optimization problems in any space is a plus
• Experience with Bayesian methods, causal inference, NLP, Responsible AI, or distributed machine learning is a plus
• Experience in DevOps, with hands-on experience with one or more cloud service providers: AWS, GCP, Azure (preferred)
• Model deployment experience is a plus
• Experience with version control systems like GitHub and CI/CD tools
• Experience in exploratory data analysis
• Knowledge of MLOps/DevOps and deploying ML models is preferred; experience using MLflow, Kubeflow, etc. is preferred; experience executing and contributing to MLOps automation infrastructure is good to have
• Exceptional analytical and problem-solving skills
• Stakeholder engagement: business units, vendors
• Experience building statistical models in the retail or supply chain space is a plus

Posted 4 days ago

Apply

5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
Build and optimize data architectures for operational and analytical purposes.
Collaborate with cross-functional teams to gather and define data requirements.
Implement data quality, data governance, and data security practices.
Manage and optimize cloud-based data platforms (Azure/AWS).
Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources.
Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Develop frameworks for data ingestion, transformation, and validation.
Mentor junior data engineers and guide best practices in data engineering.
Evaluate and integrate new technologies and tools to improve data infrastructure.
Ensure compliance with data privacy regulations (HIPAA, etc.).
Monitor performance and troubleshoot issues across the data ecosystem.
Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need
Bachelor's or Master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
Extensive working experience with advanced SQL and a deep understanding of SQL.
Good experience with Azure Data Factory (ADF), Databricks, Python, and PySpark.
Good experience with modern data storage concepts: data lake, lakehouse.
Experience with other cloud services (AWS) and data processing technologies is an added advantage.
Ability to enhance, develop, and resolve defects in ETL processes using cloud services.
Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files, and Access databases.
Experience with software development methodologies (Agile, Waterfall) and version control tools.
Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
Good communication skills.

What Would Be Nice To Have
AWS ETL platform: Glue, S3
One or more programming languages such as Java or .NET
Experience in the US healthcare domain and insurance claim processing.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer: protected veterans, individuals with disabilities, or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinances of Los Angeles and San Francisco. If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains, including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse's Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant's dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse, and Guidehouse will not be obligated to pay a placement fee.
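As a rough illustration of the "frameworks for data ingestion, transformation, and validation" responsibility above (a hypothetical sketch with invented column names, not Guidehouse's actual tooling), a minimal row-level validation pass over an incoming CSV feed might look like this:

```python
import csv
import io

# Hypothetical schema for an incoming client feed; real feeds and rules
# would come from the project's own specifications.
REQUIRED_COLUMNS = {"claim_id", "member_id", "amount"}

def validate_csv(text):
    """Return (good_rows, errors) for a CSV payload."""
    reader = csv.DictReader(io.StringIO(text))
    missing = REQUIRED_COLUMNS - set(reader.fieldnames or [])
    if missing:
        return [], [f"missing columns: {sorted(missing)}"]
    good, errors = [], []
    for i, row in enumerate(reader, start=2):  # line 1 is the header
        if not row["claim_id"]:
            errors.append(f"line {i}: empty claim_id")
            continue
        try:
            row["amount"] = float(row["amount"])
        except ValueError:
            errors.append(f"line {i}: non-numeric amount {row['amount']!r}")
            continue
        good.append(row)
    return good, errors

feed = "claim_id,member_id,amount\nC1,M9,120.50\n,M3,80\nC2,M4,abc\n"
rows, errs = validate_csv(feed)
print(len(rows), len(errs))  # 1 valid row, 2 rejected with reasons
```

In a real pipeline the same checks would typically run inside a PySpark or ADF data-flow step, with rejected rows routed to a quarantine location rather than a list.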

Posted 4 days ago

Apply

10.0 - 15.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site

Linkedin logo

About the company:
With over 2.5 crore customers, over 5,000 distribution points and nearly 2,000 branches, IndusInd Bank is a universal bank with a widespread banking footprint across the country. IndusInd offers a wide array of products and services for individuals and corporates, including microfinance, personal loans, personal and commercial vehicle loans, credit cards, and SME loans. Over the years, IndusInd has grown ceaselessly and dynamically, driven by its zeal to offer customers banking services at par with the highest quality standards in the industry. IndusInd is a pioneer in digital-first solutions, bringing together the power of a next-gen digital product stack, customer excellence, and the trust of an established bank.

Job Purpose:
To implement data modeling solutions
To design data flow and structure to reduce data redundancy and improve data movement among systems, defining a data lineage
To work in the Azure Data Warehouse
To work with large volumes of data integration

Experience
With overall experience between 10 and 15 years, the applicant must have a minimum of 8 to 11 years of hands-on professional experience in data modeling for large data warehouses with multiple sources.

Technical Skills
Expertise in core data modeling principles/methods, including conceptual, logical & physical data models
Ability to utilize BI tools like Power BI, Tableau, etc., to represent insights
Experience in translating/mapping relational data models into XML and schemas
Expert knowledge of metadata management and relational & data modeling tools like ER Studio, Erwin, or others
Hands-on experience in relational, dimensional and/or analytical work (using RDBMS, dimensional, NoSQL, ETL and data ingestion protocols)
Very strong in SQL queries; expertise in performance tuning of SQL queries
Ability to analyse source systems and create source-to-target mappings
Ability to understand the business use case and create data models or joined data in the data warehouse
Preferred: experience in the banking domain and in building data models/marts for various banking functions

Good to have knowledge of:
Azure PowerShell scripting or Python scripting for data transformation in ADF
SSIS, SSAS, and BI tools like Power BI
Azure PaaS components like Azure Data Factory, Azure Databricks, Azure Data Lake, Azure Synapse (DWH), PolyBase, ExpressRoute tunneling, etc.
API integration

Responsibility
Understand the existing data model, existing data warehouse design, and functional domain subject areas of data, documenting both the as-is architecture and the proposed one
Understand the existing ETL process and its various sources, analyzing and documenting the best approach to designing the logical data model where required
Work with the development team to implement the proposed data model into a physical data model and build data flows
Work with the development team to optimize the database structure with best practices, applying optimization methods
Analyze, document, and implement re-use of the data model for new initiatives
Interact with stakeholders, users, and other IT teams to understand the ecosystem and analyze solutions
Work on user requirements and create queries for consumption views for users from the existing DW data
Train and lead a small team of data engineers

Qualifications
Bachelor's degree in Computer Science or equivalent
Certification in Data Modeling and Data Analysis
Good to have: Azure Fundamentals and Azure Engineer certifications (AZ-900 or DP-200/201)

Behavioral Competencies
Excellent problem-solving and time management skills
Strong analytical thinking skills
Excellent communication skills; process-oriented with a flexible execution mindset
Strategic thinking with a research and development mindset
Clear and demonstrative communication
Efficiently identifies and solves issues
Identifies, tracks, and escalates risks in a timely manner

Selection Process:
Interested candidates are mandatorily required to apply through the listing on Jigya. Only applications received through Jigya will be evaluated further. Shortlisted candidates may need to appear for an online assessment and/or a technical screening interview administered by Jigya on behalf of IndusInd Bank. Candidates selected after the screening rounds will be processed further by IndusInd Bank.

Posted 4 days ago

Apply

10.0 - 14.0 years

0 Lacs

Gurugram, Haryana, India

On-site

Linkedin logo

Hiring: CRM Lead Consultant – Microsoft Dynamics 365 CE/CRM

Looking for an experienced CRM Lead Consultant to serve as a technical SME and administrator for the Microsoft Dynamics 365 CE/CRM platform. This role is ideal for a highly skilled professional with deep experience in Dynamics customization, integration, reporting, and solution management.

🔧 What You’ll Do
Lead development and maintenance of the Dynamics CRM platform
Collaborate with business users to gather requirements and architect CRM solutions
Build forms, views, dashboards, plugins, workflows, and reports
Develop solutions using PowerApps, Azure Data Factory, and automation tools
Perform solution deployments and manage GitHub source control
Troubleshoot issues and support application performance

✅ What We’re Looking For
10-14 years of experience in Microsoft Dynamics 365 CE/CRM
Proficiency in JavaScript, C#, .NET, SQL Server, MVC, FetchXML, REST/OData
Hands-on experience with Azure services (ADF, SSIS, DevOps pipelines)
Strong knowledge of the CRM SDK, security models, and GitHub
Bachelor's degree in Computer Science or a related STEM field

⭐ Bonus Points
Microsoft Dynamics 365 certifications
Familiarity with O365 tools (SharePoint, Mobile), Azure SQL, Data Export Service

Posted 4 days ago

Apply

3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

About tsworks:
tsworks is a leading technology innovator, providing transformative products and services designed for the digital-first world. Our mission is to provide domain expertise, innovative solutions and thought leadership to drive exceptional user and customer experiences. Demonstrating this commitment, we have a proven track record of championing digital transformation for industries such as Banking, Travel and Hospitality, and Retail (including e-commerce and omnichannel), as well as Distribution and Supply Chain, delivering impactful solutions that drive efficiency and growth. We take pride in fostering a workplace where your skills, ideas, and attitude shape meaningful customer engagements.

About This Role:
tsworks Technologies India Private Limited is seeking driven and motivated Senior Data Engineers to join its Digital Services Team. You will get hands-on experience with projects employing industry-leading technologies. The role would initially be focused on the operational readiness and maintenance of existing applications and would transition into a build-and-maintain role in the long run.

Requirements
Position: Data Engineer II
Experience: 3 to 10+ Years
Location: Bangalore, India

Mandatory Required Qualification
Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
Expertise in DevOps and CI/CD implementation
Good knowledge of SQL
Excellent communication skills

In This Role, You Will
Design, implement, and manage scalable and efficient data architecture on the Azure cloud platform.
Develop and maintain data pipelines for efficient data extraction, transformation, and loading (ETL) processes.
Perform complex data transformations and processing using Azure Data Factory, Azure Databricks, Snowflake's data processing capabilities, or other relevant tools.
Develop and maintain data models within Snowflake and related tools to support reporting, analytics, and business intelligence needs.
Collaborate with cross-functional teams to understand data requirements and design appropriate data integration solutions.
Integrate data from various sources, both internal and external, ensuring data quality and consistency.
Ensure data models are designed for scalability, reusability, and flexibility.
Implement data quality checks, validations, and monitoring processes to ensure data accuracy and integrity across Azure and Snowflake environments.
Adhere to data governance standards and best practices to maintain data security and compliance.
Handle performance optimization on the ADF and Snowflake platforms.
Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver actionable insights.
Provide guidance and mentorship to junior team members to enhance their technical skills.
Maintain comprehensive documentation for data pipelines, processes, and architecture within both Azure and Snowflake environments, including best practices, standards, and procedures.

Skills & Knowledge
Bachelor's or Master's degree in Computer Science, Engineering, or a related field.
3+ years of experience in Information Technology, designing, developing and executing solutions.
3+ years of hands-on experience in designing and executing data solutions on Azure cloud platforms as a Data Engineer.
Strong proficiency in Azure services such as Azure Data Factory, Azure Databricks, Azure Synapse Analytics, Azure Storage, etc.
Familiarity with the Snowflake data platform would be an added advantage.
Hands-on experience in data modelling and in batch and real-time pipelines using Python, Java or JavaScript, and experience working with RESTful APIs, are required.
Expertise in DevOps and CI/CD implementation.
Hands-on experience with SQL and NoSQL databases.
Hands-on experience in data modelling, implementation, and management of OLTP and OLAP systems.
Experience with data modelling concepts and practices.
Familiarity with data quality, governance, and security best practices.
Knowledge of big data technologies such as Hadoop, Spark, or Kafka.
Familiarity with machine learning concepts and the integration of ML pipelines into data workflows.
Hands-on experience working in an Agile setting.
Self-driven, naturally curious, and able to adapt to a fast-paced work environment.
Able to articulate, create, and maintain technical and non-technical documentation.
Public cloud certifications are desired.

Posted 4 days ago

Apply

6.0 years

0 Lacs

India

Remote

Linkedin logo

Job Title: Senior Data Engineer
Experience: 6+ Years
Location: Remote
Employment Type: Full Time

Job Summary:
We are seeking a highly skilled and experienced Senior Data Engineer to join our dynamic data engineering team. The ideal candidate will have deep expertise in C#, Azure Data Factory (ADF), Databricks, SQL Server, and Python, along with a strong understanding of modern CI/CD practices. You will be responsible for designing, developing, and maintaining scalable and efficient data pipelines and solutions to support analytics, reporting, and operational systems.

Key Responsibilities:
Design, develop, and optimize complex data pipelines using Azure Data Factory, Databricks, and SQL Server.

Posted 4 days ago

Apply

3.0 years

0 Lacs

Ludhiana, Punjab, India

On-site

Linkedin logo

Job Purpose
To develop and optimize knitting programs for STOLL flat knitting machines (CMS, ADF series) using M1plus software, ensuring timely sample and production readiness with precision, innovation, and minimal errors.

Key Responsibilities

Programming & Pattern Development
Create, edit, and simulate knitting programs using STOLL M1plus software.
Translate tech packs, sketches, and knit designs into machine-readable code.
Develop patterns, textures, intarsia, jacquards, structures, and engineered panels as per design.
Work closely with designers and merchandisers to interpret aesthetics technically.

Sampling & Production Support
Execute knitting trials and finalize programs for sampling & bulk.
Fine-tune machine settings (gauge, tension, yarn paths) in coordination with senior operators.
Document and maintain archives of all programs with fabric specs, yarn details, and machine settings.

Quality & Troubleshooting
Evaluate knitted panels for defects, yarn compatibility, and program accuracy.
Revise or troubleshoot patterns in case of loop distortion, miss-knit, or dimensional issues.
Coordinate with the Quality team to implement correct shrinkage, GSM, and measurement protocols.

Collaboration & Training
Support and train junior programmers or interns.
Coordinate with operators for smooth handover and machine setup guidance.
Participate in innovation sessions for new yarns, stitches, or techniques.

Required Skills & Knowledge
Proficiency in STOLL M1plus software (must-have).
Knowledge of the CMS machine series & ADF (Autarkic Direct Feed) is preferred.
Understanding of yarn types, knitting structures, and garment construction.
Ability to read tech packs, spec sheets, and design layouts.
Detail-oriented with logical, structured programming abilities.
Familiarity with knitting-related machine settings and gauge variants (3, 5, 7, 12, 14 GG, etc.).

Preferred Qualifications
Degree/Diploma in Textile Engineering, Knitwear Design, or Apparel Technology.
Minimum 3 years of experience in a flat knitting setup.
Exposure to both domestic and export market requirements.
Bonus: Experience in Knit & Wear, 3D fully fashioned garments, or technical textiles.

Posted 5 days ago

Apply

10.0 years

0 Lacs

Ludhiana, Punjab, India

On-site

Linkedin logo

Job Purpose
To upskill the knitting team (programmers, operators, and interns) on Stoll flat knitting machines (CMS, ADF series) through structured, hands-on training in machine operation, program understanding, and best practices in knitted garment production.

Key Responsibilities
Conduct periodic on-site training sessions covering:
Stoll machine operation and handling
M1plus programming fundamentals and advanced techniques
Program-to-machine coordination and troubleshooting
Train operators and programmers to understand different knit structures (e.g., Piqué, Links-Links, Ribs, Jacquard, Intarsia).
Review and improve existing workflows and operator efficiency in sample and bulk knitting.
Assess skill gaps and tailor training modules accordingly.
Create easy-to-understand SOPs and visual training guides for reference.
Support the implementation of new technologies, yarns, or machines.
Advise management on skill development, training materials, or hiring needs in technical knitting.

Required Skills & Expertise
Minimum 7–10 years of experience with Stoll CMS/ADF series machines.
Proficient in M1plus software for programming and simulation.
Hands-on understanding of both sampling and production processes.
Strong ability to explain technical concepts clearly to semi-skilled workers.
Experience developing or delivering workshop-based training sessions.

Engagement Terms
Frequency of visits: as per mutual discussion
Session duration: as per mutual discussion

Posted 5 days ago

Apply

0 years

0 Lacs

Ludhiana, Punjab, India

On-site

Linkedin logo

Job Purpose
To ensure efficient, high-quality operation of Stoll knitting machines (CMS, ADF, etc.), handle machine settings and minor maintenance, and support the sampling and production process with deep technical knowledge and leadership skills.

Key Responsibilities

Machine Handling & Setup
Operate flat knitting machines (STOLL CMS/ADF series).
Perform machine setting and gauge and cam adjustments.
Change needle beds, set yarn feeders, and check yarn paths.
Conduct trials for new yarns and designs with appropriate tension and programming settings.

Knitting Execution
Run production and sample programs as per tech pack/merchandiser instructions.
Monitor in-progress knitting for defects (missed stitches, holes, stripes, yarn breakage).
Achieve production targets with minimal downtime and waste.

Quality Control & Maintenance
Inspect panels for quality and measurements before handing over to the linking team.
Perform regular cleaning and basic preventive maintenance.
Report major mechanical/electrical faults to maintenance promptly.

Programming Coordination
Coordinate with programmers to understand new patterns or troubleshoot issues.
Suggest improvements in knitting techniques, yarn selection, or settings.

Team Leadership & Training
Guide and support junior operators/helpers.
Maintain discipline and workflow within assigned machines.
Assist in onboarding and training of interns or fresh operators.

Documentation & Reporting
Maintain production logs, downtime reasons, and daily efficiency reports.
Flag any raw material (yarn) or tech pack-related issues.

Skills & Competencies
Expert knowledge of flatbed knitting machines (STOLL CMS/ADF).
Ability to read and interpret knitting programs and technical designs.
Hands-on problem-solving skills.
Team leadership and communication.
Basic understanding of knitting yarns (wool, cotton, synthetics, blends).
Focused on quality and timely output.

Posted 5 days ago

Apply

3.0 years

0 Lacs

India

Remote

Linkedin logo

Title: Azure Data Engineer
Location: Remote
Employment Type: Full Time with BayOne

We’re looking for a skilled and motivated Data Engineer to join our growing team and help us build scalable data pipelines, optimize data platforms, and enable real-time analytics.

What You'll Do
Design, develop, and maintain robust data pipelines using tools like Databricks, PySpark, SQL, Fabric, and Azure Data Factory
Collaborate with data scientists, analysts, and business teams to ensure data is accessible, clean, and actionable
Work on modern data lakehouse architectures and contribute to data governance and quality frameworks

Tech Stack
Azure | Databricks | PySpark | SQL

What We’re Looking For
3+ years of experience in data engineering or analytics engineering
Hands-on with cloud data platforms and large-scale data processing
Strong problem-solving mindset and a passion for clean, efficient data design

Job Description:
Minimum 3 years of experience in modern data engineering / data warehousing / data lake technologies on cloud platforms like Azure, AWS, GCP, Databricks, etc. Azure experience is preferred over other cloud platforms.
5 years of proven experience with SQL, schema design, and dimensional data modelling.
Solid knowledge of data warehouse best practices, development standards, and methodologies.
Experience with ETL/ELT tools like ADF, Informatica, Talend, etc., and data warehousing technologies like Azure Synapse, Microsoft Fabric, Azure SQL, Amazon Redshift, Snowflake, Google BigQuery, etc.
Strong experience with big data tools (Databricks, Spark, etc.) and programming skills in PySpark and Spark SQL.
An independent self-learner with a "let's get this done" approach and the ability to work in a fast-paced and dynamic environment.
Excellent communication and teamwork abilities.

Nice-to-Have Skills:
Knowledge of Event Hub, IoT Hub, Azure Stream Analytics, Azure Analysis Services, and Cosmos DB.
SAP ECC/S/4 and HANA knowledge.
Intermediate knowledge of Power BI.
Azure DevOps and CI/CD deployments; cloud migration methodologies and processes.

BayOne is an Equal Opportunity Employer and does not discriminate against any employee or applicant for employment because of race, color, sex, age, religion, sexual orientation, gender identity, status as a veteran, or on the basis of disability or any other federal, state, or local protected class. This job posting represents the general duties and requirements necessary to perform this position and is not an exhaustive statement of all responsibilities, duties, and skills required. Management reserves the right to revise or alter this job description.
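As a rough illustration of the dimensional-data-modelling skill this posting asks for (a sketch with invented data and column names, not anything specific to BayOne), a star schema splits denormalized records into a fact table plus a dimension keyed by surrogate IDs:

```python
# Minimal star-schema sketch: split denormalized sales records into a
# customer dimension (with surrogate keys) and a fact table that
# references it. Data and column names are invented for illustration.
records = [
    {"customer": "Acme", "city": "Pune", "amount": 100},
    {"customer": "Beta", "city": "Mumbai", "amount": 250},
    {"customer": "Acme", "city": "Pune", "amount": 75},
]

dim_customer = {}   # natural key -> surrogate key
dim_rows = []       # the dimension table
fact_sales = []     # the fact table

for rec in records:
    natural_key = (rec["customer"], rec["city"])
    if natural_key not in dim_customer:
        surrogate = len(dim_rows) + 1
        dim_customer[natural_key] = surrogate
        dim_rows.append({"customer_sk": surrogate,
                         "customer": rec["customer"],
                         "city": rec["city"]})
    fact_sales.append({"customer_sk": dim_customer[natural_key],
                       "amount": rec["amount"]})

print(len(dim_rows))   # 2 distinct customers in the dimension
print(fact_sales[2])   # {'customer_sk': 1, 'amount': 75}
```

In practice the same shape is expressed as warehouse DDL (e.g. in Synapse or Snowflake) rather than Python dictionaries; the sketch only shows the surrogate-key mechanics interviewers typically probe.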

Posted 5 days ago

Apply

0.0 - 10.0 years

0 Lacs

Thane, Maharashtra

On-site

Indeed logo

202503220 Thane, Maharashtra, India

Description

Summary of Role:
We are seeking a Senior Full Stack Developer with 8–10 years of experience working with the Microsoft technology stack and experience with Python or another tech stack. The ideal candidate should have deep expertise in .NET Core, Python, C#, SQL Server, Azure Cloud Services, Angular, and other Microsoft-based development frameworks. This role involves full-cycle application development, including frontend, backend, database, and cloud integration, to build scalable, high-performance solutions.

The Role:

Full Stack Development
Develop, optimize, and maintain applications using .NET Core, C#, ASP.NET, and Azure Functions.
Design and implement responsive frontend UI using Angular.
Build and maintain RESTful APIs for seamless data exchange.
Develop solutions in Python, LangChain, and LangGraph.
Work with connectors, AI Builder, and RPA to extend capabilities.

Database & Cloud Services
Design and manage SQL Server databases, ensuring performance and security.
Develop cloud-native applications leveraging Azure services such as Azure Functions, App Services, and Azure SQL.
Implement data storage solutions using Cosmos DB or Dataverse if required.

Architecture & Integration
Define and implement scalable, secure, and high-performing architecture.
Integrate applications with Microsoft 365, Power Platform, SharePoint, and other third-party services.
Optimize backend services for high availability and low latency.

Security & Best Practices
Ensure secure coding practices, compliance, and role-based access control.
Implement DevOps, CI/CD pipelines, and automated deployment strategies.
Follow Microsoft best practices for application security and performance.

Collaboration & Leadership
Work closely with business teams, architects, and UI/UX designers to deliver high-quality applications.
Mentor junior developers and contribute to code reviews, design discussions, and technical improvements.
Stay updated with Microsoft technologies, frameworks, and industry trends.

The Requirements:
Bachelor's degree in Information Technology or a related field is required.
8–10 years of experience in full stack development using Microsoft technologies.
Strong expertise in .NET Core, C#, ASP.NET MVC/Web API, Angular, and SQL Server.
Experience with Azure Cloud Services (Azure Functions, AI Builder, App Services, Azure SQL, ADF).
Proficiency in front-end frameworks (Angular or React) and responsive UI development.
Solid understanding of software design patterns, microservices architecture, and API integration.
Knowledge of DevOps practices, CI/CD pipelines, and Git-based version control.
Excellent problem-solving, analytical, and communication skills.
Microsoft certifications (such as AZ-204, AZ-400, or DP-900) are a plus.

Qualifications
Bachelor's degree in Information Technology or a related field is required.

Posted 5 days ago

Apply

0.0 - 13.0 years

0 Lacs

Thane, Maharashtra

On-site

Indeed logo

202503219 Thane, Maharashtra, India

Description

Summary of Role:
We are seeking a Senior Full Stack Developer with 10–13 years of experience working with the Microsoft technology stack and experience with Python or another tech stack. The ideal candidate should have deep expertise in .NET Core, Python, C#, SQL Server, Azure Cloud Services, Angular, and other Microsoft-based development frameworks. This role involves full-cycle application development, including frontend, backend, database, and cloud integration, to build scalable, high-performance solutions.

The Role:

Full Stack Development
Develop, optimize, and maintain applications using .NET Core, C#, ASP.NET, and Azure Functions.
Design and implement responsive frontend UI using Angular.
Build and maintain RESTful APIs for seamless data exchange.
Develop solutions in Python, LangChain, and LangGraph.
Work with connectors, AI Builder, and RPA to extend capabilities.

Database & Cloud Services
Design and manage SQL Server databases, ensuring performance and security.
Develop cloud-native applications leveraging Azure services such as Azure Functions, App Services, and Azure SQL.
Implement data storage solutions using Cosmos DB or Dataverse if required.

Architecture & Integration
Define and implement scalable, secure, and high-performing architecture.
Integrate applications with Microsoft 365, Power Platform, SharePoint, and other third-party services.
Optimize backend services for high availability and low latency.

Security & Best Practices
Ensure secure coding practices, compliance, and role-based access control.
Implement DevOps, CI/CD pipelines, and automated deployment strategies.
Follow Microsoft best practices for application security and performance.

Collaboration & Leadership
Work closely with business teams, architects, and UI/UX designers to deliver high-quality applications.
Mentor junior developers and contribute to code reviews, design discussions, and technical improvements.
Stay updated with Microsoft technologies, frameworks, and industry trends.

The Requirements:
Bachelor's degree in Information Technology or a related field is required.
10–13 years of experience in full stack development using Microsoft technologies.
Strong expertise in .NET Core, C#, ASP.NET MVC/Web API, Angular, and SQL Server.
Experience with Azure Cloud Services (Azure Functions, AI Builder, App Services, Azure SQL, ADF).
Proficiency in front-end frameworks (Angular or React) and responsive UI development.
Solid understanding of software design patterns, microservices architecture, and API integration.
Knowledge of DevOps practices, CI/CD pipelines, and Git-based version control.
Excellent problem-solving, analytical, and communication skills.
Microsoft certifications (such as AZ-204, AZ-400, or DP-900) are a plus.

Qualifications
Bachelor's degree in Information Technology or a related field is required.

Posted 5 days ago

Apply

8.0 years

0 Lacs

Greater Kolkata Area

On-site

Linkedin logo

Location: PAN India
Duration: 6 Months
Experience Required: 7–8 years

Job Summary
We are looking for an experienced SSAS Developer with strong expertise in developing both OLAP and Tabular models using SQL Server Analysis Services (SSAS), alongside advanced ETL development skills using tools like SSIS, Informatica, or Azure Data Factory. The ideal candidate will be well-versed in T-SQL, dimensional modeling, and building high-performance, scalable data solutions.

Key Responsibilities
Design, build, and maintain SSAS OLAP cubes and Tabular models
Create complex DAX and MDX queries for analytical use cases
Develop robust ETL workflows and pipelines using SSIS, Informatica, or ADF
Collaborate with cross-functional teams to translate business requirements into BI solutions
Optimize SSAS models for scalability and performance
Implement best practices in data modeling, version control, and deployment automation
Support dashboarding and reporting needs via Power BI, Excel, or Tableau
Maintain and troubleshoot data quality, performance, and integration issues

Must-Have Skills
Hands-on experience with SSAS (Tabular & Multidimensional)
Proficient in DAX, MDX, and T-SQL
Advanced ETL skills using SSIS / Informatica / Azure Data Factory
Knowledge of dimensional modeling (star & snowflake schemas)
Experience with Azure SQL / MS SQL Server
Familiarity with Git and CI/CD pipelines

Nice to Have
Exposure to cloud data platforms (Azure Synapse, Snowflake, AWS Redshift)
Working knowledge of Power BI or similar BI tools
Understanding of Agile/Scrum methodology
Bachelor's degree in Computer Science, Information Systems, or equivalent
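The extract-transform-load workflow pattern this role centers on can be sketched generically (an illustrative Python-only sketch with invented data; the role itself implements the same stages in SSIS, Informatica, or ADF):

```python
# Generic extract-transform-load sketch; each stage is deliberately tiny.
# Stage contents and column names are invented for illustration.

def extract():
    # Stand-in for reading rows from a source system.
    return [{"product": "widget", "qty": "3", "price": "2.50"},
            {"product": "gadget", "qty": "1", "price": "10.00"}]

def transform(rows):
    # Type conversion plus a derived column, as an ETL step would do.
    return [{"product": r["product"],
             "revenue": int(r["qty"]) * float(r["price"])} for r in rows]

def load(rows, target):
    # Stand-in for writing to a warehouse table; returns rows loaded.
    target.extend(rows)
    return len(rows)

warehouse = []
loaded = load(transform(extract()), warehouse)
print(loaded, warehouse[0]["revenue"])  # 2 7.5
```

The separation into three composable stages is the point: each ETL tool named in the posting provides its own components for the same extract, transform, and load roles.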

Posted 5 days ago

Apply

Featured Companies