
45 MS Fabric Jobs

Set up a job alert
JobPe aggregates these listings for easy access, but applications are submitted directly on the employer's job portal.

0.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Ready to build the future with AI? At Genpact, we don't just keep up with technology - we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Manager - Enterprise Data Office. In this role you will lead the design, development, and implementation of complex data solutions using Azure services such as Azure Data Factory, MS Fabric, DataStage, and others.

Responsibilities
- Strong skills in big data technologies and languages such as Python, Spark, and PySpark.
- Proficient in SQL and database technologies, with hands-on experience in data modeling and database design.
- Architect and build scalable, secure, and performant data pipelines for data ingestion, transformation, and delivery.
- Ensure data quality and consistency throughout the data lifecycle.
- Implement data governance practices and ensure compliance with data security and privacy regulations.
- Oversee the design and development of data models, ETL processes, and data pipelines to support BI and analytics requirements.
- Partner with IT and other cross-functional teams to ensure the successful integration and deployment of BI solutions across the organization.
- Serve as a subject matter expert on BI-related topics, providing guidance and support to internal stakeholders as needed.
- Design and develop DataStage ETL workflows and datasets in any ETL tool to be used by BI reporting tools such as Power BI, Tableau, etc.

Qualifications/Skillset
Minimum qualifications:
- Candidates from all branches of M.Tech / B.E. / B.Tech / Graduation are eligible.
- Candidates must be Indian citizens.
- Deep expertise in modern data management (Enterprise Information Management, data warehousing, data lakes, lakehouses, cloud data platforms, data modelling, ETL design, databases, etc.).
- Prior experience with databases (DB2), ETL tools (DataStage), and the Azure cloud environment with Azure SQL/DB2 is a must.
- Knowledge and experience in reporting tools such as Power BI, Tableau, etc.
- Excellent written and oral communication skills and the ability to express complex technical concepts effectively.
- Ability to work in a team-based environment as well as independently.
- Ability to work effectively with people of many different disciplines with varying degrees of technical experience.

Why join Genpact
- Lead AI-first transformation - build and scale AI solutions that redefine industries.
- Make an impact - drive change for global enterprises and solve business challenges that matter.
- Accelerate your career - gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best - learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI - work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture - our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit,' paying to apply, or purchasing equipment or training.
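For context on the pipeline work this role describes (ingestion, cleansing, and delivery with Python/PySpark), a minimal sketch is shown below. The storage paths, column names, and quality rules are illustrative assumptions, not Genpact's actual implementation.

```python
# Illustrative only: a minimal PySpark ingestion-and-cleansing job of the kind the
# role describes. Paths, column names, and rules are hypothetical.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_ingest").getOrCreate()

RAW_PATH = "abfss://raw@examplelake.dfs.core.windows.net/sales/orders/"          # assumed
CURATED_PATH = "abfss://curated@examplelake.dfs.core.windows.net/sales/orders/"  # assumed

raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv(RAW_PATH)
)

# Basic quality and consistency rules: drop exact duplicates, require a key and a
# valid amount, normalise free-text country codes.
clean = (
    raw.dropDuplicates(["order_id"])
       .filter(F.col("order_id").isNotNull())
       .filter(F.col("order_amount") >= 0)
       .withColumn("country_code", F.upper(F.trim(F.col("country_code"))))
       .withColumn("ingest_date", F.current_date())
)

# Land curated data partitioned by load date so downstream BI models can read
# only the slices they need.
(
    clean.write
         .mode("append")
         .partitionBy("ingest_date")
         .parquet(CURATED_PATH)
)
```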

Posted 5 days ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune, Bengaluru, Mumbai (all areas)

Hybrid

MS Fabric, Data Engineering (PySpark), DBX

Posted 5 days ago

Apply

5.0 - 9.0 years

0 Lacs

Haryana

On-site

At Capgemini Invent, we believe that difference drives change. As inventive transformation consultants, we blend our strategic, creative, and scientific capabilities to collaborate closely with clients in delivering cutting-edge solutions tailored to address the challenges of today and tomorrow. Our approach is informed and validated by science and data, superpowered by creativity and design, and all underpinned by purpose-driven technology.

Your role will involve proficiency in technologies such as MS Fabric, Azure Data Factory, Azure Synapse Analytics, Azure Databricks, Lakehouses, OneLake, Data Pipelines, Real-Time Analytics, Power BI integration, and semantic models. You will be responsible for integrating Fabric capabilities to ensure seamless data flow, governance, and collaboration across teams. A strong understanding of Delta Lake, Parquet, and distributed data systems is essential. Additionally, strong programming skills in Python, PySpark, Scala, or Spark SQL/T-SQL for data transformations are required.

In terms of your profile, we are looking for individuals with strong experience in the implementation and management of lakehouses using Databricks and the Azure tech stack (ADLS Gen2, ADF, Azure SQL). Proficiency in data integration techniques, ETL processes, and data pipeline architectures is crucial. An understanding of machine learning algorithms and AI/ML frameworks (such as TensorFlow and PyTorch) and Power BI will be an added advantage. MS Fabric and PySpark proficiency are must-have skills for this role.

Working with us, you will appreciate the significance of flexible work arrangements that support remote work or flexible work hours, allowing you to maintain a healthy work-life balance. Our commitment to your career growth is at the heart of our mission, offering an array of career growth programs and diverse professions to help you explore a world of opportunities. You will have the opportunity to equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner that accelerates organizations' dual transition to a digital and sustainable world, creating tangible impact for enterprises and society. With a diverse team of over 340,000 members in more than 50 countries, Capgemini's over 55-year heritage is built on the trust of its clients to unlock technology's value in addressing their entire business needs. The company delivers end-to-end services and solutions spanning strategy, design, and engineering, driven by market-leading capabilities in AI, generative AI, cloud, and data, complemented by deep industry expertise and a strong partner ecosystem.
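As a rough illustration of the Delta Lake / PySpark transformation work this role references, the sketch below performs an incremental upsert into a curated Delta table. The table and column names are assumptions, not Capgemini's actual code.

```python
# A minimal sketch of an incremental upsert into a Delta table with PySpark, the kind
# of lakehouse transformation this role references. Names are illustrative only.
from pyspark.sql import SparkSession, functions as F
from delta.tables import DeltaTable

spark = (
    SparkSession.builder.appName("customer_upsert")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# New and changed rows arriving from the source system (hypothetical staging table).
updates = (
    spark.table("staging.customer_changes")
         .withColumn("email", F.lower(F.trim("email")))
         .dropDuplicates(["customer_id"])
)

# Merge changes into the curated Delta table: update existing keys, insert new ones.
target = DeltaTable.forName(spark, "curated.dim_customer")
(
    target.alias("t")
    .merge(updates.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```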

Posted 6 days ago

Apply

4.0 - 6.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About the Role
We are looking for a versatile and driven Engineer with a strong foundation in Data Engineering and a growing focus on AI Engineering. This role is pivotal in designing and delivering robust data pipelines and platforms that power operational processes and advanced analytics, while also contributing to the development and deployment of AI-driven solutions. You will work closely with cross-functional teams to build scalable data infrastructure, ensure data quality, and support AI initiatives that drive business value.

Key Responsibilities
- Design, develop, and maintain scalable data pipelines and ETL processes.
- Build and optimize data storage solutions such as data lakes and data warehouses.
- Integrate data from diverse sources including APIs, databases, and third-party providers.
- Ensure data quality, consistency, and governance across systems.
- Develop and maintain data models to support analytical and operational use cases.
- Collaborate with business analysts and stakeholders to translate requirements into technical specifications.
- Monitor and enhance the performance of data systems and resolve bottlenecks.
- Document data engineering processes and promote best practices.
- Support the development and deployment of Gen AI applications.
- Assist in designing AI solutions from PoC into production environments.
- Contribute to model validation, monitoring, and performance tuning.
- Stay current with emerging AI technologies and tools.

About the team: The role is assigned to the Data Engineering & Analytics Product Area (Area III), which is the data engineering and analytics backbone for business teams spanning HR, Legal & Compliance, Procurement, Communications, Branding, Marketing & Corporate Real Estate.

About you: As a successful candidate for this role, you possess the below traits.

Must have:
- Experience with data modeling and schema design.
- Bachelor's or Master's degree in Computer Science, Engineering, Mathematics, Statistics, or a related field.
- 4-6 years of experience in data engineering, with exposure to AI/ML workflows and end-to-end pipelines from sourcing to reporting.
- Knowledge of data quality best practices, with experience in reconciliation testing.
- Proficiency in Python, PySpark, and cloud platforms such as Palantir Foundry or Azure Databricks / Azure Synapse / MS Fabric.
- Experience with data pipeline tools (e.g., Airflow, Azure Data Factory) and data lake architectures.
- Knowledge of integration technologies like REST/SOAP APIs and event-driven architectures.
- Strong problem-solving skills and a commitment to delivering high-quality solutions.
- Excellent communication skills and the ability to work with both technical and non-technical stakeholders.
- A desire to continuously upskill and stay updated with emerging technologies.

Good to have:
- Certified in Palantir Foundry / experience with Palantir AIP.
- In-depth knowledge of LLMs, AI agents, RAG architectures, and agentic flows.
- Experience in building chatbots/applications using LLMs, AI agents, RAG architectures, and agentic flows.
- Familiarity with machine learning frameworks (e.g., Scikit-learn, TensorFlow, PyTorch).
- Proficiency in SQL and experience with relational databases (e.g., Oracle, Azure SQL).

About Swiss Re
Swiss Re is one of the world's leading providers of reinsurance, insurance and other forms of insurance-based risk transfer, working to make the world more resilient. We anticipate and manage a wide variety of risks, from natural catastrophes and climate change to cybercrime. We cover both Property & Casualty and Life & Health.
Combining experience with creative thinking and cutting-edge expertise, we create new opportunities and solutions for our clients. This is possible thanks to the collaboration of more than 14,000 employees across the world. Our success depends on our ability to build an inclusive culture encouraging fresh perspectives and innovative thinking. We embrace a workplace where everyone has equal opportunities to thrive and develop professionally regardless of their age, gender, race, ethnicity, gender identity and/or expression, sexual orientation, physical or mental ability, skillset, thought or other characteristics. In our inclusive and flexible environment everyone can bring their authentic selves to work and their passion for sustainability. If you are an experienced professional returning to the workforce after a career break, we encourage you to apply for open positions that match your skills and experience. Keywords: Reference Code: 134837
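To illustrate the API-to-data-lake integration work this role mentions, a minimal Python sketch follows. The endpoint, pagination scheme, and landing path are hypothetical, not a real Swiss Re system.

```python
# Illustrative sketch of API-to-data-lake ingestion of the kind described above.
# The endpoint, pagination scheme, and output path are assumptions.
import json
import pathlib
import datetime
import requests

API_URL = "https://api.example.com/v1/policies"   # hypothetical source system
OUT_DIR = pathlib.Path("/lake/raw/policies")      # hypothetical landing zone

def fetch_all(url: str, page_size: int = 500):
    """Yield records across pages until the source reports no more data."""
    page = 1
    while True:
        resp = requests.get(url, params={"page": page, "page_size": page_size}, timeout=30)
        resp.raise_for_status()
        records = resp.json().get("items", [])
        if not records:
            return
        yield from records
        page += 1

def land_as_jsonl(records, out_dir: pathlib.Path) -> pathlib.Path:
    """Write raw records as newline-delimited JSON, partitioned by load date."""
    load_date = datetime.date.today().isoformat()
    target = out_dir / f"load_date={load_date}" / "policies.jsonl"
    target.parent.mkdir(parents=True, exist_ok=True)
    with target.open("w", encoding="utf-8") as fh:
        for rec in records:
            fh.write(json.dumps(rec) + "\n")
    return target

if __name__ == "__main__":
    path = land_as_jsonl(fetch_all(API_URL), OUT_DIR)
    print(f"Landed raw extract at {path}")
```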

Posted 6 days ago

Apply

8.0 - 12.0 years

12 - 14 Lacs

Remote, India

On-site

Role Overview:
We are looking for an experienced Data Architect to design, develop, and optimize enterprise data solutions on Microsoft Azure.

Key Responsibilities:
- Lead architecture, design, and implementation of Data Warehouse and Data Lake solutions.
- Work across the Microsoft Azure stack: Azure Data Factory, Synapse, SQL Database, Key Vault, MS Fabric, DevOps, VNets.
- Design data models aligned with Medallion Architecture (Bronze, Silver, Gold layers).
- Create and manage star/snowflake schemas, partitioning strategies, and scalable data flows.
- Develop optimized SQL (packages, procedures, triggers, transformations, performance tuning).
- Collaborate with BI teams using Power BI / Tableau for reporting solutions.
- Oversee environment setup, deployment strategies, and capacity planning.
- Ensure optimal use of cloud resources to balance performance and cost.
- Manage stakeholders and communicate effectively across teams.

Required Skills & Experience:
- 8+ years in Data Warehouse / Data Lake programming.
- At least 2 end-to-end project implementations (1 on Azure mandatory).
- Strong technical expertise in the Azure ecosystem and SQL.
- Preferred: Azure Solutions Architect Expert or Azure Data Engineer Associate certification.
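The Medallion Architecture this role calls for can be pictured with a condensed PySpark sketch that promotes data from Bronze to Silver to Gold. The storage paths, columns, and reporting grain below are illustrative assumptions.

```python
# A condensed, illustrative sketch of the Bronze -> Silver -> Gold flow mentioned
# above, using PySpark. Storage paths, columns, and the business grain are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("medallion_demo").getOrCreate()
LAKE = "abfss://lake@exampleaccount.dfs.core.windows.net"   # hypothetical container

# Bronze: land source data as-is, plus load metadata.
bronze = (
    spark.read.json(f"{LAKE}/landing/sales/")
         .withColumn("_ingested_at", F.current_timestamp())
)
bronze.write.mode("append").format("delta").save(f"{LAKE}/bronze/sales")

# Silver: cleanse, deduplicate, and conform types.
silver = (
    spark.read.format("delta").load(f"{LAKE}/bronze/sales")
         .dropDuplicates(["order_id"])
         .filter(F.col("order_id").isNotNull())
         .withColumn("order_date", F.to_date("order_date"))
         .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)
silver.write.mode("overwrite").format("delta").save(f"{LAKE}/silver/sales")

# Gold: star-schema style fact aggregated to the reporting grain used by BI.
gold = (
    silver.groupBy("order_date", "product_id", "region_id")
          .agg(F.sum("amount").alias("net_sales"),
               F.countDistinct("order_id").alias("order_count"))
)
gold.write.mode("overwrite").format("delta").save(f"{LAKE}/gold/fact_sales_daily")
```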

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Hyderabad, Telangana

On-site

Optum is a global organization dedicated to delivering care and improving healthcare outcomes for millions of individuals worldwide. As a valued member of our team, you will have the opportunity to make a meaningful impact by connecting people with essential care, pharmacy benefits, data, and resources to support their well-being. Our culture is rooted in diversity, inclusion, and collaboration, offering talented peers, comprehensive benefits, and avenues for career development. Join us in our mission to advance health equity on a global scale through caring, connecting, and growing together.

This role reports to the head of the Actuarial Data Warehouse Business Intelligence team and focuses on a cloud-based data and analytics environment in which client data is processed to support advanced actuarial and data science analytics using tools such as Databricks, DBT, MS Fabric, and Power BI.

Your responsibilities will include designing, developing, implementing, testing, deploying, monitoring, and maintaining data enrichments and reporting models to facilitate actuarial reporting and analytics. You will collaborate with the BI team to build and deploy healthcare data enrichments, create high-performance reporting models using DBT for Power BI deployment, and design Azure Databricks jobs using Python and Spark. You will establish and maintain CI/CD processes using tools like Jenkins, GitHub, and Maven, provide support for monthly and quarterly production activities, and document data definitions and processes for data governance and security. You will also address non-standard requests and issues, maintain the self-service BI warehouse, collaborate with business owners to incorporate new enrichments and develop reporting models, and ensure compliance with company policies, procedures, and directives.

Required qualifications for this role include an undergraduate degree or equivalent experience, 7 years of overall experience in data and analytics engineering, 5 years of experience coding Big Data solutions using Spark and Python, familiarity with Azure, Databricks, DBT, medical and Rx claims, MS Fabric, Power BI, and CI/CD tools, and experience in designing and deploying reporting data models. Proficiency in Python and SQL and excellent communication skills are essential. Preferred qualifications include an undergraduate degree in a STEM field, Snowflake experience, Power BI development experience, previous work as an actuary, and knowledge of healthcare concepts like benefits, pricing, underwriting, and reserves.

Location: Hyderabad, Telangana, IN

If you are passionate about leveraging your expertise in data and analytics to drive positive change in healthcare, we encourage you to apply and be part of our dynamic team.
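The DBT reporting models this role mentions can be sketched as a dbt Python model (supported on Spark-based adapters such as dbt-databricks). The upstream model name and columns below are hypothetical, not Optum's actual schema.

```python
# models/marts/rpt_claims_monthly.py
# A minimal sketch of a dbt Python model building a reporting table for Power BI.
# The upstream model "stg_medical_claims" and its columns are hypothetical.
import pyspark.sql.functions as F


def model(dbt, session):
    # Materialize the result as a physical table the BI layer can query directly.
    dbt.config(materialized="table")

    claims = dbt.ref("stg_medical_claims")  # upstream staging model (assumed)

    # Aggregate claims to the month x plan grain used by actuarial reporting.
    return (
        claims.withColumn("service_month", F.date_trunc("month", F.col("service_date")))
              .groupBy("service_month", "plan_id")
              .agg(
                  F.sum("allowed_amount").alias("allowed_amount"),
                  F.sum("paid_amount").alias("paid_amount"),
                  F.countDistinct("claim_id").alias("claim_count"),
              )
    )
```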

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Senior Software Engineer at Carelon Global Solutions India, you will be a key member of our team dedicated to improving data-driven decision-making capabilities and optimizing business processes through data analytics. Your primary responsibility will involve designing, implementing, and maintaining data analytics solutions using Power BI. Your tasks will include designing and developing Power BI reports and dashboards to meet the needs of business stakeholders, gathering and understanding business requirements for data visualization and analysis, collaborating with data engineers and analysts to acquire, clean, and transform data, creating complex DAX calculations and measures, ensuring data security and compliance, troubleshooting and resolving issues in Power BI reports, providing training and support to end users, and staying updated on the latest Power BI features and trends. To excel in this role, you should hold a Bachelor's or higher degree in Computer Science, Data Science, IT, or a related field, or possess equivalent experience. You should have at least 5 years of experience in Power BI analytics solution development, proficiency in Power BI development, a strong understanding of data modeling and visualization concepts, experience with SQL and Data Analysis Expressions (DAX), proficiency in Python for software solutions, exposure to cloud technologies like AWS and Azure, experience with ETL tools, and familiarity with data science tools like R and Python. Additionally, you should demonstrate excellent problem-solving abilities, solid communication and collaboration skills, the capacity to work independently and with onshore teams, a strong understanding of working in production environments, experience with data warehouse systems and large datasets, and strong analytical skills. This position also requires availability during the 2-11 PM (IST) time frame and approximately 25% of your time will be spent in customer-facing virtual meetings. At Carelon, we offer a world of limitless opportunities to our associates, fostering an environment that promotes growth, well-being, purpose, and a sense of belonging. Our focus on learning and development, innovative culture, holistic well-being, comprehensive rewards and recognitions, competitive health and medical insurance coverage, best-in-class amenities and workspaces, and policies designed with associates at the center make Carelon a great place to work. We are an equal opportunity employer committed to diversity and inclusion. If you require reasonable accommodation such as an interpreter or a different interview format due to a disability, please request the Reasonable Accommodation Request Form. Join us at Carelon Global Solutions India and be part of a team that values limitless minds and limitless opportunities.,

Posted 1 week ago

Apply

8.0 - 13.0 years

10 - 20 Lacs

Chennai

Remote

Greetings from IT Resonance Inc. We're always looking to expand our team of talented professionals, and IT Resonance Inc is currently seeking qualified candidates who would make a good fit for the Microsoft Fabric & Power BI Developer role.

Position: Microsoft Fabric & Power BI Developer
Work Timing: 4:30 PM IST to 12:30 AM IST
Work Location: Remote
Job Type: Freelance/Contract
Experience: 8+ Years

Responsibilities:
1. Develop and manage data pipelines and Lakehouse/Warehouse models in Microsoft Fabric.
2. Build interactive Power BI dashboards and reports based on business needs.
3. Design and implement data models and semantic layers for reporting.
4. Solid understanding of Business Intelligence concepts, with hands-on experience in Power BI for data visualization and reporting.
5. Understanding of data warehousing concepts, architectures, and models.
6. Familiarity with cloud computing platforms (e.g., Azure, AWS, Google Cloud) and services related to data storage and processing.
7. Use Git and deployment pipelines for version control and release management.
8. Monitor and maintain Power BI Service reports, datasets, and workspaces; ensure data quality, performance tuning, and efficient refresh schedules.

Interested candidates can share profiles to swetha@itresonance.com / +91 8925526510

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

Are you prepared to advance your potential and contribute significantly within the fields of life sciences, diagnostics, and biotechnology? Join Cytiva, a part of Danaher's group of operating companies, where our collective efforts save lives and are driven by a mutual dedication to innovation for genuine impact. Embrace a culture of inclusivity where your individual perspective holds significance. Utilizing Danaher's continuous improvement system, you will play a pivotal role in transforming concepts into significant outcomes, innovating at the pace of life.

Working at Cytiva places you at the forefront of delivering innovative solutions to improve human health. Our esteemed customers engage in life-saving endeavors spanning from fundamental biological research to the development of cutting-edge vaccines, new medications, and advanced cell and gene therapies. At Cytiva, you will have the opportunity to enhance both yourself and our collective capabilities, addressing challenges that truly make a difference alongside individuals who prioritize mutual care for one another, our clients, and their patients. Take the next step towards a profoundly transformative career. Discover the Danaher Business System, the driving force behind endless possibilities.

The role of HR Digital Analyst involves designing, developing, and implementing AI-powered HR agents that optimize workforce efficiency, streamline operations, and elevate the associate experience. Reporting to the Director of HR Operations, this position is based in Bangalore, India, and requires on-site presence.

Key Responsibilities:
- Enhance productivity through the implementation of AI agents that analyze work patterns, propose workflow enhancements, and minimize inefficiencies.
- Develop autonomous workforce assistants that offer proactive insights, recommendations, and assistance to associates.
- Utilize predictive HR analytics to forecast talent requirements, identify burnout risks, and suggest strategic workforce interventions.
- Foster continuous learning and adaptation by refining agents through reinforcement learning, NLP, and feedback loops to ensure alignment with business objectives and company ethos.

Qualifications:
- Bachelor's degree in Human Resources, Business Administration, or an IT-related field.
- Proficiency in reinforcement learning, multi-agent systems, and generative AI for HR applications.
- Understanding of HR technologies, workforce optimization, and associate experience platforms.
- Competence in Python, R, or other programming languages for AI-driven automation.
- Experience in chatbot development, generative AI, and intelligent virtual assistants.
- Ability to leverage AI-driven insights to enhance HR operations.

Desired Experience:
- Familiarity with HR platforms such as Workday, SAP SuccessFactors, or AI-driven ATS.
- Background in AI agent development, NLP (Natural Language Processing), and automation frameworks.
- Experience with Microsoft tools including Copilot, Power Automate, Power Virtual Agents, MS Fabric, and Power BI.

Join our dynamic team today and together, we will amplify the real-world impact of tomorrow's science and technology. Collaborate with customers worldwide to address their most intricate challenges, crafting solutions that breathe life into the realm of science. For further details, please visit www.danaher.com.

At Danaher, diversity is celebrated, recognizing the value of both visible and invisible similarities and differences among our workforce, workplace, and the markets we serve. Our associates, customers, and shareholders bring forth unique perspectives and insights stemming from this rich tapestry of diversity.

Posted 1 week ago

Apply

2.0 - 4.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

About this role: Wells Fargo is seeking an Analytics Consultant.

In this role, you will: Consult with business line and enterprise functions on less complex research. Use functional knowledge to assist in non-model quantitative tools that support strategic decision making. Perform analysis of findings and trends using statistical analysis and document the process. Present recommendations to increase revenue, reduce expense, and maximize operational efficiency, quality, and compliance. Identify and define business requirements and translate data and business needs into research and recommendations to improve efficiency. Participate in all group technology efforts including design and implementation of database structures, analytics software, storage, and processing. Develop customized reports and ad hoc analyses to make recommendations and provide guidance to less experienced staff. Understand compliance and risk management requirements for the supported area. Ensure adherence to data management or data governance regulations and policies. Participate in company initiatives or processes to assist in meeting risk and capital objectives and other strategic goals. Collaborate and consult with more experienced consultants and with partners in technology and other business groups.

Required Qualifications: 2+ years of Analytics experience, or equivalent demonstrated through one or a combination of the following: work experience, training, military experience, education.

Desired Qualifications: Hands-on proficiency in Business Intelligence (BI), particularly Microsoft Power BI, MS Fabric, Power Automate, and Power Platform. Hands-on proficiency in any or all of the programming languages used for analytics and data science, such as SAS, Python, PySpark, Spark SQL, or Scala. Strong hands-on knowledge of SQL and experience with database management systems (e.g., Teradata, PostgreSQL, MySQL, or NoSQL databases). Familiarity with data warehousing and big data technologies (e.g., Hadoop, Spark, Snowflake, Redshift). Experience with ELT/ETL tools and data integration techniques. Experience optimizing code for performance and cost. Comfortable with using code and agile process management tools like GitHub and JIRA. Exposure to developing solutions for high-volume, low-latency applications and the ability to operate in a fast-paced, highly collaborative environment. Provide production support for data assets and products as required. Knowledge of data modelling and data warehousing best practices. Understanding of data governance, data quality, and data security principles. Strong problem-solving and communication skills. Ability to work in a collaborative team environment. Knowledge of cloud platforms (e.g., Azure and/or Google Cloud) is a plus.

Job Expectations: Design, develop, and maintain ETL (Extract, Transform, Load) and ELT processes and data pipelines to move and transform data from various sources into a centralized data repository. Design, implement, and optimize data warehouses and data lakes to ensure scalability, performance, and data consistency. Create and manage data models to support business requirements, ensuring data accuracy, integrity, and accessibility. Integrate data from diverse sources, including databases, APIs, third-party services, and streaming data, and ensure data quality and consistency. Cleanse, transform, and enrich raw data to make it suitable for analysis and reporting.
Implement and enforce data security measures to protect sensitive information and ensure compliance with data privacy regulations (e.g., GDPR, HIPAA). Independently build, operate, maintain, enhance, publish and sunset BI Products (own end-to-end life cycle) across enterprise stakeholders along with up-to-date maintenance of all required documentation and artefacts such as SOPs, previous versions, secondary quality reviews, etc. in various BI tools such as Tableau, PowerBI, etc. Continuously monitor and optimize data pipelines and databases for improved performance and efficiency. Develop and implement automated testing procedures to validate data quality and pipeline reliability. Maintain thorough documentation of data processes, schemas, and data lineage to support data governance efforts. Collaborate with wider team such as data scientists, analysts, software engineers, and other stakeholders to understand their data requirements and provide data solutions that meet their needs. Utilize version control systems to manage code and configurations related to data pipelines. Diagnose and resolve data-related issues and provide technical support as needed. Working Hours: 1:30PM-10:30PM India Time Posting End Date: 11 Sep 2025 We Value Equal Opportunity Wells Fargo is an equal opportunity employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other legally protected characteristic. Employees support our focus on building strong customer relationships balanced with a strong risk mitigating and compliance-driven culture which firmly establishes those disciplines as critical to the success of our customers and company. They are accountable for execution of all applicable risk programs (Credit, Market, Financial Crimes, Operational, Regulatory Compliance), which includes effectively following and adhering to applicable Wells Fargo policies and procedures, appropriately fulfilling risk and compliance obligations, timely and effective escalation and remediation of issues, and making sound risk decisions. There is emphasis on proactive monitoring, governance, risk identification and escalation, as well as making sound risk decisions commensurate with the business unit's risk appetite and all risk and compliance program requirements. Candidates applying to job openings posted in Canada: Applications for employment are encouraged from all qualified candidates, including women, persons with disabilities, aboriginal peoples and visible minorities. Accommodation for applicants with disabilities is available upon request in connection with the recruitment process. Applicants with Disabilities To request a medical accommodation during the application or interview process, visit . Drug and Alcohol Policy Wells Fargo maintains a drug free workplace. Please see our to learn more. Wells Fargo Recruitment and Hiring Requirements: a. Third-Party recordings are prohibited unless authorized by Wells Fargo. b. Wells Fargo requires you to directly represent your own experiences during the recruiting and hiring process.
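As an illustration of the automated data-quality and pipeline-reliability testing this role asks for, a hedged PySpark reconciliation check is sketched below. The table names and business key are assumptions, not Wells Fargo code.

```python
# Illustrative sketch: an automated reconciliation check that compares a source
# extract with the loaded target to validate pipeline reliability. Names are assumed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("recon_check").getOrCreate()

source = spark.table("staging.transactions_extract")   # hypothetical
target = spark.table("warehouse.fact_transactions")    # hypothetical

checks = {}

# 1. Row-count parity between source and target.
checks["row_count_match"] = source.count() == target.count()

# 2. Control-total parity on a key measure, within a small tolerance.
src_total = source.agg(F.sum("amount").alias("t")).first()["t"] or 0.0
tgt_total = target.agg(F.sum("amount").alias("t")).first()["t"] or 0.0
checks["amount_total_match"] = abs(float(src_total) - float(tgt_total)) < 0.01

# 3. No business keys dropped during the load.
missing = source.select("txn_id").subtract(target.select("txn_id")).count()
checks["no_missing_keys"] = missing == 0

failed = [name for name, ok in checks.items() if not ok]
if failed:
    # Fail loudly so the orchestrator marks the run red and alerts the team.
    raise AssertionError(f"Reconciliation failed: {failed}")
print("All reconciliation checks passed.")
```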

Posted 1 week ago

Apply

7.0 - 15.0 years

25 - 35 Lacs

Chennai, Tamil Nadu, India

On-site

Hi, greetings from Tasya Infra IT Solutions Pvt. Ltd. We are looking for Senior Data Engineers for Chennai.

Senior Data Engineer (MS Fabric, Power BI, ADF, API, SAP, and HR Domain) - Chennai only, 5 days work from office.

Responsibilities:
- Lead the design and delivery of advanced analytics solutions, from raw data to insight-driven stories.
- Collaborate with stakeholders to define KPIs, performance measures, and strategic questions.
- Own the development of predictive models and what-if scenarios that drive business foresight.
- Design and optimize robust data models, pipelines, and ETL processes using Power BI, SQL, and Azure tools.
- Guide and mentor junior analysts and developers, setting high standards for analytical reasoning and impact.
- Present insights confidently to senior leaders in clear, actionable, and visually compelling ways.
- Collaborate with cross-functional teams to connect analytics with broader digitalization strategies.

Requirements:
- 7+ years of hands-on experience in analytics, BI, or data engineering roles with proven business value delivery.
- Mastery of Power BI (DAX, data modeling, storytelling) and strong SQL skills.
- Deep understanding of data pipelines, ETL frameworks, and cloud platforms (Azure preferred).
- Strong predictive analytics mindset, with comfort in trend analysis, forecasting, or even basic machine learning techniques.
- Ability to distill complexity into insights, with excellent storytelling and data visualization skills.
- Demonstrated self-leadership, strong work ethic, and ownership mentality.
- Excellent communication and interpersonal skills, able to translate data into business action.
- Experience with Microsoft Fabric, Azure Data Factory, Databricks, or similar tools.
- Previous success in HR analytics, financial analytics, or operational intelligence.
- Exposure to shared services, outsourcing, or people-intensive businesses.
- Certifications: PL-300, DP-900, AZ-900, or similar (good to have).
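The trend analysis and forecasting the requirements mention can be as simple as fitting a linear trend to monthly KPI history. The sketch below uses scikit-learn on synthetic values purely for illustration; real work would add seasonality and business drivers.

```python
# A minimal, illustrative trend-forecasting sketch of the kind the posting alludes to.
# The monthly KPI values are synthetic, not real data.
import numpy as np
from sklearn.linear_model import LinearRegression

# 24 months of a hypothetical KPI (e.g., monthly attrition count).
history = np.array([42, 45, 44, 47, 50, 49, 52, 55, 53, 56, 58, 60,
                    59, 62, 64, 63, 66, 68, 67, 70, 72, 71, 74, 76], dtype=float)
months = np.arange(len(history)).reshape(-1, 1)

# Fit a simple linear trend; in practice seasonality and exogenous drivers would
# be considered before drawing business conclusions.
model = LinearRegression().fit(months, history)

# Forecast the next two quarters (6 months ahead).
future = np.arange(len(history), len(history) + 6).reshape(-1, 1)
forecast = model.predict(future)

for step, value in zip(future.ravel(), forecast):
    print(f"month {step + 1}: forecast {value:.1f}")
```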

Posted 2 weeks ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Data Engineer working on MS Fabric, your primary responsibility will be designing, implementing, and managing scalable data pipelines using the MS Fabric / Azure tech stack, including ADLS Gen2, ADF, and Azure SQL. You should have a strong background in data integration techniques, ETL processes, and data pipeline architectures. Additionally, you will need to be well-versed in data quality rules, principles, and implementation.

Your key result areas and activities will include:

1. Data Pipeline Development & Optimization:
- Design and implement data pipelines using MS Fabric.
- Manage and optimize ETL processes for data extraction, transformation, and loading.
- Conduct performance tuning for data storage and retrieval to enhance efficiency.

2. Data Quality, Governance & Documentation:
- Ensure data quality and integrity across all data processes.
- Assist in designing data governance frameworks and policies.
- Generate and maintain documentation for data architecture and data flows.

3. Cross-Functional Collaboration & Requirement Gathering:
- Collaborate with cross-functional teams to gather and define data requirements.
- Translate functional and non-functional requirements into system specifications.

4. Technical Leadership & Support:
- Provide technical guidance and support to junior data engineers.
- Participate in code reviews and ensure adherence to coding standards.
- Troubleshoot data-related issues and implement effective solutions.

In terms of technical experience, you must be proficient in MS Fabric, Azure Data Factory, and Azure Synapse Analytics. You should have deep knowledge of Fabric components such as Notebooks, Lakehouses, OneLake, Data Pipelines, and Real-Time Analytics, and be skilled in integrating Fabric capabilities for seamless data flow, governance, and cross-team collaboration. A strong grasp of Delta Lake, Parquet, distributed data systems, and various data formats (JSON, XML, CSV, Parquet) is essential. Experience in ETL/ELT processes, data warehousing, data modeling, and data quality frameworks is required. Proficiency in Python, PySpark, Scala, Spark SQL, and T-SQL for complex data transformations is a must-have.

It would be beneficial if you have familiarity with Azure cloud platforms and cloud data services, MS Purview, and open-source libraries like Deequ, PyDeequ, and Great Expectations for data quality implementation. Additionally, experience with developing data models to support business intelligence and analytics, Power BI dashboards, and Databricks is a plus.

To qualify for this position, you should hold a Bachelor's or Master's degree in Computer Science, Engineering, or a related field, along with at least 5 years of experience in MS Fabric/ADF/Synapse. You should also have experience with or knowledge of Agile software development methodologies and be able to consult, write, and present persuasively.
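For the data-quality rules this role emphasizes, a minimal plain-PySpark sketch is shown below (libraries such as Deequ or Great Expectations would add richer profiling). The table name and thresholds are assumptions.

```python
# Illustrative only: a small set of declarative data-quality rules evaluated with
# plain PySpark. The table and rule thresholds are assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_rules").getOrCreate()
df = spark.table("lakehouse.silver_orders")   # hypothetical curated table
total = df.count()

results = {
    # Completeness: order_id must never be null.
    "order_id_not_null": df.filter(F.col("order_id").isNull()).count() == 0,
    # Uniqueness: order_id must be unique.
    "order_id_unique": df.select("order_id").distinct().count() == total,
    # Validity: amounts must be non-negative.
    "amount_non_negative": df.filter(F.col("amount") < 0).count() == 0,
    # Freshness: at least some rows loaded in the last day.
    "loaded_recently": df.filter(
        F.col("ingest_date") >= F.date_sub(F.current_date(), 1)
    ).count() > 0,
}

for rule, passed in results.items():
    print(f"{rule}: {'PASS' if passed else 'FAIL'}")

if not all(results.values()):
    raise RuntimeError("Data quality checks failed; blocking downstream refresh.")
```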

Posted 2 weeks ago

Apply

9.0 - 13.0 years

0 Lacs

Haryana

On-site

As a Data Engineer with expertise in Power BI Development, you will be responsible for developing and maintaining scalable data pipelines and ETL processes to facilitate data integration and analytics. Your role will involve designing interactive Power BI dashboards and reports to visualize complex datasets, in addition to collaborating with stakeholders to understand requirements and translate them into technical solutions. You will work with cloud platforms such as AWS, Azure, and Google Cloud to manage and process large-scale data effectively. Furthermore, you will play a crucial role in implementing and optimizing data storage solutions using tools like Snowflake, Databricks, Dremio, and MS Fabric. Your proficiency in writing efficient SQL queries for data extraction, transformation, and analysis will be essential. Additionally, your utilization of Python for data manipulation, automation, and machine learning tasks will contribute significantly to the success of data projects. Ensuring data quality, security, and compliance with organizational standards will be paramount in your responsibilities. It will also be imperative for you to stay abreast of the latest trends in data engineering, machine learning, and AI technologies to enhance your contributions to the organization. The ideal candidate for this role will possess proficiency in Power BI report development, including DAX and Power Query, as well as a strong foundation in SQL and Python programming. Hands-on experience with Neo4j and Graph Database, along with familiarity with cloud platforms and data warehousing tools, will be advantageous. An understanding of machine learning and AI concepts, coupled with excellent problem-solving and communication skills, will be beneficial in fulfilling the requirements of this position. Preferred qualifications include certifications in Power BI, cloud platforms, or data engineering, as well as experience with big data tools and frameworks. Your ability to work both independently and collaboratively as part of a team will be essential in this full-time, permanent role with a hybrid work mode. If you meet these qualifications and are excited about contributing to data engineering projects, we encourage you to share your resume with devanshi.kesarwani@nexionpro.com. Thank you for considering this opportunity. Regards, Devanshi Kesarwani,

Posted 2 weeks ago

Apply

10.0 - 14.0 years

0 Lacs

Hyderabad, Telangana

On-site

As a Solution Architect with 10 to 14 years of experience, you will collaborate with sales, presales, and COE teams to provide technical expertise and support throughout the new business acquisition process. Your role is crucial in understanding customer requirements, presenting solutions, and demonstrating product value. You excel in high-pressure environments, maintaining a positive outlook and recognizing that career growth requires strategic choices. Possessing strong communication skills, both written and verbal, allows you to convey complex technical concepts clearly. Being a team player, customer-focused, self-motivated, and responsible individual, you can work under pressure with a positive attitude. Experience in managing RFPs/ RFIs, client demos, presentations, and converting opportunities into winning bids is essential. Your work ethic, positive attitude, and enthusiasm for new challenges, along with multitasking and prioritizing abilities, are key. You can work independently with minimal supervision, demonstrating a process-oriented and quality-first approach. Your performance as a Solution Architect will be measured by your ability to convert clients" business challenges into winning proposals through excellent technical solutions. In this role, you will: - Develop high-level architecture designs for scalable, secure, and robust solutions. - Select appropriate technologies, frameworks, and platforms for business needs. - Design cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP. - Ensure seamless integration between enterprise applications, APIs, and third-party services. - Design and develop scalable, secure, and performant data architectures on cloud platforms. - Translate business needs into technical solutions by designing secure, scalable, and performant data architectures. - Recommend and implement data models, data services, and data governance practices. - Design and implement data pipelines for efficient data extraction, transformation, and loading processes. Requirements: - 10+ years of experience in data analytics and AI technologies. - Certifications in data engineering, analytics, cloud, or AI are advantageous. - Bachelor's in engineering/technology or an MCA from a reputed college is required. - Prior experience as a solution architect during the presales cycle is beneficial. Location: Hyderabad, Ahmedabad, Indore Experience: 10 to 14 years Joining Time: Maximum 30 days Work Schedule: All Days, Work from Office,

Posted 2 weeks ago

Apply

3.0 - 7.0 years

0 Lacs

Pune, Maharashtra

On-site

The role of a Data Integration Developer at R Systems in Pune, Maharashtra, India requires a talented individual with 3 to 5 years of experience in ETL development and data integration. As a Data Integration Developer, you will be instrumental in designing and developing ETL processes using Azure Data Factory to facilitate seamless data integration and transformation across platforms. Your responsibilities will include collaborating with data engineers and analysts to create data pipelines and workflows, implementing data visualization solutions using tools like Power BI and Tableau, and utilizing Python and PySpark for data processing tasks. You will also be tasked with troubleshooting data integration issues, documenting ETL processes, and staying abreast of the latest trends in data integration and visualization technologies. Your primary responsibilities as a Data Integration Developer will revolve around ETL development, data management, utilizing Azure cloud services, data visualization, programming, collaboration and communication, documentation, and quality assurance. You will be expected to design, develop, and optimize scalable ETL processes, work with T-SQL for data manipulation, leverage Azure cloud services for data integration solutions, create visually compelling dashboards using tools like Power BI and Tableau, write Python and PySpark scripts, collaborate with cross-functional teams, document ETL processes and best practices, and ensure data accuracy and reliability through testing and validation. To qualify for this role, you should hold a Bachelor's degree in Computer Science, Information Technology, or a related field, along with 3 to 5 years of relevant experience in data integration and ETL development. Proficiency in ETL tools, data visualization platforms, strong analytical and problem-solving skills, and attention to detail are essential qualifications for this position. If you are passionate about data integration and analytics, and seeking to work with cutting-edge technologies in a collaborative and innovative environment, we invite you to apply for the Data Integration Developer position at R Systems. This role offers competitive salary and benefits, opportunities for professional development, and a chance to contribute to the future of data-driven decision-making. Interested candidates can apply by sending their resumes to [insert email address] with the subject line "Data Integration Developer Application." We look forward to receiving your application and potentially welcoming you to our dynamic team at R Systems.,

Posted 3 weeks ago

Apply

3.0 - 7.0 years

0 - 3 Lacs

Jaipur

Work from Office

Job Role: Data Engineer
Job Location: Jaipur
Job Type: Permanent
Experience Required: 2-4 years

As a Data Engineer, you will play a critical role in designing, developing, and maintaining our data pipelines and infrastructure. You will work closely with our data scientists, analysts, and other stakeholders to ensure data is accurate, timely, and accessible. Your contributions will directly impact our data-driven decision-making and support our growth.

Key Responsibilities:
- Data Pipeline Development: Design, develop, and implement data pipelines using Azure Data Factory and Databricks to support the ingestion, transformation, and movement of data.
- ETL Processes: Develop and optimize ETL (Extract, Transform, Load) processes to ensure efficient data flow and transformation.
- Data Lake Management: Develop and maintain Azure Data Lake solutions, ensuring efficient storage and retrieval of large datasets.
- Data Warehousing: Work with Azure Synapse Analytics to build and manage scalable data warehousing solutions that enable advanced analytics and reporting.
- Data Integration: Integrate various data sources into MS Fabric, ensuring data consistency, quality, and accessibility across different platforms.
- Performance Optimization: Optimize data processing workflows and storage solutions to improve performance and reduce costs.
- Database Management: Manage and optimize databases (SQL and NoSQL) to support high-performance queries and data storage requirements.
- Data Quality: Implement data quality checks and monitoring to ensure accuracy and consistency of data.
- Collaboration: Work closely with data scientists, analysts, and other stakeholders to understand data requirements and deliver actionable insights.
- Documentation: Create and maintain comprehensive documentation for data processes, pipelines, infrastructure, architecture, and best practices.
- Troubleshooting and Support: Identify and resolve issues in data pipelines, data lakes, and warehousing solutions, providing timely support and maintenance.

Qualifications:
- Experience: 2-4 years of experience in data engineering or a related field.
- Technical Skills:
  - Proficiency with Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake
  - Experience with Microsoft Fabric is a plus
  - Strong SQL skills and experience with data warehousing (DWH) concepts
  - Knowledge of data modeling, ETL processes, and data integration
  - Experience with relational databases (e.g., MS-SQL, PostgreSQL, MySQL)
  - Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, Talend)
  - Knowledge of big data technologies (e.g., Hadoop, Spark) is a plus
  - Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and associated data services (e.g., S3, Redshift, BigQuery)
  - Familiarity with data visualization tools (e.g., Power BI) and experience with programming languages such as Python, Java, or Scala
  - Experience with schema design and dimensional data modeling
- Analytical Skills: Strong problem-solving abilities and attention to detail.
- Communication: Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders.
- Education: Bachelor's degree in Computer Science, Engineering, Mathematics, or a related field. Advanced degrees or certifications are a plus.
Thanks & Regards Sulabh Tailang HR-Talent Acquisition Manager |Celebal Technologies |91-9448844746 Sulabh.tailang@celebaltech.com|LinkedIn-sulabhtailang |Twitter-Ersulabh Website-www.celebaltech.com
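Among the ETL frameworks listed above is Apache Airflow; a minimal DAG wiring an extract-transform-load sequence might look like the sketch below, with placeholder task bodies standing in for real ADF or Databricks calls.

```python
# Illustrative only: a minimal Apache Airflow DAG orchestrating extract -> transform
# -> load. The task bodies are placeholders; real jobs would trigger ADF pipelines,
# Databricks notebooks, etc.
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def extract(**context):
    # Pull data from a source system and stage it (placeholder).
    print("extracting source data")


def transform(**context):
    # Apply cleansing and business rules (placeholder).
    print("transforming staged data")


def load(**context):
    # Publish curated data to the warehouse/lakehouse (placeholder).
    print("loading curated data")


with DAG(
    dag_id="daily_sales_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
    tags=["example"],
) as dag:
    t_extract = PythonOperator(task_id="extract", python_callable=extract)
    t_transform = PythonOperator(task_id="transform", python_callable=transform)
    t_load = PythonOperator(task_id="load", python_callable=load)

    t_extract >> t_transform >> t_load
```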

Posted 3 weeks ago

Apply

5.0 - 7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site

Ventra is a leading business solutions provider for facility-based physicians practicing anesthesia, emergency medicine, hospital medicine, pathology, and radiology. Focused on Revenue Cycle Management, Ventra partners with private practices, hospitals, health systems, and ambulatory surgery centers to deliver transparent and data-driven solutions that solve the most complex revenue and reimbursement issues, enabling clinicians to focus on providing outstanding care to their patients and communities.

Come Join Our Team! As part of our robust Rewards & Recognition program, this role is eligible for our Ventra performance-based incentive plan, because we believe great work deserves great rewards. Help Us Grow Our Dream Team - Join Us, Refer a Friend, and Earn a Referral Bonus!

Job Summary
The Senior Quality Assurance Engineer will bring comprehensive quality testing expertise to a growing and innovative organization, designing and documenting testing scenarios, creating test plans, and reviewing quality specifications and technical design for both existing and new analytics products. The Sr. QA Engineer will be an integral part of our growing analytics product team, working with new technology in both manual and automation testing environments. The Sr. Quality Assurance Engineer will design testing procedures to ensure our analytics meet established quality standards using best practices and industry-standard practices, and will develop and write testing scripts to ensure our analytics perform as expected while monitoring and documenting testing results according to best-practice procedures.

Essential Functions and Tasks
Perform test execution (both manual and automated) for healthcare analytics, including extraction and load processes, data transformations, data models, and dashboarding. Create detailed, comprehensive, and well-structured test plans and test cases. Collaborate closely with Data & Analytics team members to ensure that production system defects are documented, an appropriate testing plan is established, and defects are resolved in a timely manner. Drive data quality programs and assist in the implementation of company automated test frameworks and solutions within an agile team structure. Perform special projects and other duties as assigned.

Education and Experience Requirements
Bachelor's degree in Computer Science, Information Technology, Data Science, Math, Finance, or a related field, or equivalent training and/or experience. Minimum of five (5) years of experience as a quality assurance engineer or data analyst with a strong data quality orientation. Experience with testing in cloud-native systems (MS Fabric preferred).

Preferred Qualifications
QA-related certifications preferred. Strong understanding of US healthcare revenue cycle and billing.

Knowledge, Skills, and Abilities
Proficiency with a variety of test case management tools in Azure DevOps and Agile development tools and processes (Azure DevOps and Confluence). Proven QA experience designing quality assurance testing for ELT processes, dashboard tools (ideally Power BI), and large-scale data warehouse projects. Knowledge of data quality frameworks to monitor and enforce data quality standards. Experience with automated testing tools. Proven experience building test plans based on business requirements and technical specifications. The ability to test the performance and scalability of data systems, especially when handling large volumes of data; this includes checking for speed, reliability, and system bottlenecks in data processing and analytics. Expert SQL in relational databases (SQL Server, MS Fabric) with the ability to independently explore, query, and validate data, and to read and understand existing queries as well as create new ones. Strong analytical skills. Strong process improvement and organizational skills. Strong time management skills. Working knowledge of project management, specifically Azure DevOps. Ability to identify opportunities that drive execution of action plans to close gaps and move key priorities forward. Ability to influence and gain support from stakeholders through effective communication and relationship building. Ability to communicate technical information to technical and non-technical personnel at various levels in and across the organization. Ability to exercise sound judgment and handle highly sensitive and confidential information appropriately. Ability to remain results-oriented and work within a collaborative and dynamic high-paced environment.

Compensation
Base compensation will be based on various factors unique to each candidate, including geographic location, skill set, experience, qualifications, and other job-related reasons. This position is also eligible for a discretionary incentive bonus in accordance with company policies.

Ventra Health Equal Employment Opportunity (applicable only in the US)
Ventra Health is an equal opportunity employer committed to fostering a culturally diverse organization. We strive for inclusiveness and a workplace where mutual respect is paramount. We encourage applications from a diverse pool of candidates, and all qualified applicants will receive consideration for employment without regard to race, color, ethnicity, religion, sex, age, national origin, disability, sexual orientation, gender identity and expression, or veteran status. We will provide reasonable accommodations to qualified individuals with disabilities, as needed, to assist them in performing essential job functions.

Recruitment Agencies
Ventra Health does not accept unsolicited agency resumes and is not responsible for any fees related to unsolicited resumes.

Solicitation of Payment
Ventra Health does not solicit payment from our applicants and candidates for consideration or placement.

Attention Candidates
Please be aware that there have been reports of individuals falsely claiming to represent Ventra Health or one of our affiliated entities, Ventra Health Private Limited and Ventra Health Global Services. These scammers may attempt to conduct fake interviews, solicit personal information, and, in some cases, have sent fraudulent offer letters. To protect yourself, verify any communication you receive by contacting us directly through our official channels. If you have any doubts, please contact us at [HIDDEN TEXT] to confirm the legitimacy of the offer and the person who contacted you. All legitimate roles are posted on https://ventrahealth.com/careers/.
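The automated test plans and SQL validation this role centres on can be sketched with pytest. In the example below an in-memory SQLite database stands in for the warehouse (in practice the fixture would connect to the Fabric or SQL Server endpoint), and the table and columns are assumptions.

```python
# test_claims_reporting.py - an illustrative pytest sketch of automated data
# validation for an analytics table. SQLite stands in for the warehouse; names are assumed.
import sqlite3
import pytest


@pytest.fixture()
def conn():
    db = sqlite3.connect(":memory:")
    db.executescript(
        """
        CREATE TABLE rpt_claims (claim_id TEXT, service_month TEXT, paid_amount REAL);
        INSERT INTO rpt_claims VALUES
            ('C1', '2024-01', 120.50),
            ('C2', '2024-01', 310.00),
            ('C3', '2024-02', 95.25);
        """
    )
    yield db
    db.close()


def test_table_is_not_empty(conn):
    (rows,) = conn.execute("SELECT COUNT(*) FROM rpt_claims").fetchone()
    assert rows > 0


def test_claim_id_is_unique(conn):
    (dupes,) = conn.execute(
        "SELECT COUNT(*) - COUNT(DISTINCT claim_id) FROM rpt_claims"
    ).fetchone()
    assert dupes == 0


def test_paid_amount_is_non_negative(conn):
    (bad,) = conn.execute(
        "SELECT COUNT(*) FROM rpt_claims WHERE paid_amount < 0"
    ).fetchone()
    assert bad == 0
```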

Posted 1 month ago

Apply

5.0 - 10.0 years

0 - 0 Lacs

Pune, Maharashtra

On-site

You will be responsible for architecting data warehousing and business intelligence solutions to address cross-functional business challenges. This will involve interacting with business stakeholders to gather requirements and deliver comprehensive data engineering, data warehousing, and analytics solutions. Additionally, you will collaborate with other technology teams to extract, transform, and load data from diverse sources.

You should have a minimum of 5-8 years of end-to-end data engineering development experience, preferably across industries such as Retail, FMCG, Manufacturing, Finance, and Oil & Gas. Experience in functional domains like Sales, Procurement, Cost Control, Business Development, and Finance is desirable. You are expected to have 3 to 10 years of experience in data engineering projects using Azure or AWS services, with hands-on expertise in data transformation, processing, and migration using tools such as Azure Data Lake Storage, Azure Data Factory, Databricks, AWS Glue, Redshift, and Athena. Familiarity with MS Fabric and its components will be advantageous, along with experience in working with different source/target systems such as Oracle Database, SQL Server Database, Azure Data Lake Storage, ERP, CRM, and SCM systems. Proficiency in reading data from sources via APIs/web services and utilizing APIs to write data to target systems is essential. You should also have experience in data cleanup, data cleansing, and optimization tasks, including working with non-structured data sets in Azure. Knowledge of analytics tools like Power BI and Azure Analysis Services, as well as exposure to private and public cloud architectures, will be beneficial. Excellent written and verbal communication skills are crucial for this role.

Ideally, you hold a degree in M.Tech / B.E. / B.Tech (Computer Science, Information Systems, IT) / MCA / MCS. Key requirements include expertise in MS Azure Data Factory, Python, PySpark coding, Synapse Analytics, Azure Function Apps, Azure Databricks, AWS Glue, Athena, Redshift, and Databricks PySpark. Exposure to integration with various applications/systems such as ERP, CRM, SCM, and web apps using APIs, cloud, on-premise systems, databases, and file systems is expected. The role necessitates a minimum of 3 full-cycle data engineering implementations (5-10 years of experience) with a focus on building data warehouses and implementing data models. Exposure to the consulting industry is mandatory, along with strong verbal and written communication skills.

Your primary skills should encompass data engineering development, cloud engineering with Azure or AWS, data warehousing and BI solutions architecture, programming (Python, PySpark), data integration across various systems, consulting experience, ETL and data transformation, and knowledge of cloud architecture. Additionally, familiarity with MS Fabric, handling non-structured data, data cleanup and optimization, APIs/web services, data visualization, and industry and functional knowledge will be advantageous. The compensation package ranges from INR 12-28 LPA, subject to the candidate's performance and experience level.

Posted 1 month ago

Apply

6.0 - 9.0 years

18 - 25 Lacs

Bengaluru

Hybrid

About the Role
We are seeking a BI Architect to advise the BI Lead of a global CPG organization and architect an intelligent, scalable Business Intelligence ecosystem. This includes an enterprise-wide KPI dashboard suite augmented by a GenAI-driven natural language interface for insight discovery. The ideal candidate will be responsible for end-to-end architecture: from scalable data models and dashboards to a conversational interface powered by Retrieval-Augmented Generation (RAG) and/or Knowledge Graphs. The solution must synthesize internal BI data with external (web-scraped and competitor) data to deliver intelligent, context-rich insights.

Key Responsibilities
• Architect BI Stack: Design and oversee a scalable and performant BI platform that serves as a single source of truth for key business metrics across functions (Sales, Marketing, Supply Chain, Finance, etc.).
• Advise BI Lead: Act as a technical thought partner to the BI Lead, aligning architecture decisions with long-term strategy and business priorities.
• Design GenAI Layer: Architect a GenAI-powered natural language interface on top of BI dashboards to allow business users to query KPIs, trends, and anomalies conversationally.
• RAG/Graph Approach: Select and implement appropriate architectures (e.g., RAG using vector stores, Knowledge Graphs) to support intelligent, context-aware insights.
• External Data Integration: Build mechanisms to ingest and structure data from public sources (e.g., competitor websites, industry reports, social sentiment) to augment internal insights.
• Security & Governance: Ensure all layers (BI + GenAI) adhere to enterprise data governance, security, and compliance standards.
• Cross-functional Collaboration: Work closely with Data Engineering, Analytics, and Product teams to ensure seamless integration and operationalization.

Qualifications
• 6-9 years of experience in BI architecture and analytics platforms, with at least 2 years working on GenAI, RAG, or LLM-based solutions.
• Strong expertise in BI tools (e.g., Power BI, Tableau, Looker) and data modeling.
• Experience with GenAI frameworks (e.g., LangChain, LlamaIndex, Semantic Kernel) and vector databases (e.g., Pinecone, FAISS, Weaviate).
• Knowledge of graph-based data models and tools (e.g., Neo4j, RDF, SPARQL) is a plus.
• Proficiency in Python or a relevant scripting language for pipeline orchestration and AI integration.
• Familiarity with web scraping and structuring external/third-party datasets.
• Prior experience in the CPG domain or large-scale KPI dashboarding preferred.
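To make the RAG responsibility above concrete, here is a minimal, hedged retrieval sketch using FAISS as the vector store. The embed() function is a stand-in for whatever embedding model the organization actually chooses, and the KPI snippets are invented for illustration; this is a sketch of the pattern, not the solution itself.

```python
# Minimal RAG retrieval sketch over KPI documentation using FAISS.
import numpy as np
import faiss

KPI_SNIPPETS = [
    "Net revenue is reported weekly by region in the Sales dashboard.",
    "Out-of-stock rate is sourced from the Supply Chain lakehouse, refreshed daily.",
    "Competitor price index is scraped from public retailer sites each night.",
]

def embed(texts: list[str]) -> np.ndarray:
    """Placeholder embedding; a real system would call an embedding model here."""
    rng = np.random.default_rng(abs(hash(tuple(texts))) % (2**32))
    return rng.random((len(texts), 384), dtype=np.float32)

index = faiss.IndexFlatL2(384)
index.add(embed(KPI_SNIPPETS))

def retrieve_context(question: str, k: int = 2) -> list[str]:
    """Return the k KPI snippets most relevant to a business user's question."""
    _, idx = index.search(embed([question]), k)
    return [KPI_SNIPPETS[i] for i in idx[0]]

# The retrieved snippets would then be injected into the LLM prompt that answers
# the user's natural-language question about KPIs, trends, or anomalies.
print(retrieve_context("Why did out-of-stock rate spike last week?"))
```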

Posted 1 month ago

Apply

2.0 - 6.0 years

0 - 0 Lacs

karnataka

On-site

As a Power BI Developer, you will be an integral part of our dynamic team, contributing your expertise to design, develop, and implement advanced Power BI solutions that facilitate data-driven decision-making. Your role will involve close collaboration with business stakeholders to understand their requirements and translate them into visually appealing, high-performance Power BI reports.

Your responsibilities will span several key areas, including data modeling and analysis. You will create robust data models, use advanced DAX for complex calculations, and transform and clean data effectively using Power Query. Additionally, you will develop interactive Power BI reports with diverse visualizations, optimize report performance, and enforce data access control through Row-Level Security (RLS).

Furthermore, you will oversee Power BI Service administration, managing capacities, licenses, and deployment strategies while integrating Power BI with other Microsoft tools for enhanced automation and data processing. Your expertise in cloud platforms like MS Fabric, Data Factory, and Data Lake will be crucial in optimizing data pipelines and scalability.

In addition to your technical responsibilities, you will collaborate with stakeholders to deliver actionable insights, mentor junior team members on best practices, and provide technical leadership by ensuring adherence to standards and deploying reports to production environments.

To qualify for this role, you should have 2 to 6 years of hands-on experience with Power BI and related technologies, demonstrating proficiency in data modeling, DAX, Power Query, visualization techniques, and SQL. Experience in ETL processes, cloud platforms, and strong problem-solving abilities are essential. Excellent communication skills and the ability to work both independently and collaboratively are also required. Preferred qualifications include experience with R or Python for custom visual development and certification in Power BI or related technologies.

Please note that this position requires working at our (South) Bangalore office for at least 4 out of 5 days, with no remote work option available. Local candidates are preferred, and relocation assistance will not be provided. This is a full-time position based in Bangalore (South), offering a competitive salary range of 500,000-1,200,000 INR per year. If you meet the qualifications and are eager to contribute to our team, we encourage you to apply before the deadline of April 15, 2025.
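Purely as an illustration of the kind of cleanup and row-level filtering described above (work that would normally live in Power Query and Power BI RLS rules rather than in Python), here is a small pandas sketch; the file, column names, and region rule are invented.

```python
# Rough pandas analogue of typical Power Query cleanup plus an RLS-style filter.
import pandas as pd

def clean_sales_extract(path: str) -> pd.DataFrame:
    df = pd.read_csv(path)
    df.columns = [c.strip().lower().replace(" ", "_") for c in df.columns]
    df["order_date"] = pd.to_datetime(df["order_date"], errors="coerce")
    df = df.dropna(subset=["order_date", "customer_id"])           # drop unusable rows
    df["net_amount"] = df["gross_amount"] - df["discount"].fillna(0)
    return df

def apply_row_level_filter(df: pd.DataFrame, user_region: str) -> pd.DataFrame:
    """Mimics the effect of an RLS rule that restricts rows to the signed-in user's region."""
    return df[df["region"] == user_region]
```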

Posted 1 month ago

Apply

2.0 - 6.0 years

0 Lacs

haryana

On-site

As a Data Engineering Specialist, you will be responsible for assessing, capturing, and translating complex business issues into structured technical tasks for the data engineering team. This includes designing, building, launching, optimizing, and extending full-stack data and business intelligence solutions. Your role will involve supporting the build of big data environments, focusing on improving data pipelines and data quality, and working with stakeholders to meet business needs.

You will create data access tools for the analytics and data science team, conduct code reviews, assist other developers, and train team members as required. Additionally, you will ensure that developed systems comply with industry standards and best practices while meeting project requirements.

To excel in this role, you should possess a Bachelor's degree in computer science engineering or equivalent, or relevant experience. Certification in cloud technologies, especially Azure, would be beneficial. You should have 2-3+ years of development experience in building and maintaining ETL/ELT pipelines across various sources and operational programming tasks. Experience with Apache data projects or cloud platform equivalents and proficiency in programming languages such as Python, Scala, R, Java, Golang, Kotlin, C, or C++ is required.

Your work will involve collaborating closely with data scientists, machine learning engineers, and stakeholders to understand requirements and develop data-driven solutions. Troubleshooting, debugging, and resolving issues within generative AI system development, as well as documenting processes, specifications, and training procedures, will be part of your responsibilities.

In summary, this role requires a strong background in data engineering, proficiency in cloud technologies, experience with data projects and programming languages, and the ability to collaborate effectively with various stakeholders to deliver high-quality data solutions.
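As an illustration of the small ETL/ELT jobs this role builds and maintains, the sketch below extracts a daily CSV drop, applies light standardization, and appends it to a warehouse table that analysts can query. The file path, connection string, schema, and table names are assumptions; any SQLAlchemy-supported warehouse could stand in for the one shown.

```python
# Hedged sketch of a small daily ETL job: extract -> standardize -> load.
import pandas as pd
from sqlalchemy import create_engine

ENGINE = create_engine("postgresql+psycopg2://etl_user:<secret>@warehouse:5432/analytics")

def run_daily_load(csv_path: str) -> int:
    # Extract the raw drop
    df = pd.read_csv(csv_path)
    # Transform: standardize keys and types so downstream models stay consistent
    df["event_ts"] = pd.to_datetime(df["event_ts"], utc=True, errors="coerce")
    df = df.dropna(subset=["event_ts", "account_id"]).drop_duplicates()
    # Load: append into the curated table used by the analytics team
    df.to_sql("events_curated", ENGINE, schema="staging", if_exists="append", index=False)
    return len(df)
```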

Posted 1 month ago

Apply

7.0 - 12.0 years

12 - 20 Lacs

Bengaluru

Hybrid

Microsoft Fabric & Azure Expertise
Design and implement comprehensive data solutions using Microsoft Fabric, including Data Pipelines, Lakehouses, and Real-Time Analytics. Build and manage scalable OLAP architectures, including data layer design, dimensional modeling, and semantic models. Develop and optimize data pipelines within Fabric Workspaces for efficient ETL/ELT processing. Lead implementation of data governance practices, including IAM policies, RBAC, RLS, data masking, and encryption. Manage and optimize Fabric resources, including Fabric SKUs, OneLake, and Workspace configurations. Set up and manage Power BI apps integrated with Fabric for reporting and dashboard distribution. Develop and maintain Fabric Notebooks using PySpark, Python, and SQL for advanced data engineering and analytics use cases. Oversee Azure services integration such as App Registrations, IAM, and other platform services to ensure secure and efficient operation.

Python Development
Build robust, scalable Python scripts and applications with a strong focus on performance, reliability, and maintainability. Integrate external systems using RESTful APIs and automate workflows through Python-based orchestration. Utilize advanced Python libraries such as pandas, numpy, and others for data manipulation and transformation. Develop database integration scripts to connect with relational databases (e.g., SQL Server, MySQL, PostgreSQL, Oracle) and execute complex SQL and PL/SQL queries. Handle and transform complex JSON data structures for ingestion and processing.

Collaboration & Leadership
Work closely with cross-functional stakeholders, including data scientists, analysts, product managers, and business leaders, to understand requirements and deliver scalable solutions. Translate business needs into technical specifications and implement solutions that align with architectural best practices. Provide technical mentorship and guidance to junior team members and contribute to architectural decision-making.

Qualifications
Experience in data engineering, analytics, or platform engineering roles. Proven expertise in Microsoft Fabric and Azure Data Services. Strong command of Python programming and experience working with large-scale data systems. Experience designing and implementing OLAP systems, including semantic models and multidimensional cubes. Solid understanding of data governance, security, and access control in enterprise environments. Exceptional problem-solving, communication, and collaboration skills.

Regular Tasks Required:
Build and maintain MS Fabric data import, standardization, and governance procedures. Write effective Python code to tackle complex issues, and use your business sense and analytical abilities to glean valuable insights from public databases. Communicate clearly with the team and help the organization realize its objectives. Clearly express the reasoning and logic when writing code. Fix bugs in the code and create thorough documentation. Use your data analysis skills to develop and respond to important business queries using available datasets. Communicate effectively with the team to understand the needs and deliver the results.

Useful but non-mandatory skills:
Hands-on experience with Generative AI tools, including LangChain, RAG, and vector databases. Experience with web scraping, browser automation, and workflow automation. Proficiency in traditional ML libraries (scikit-learn, TensorFlow, PyTorch, XGBoost).
Experience and Educational requirements:
Bachelor's in Engineering or Master's in Computer Science (or equivalent experience). At least 2+ years of relevant experience as a data scientist. 2+ years of data analysis experience and a desire to have a significant impact on the field of artificial intelligence. 2+ years of experience working extensively with Python programming. Experience with MS Fabric, OneLake, and the Microsoft data analytics stack. Experience with PySpark and Spark SQL. Extensive experience working with Data Science/Analysis. Familiarity with SQL, REST APIs, Tableau, and related technologies is desirable. Excellent communication abilities to work successfully with stakeholders and researchers. Strong data analytic abilities and business sense are required to draw the appropriate conclusions from the dataset, respond to those conclusions, and clearly convey the key findings. Excellent problem-solving and analytical skills. Fluent conversational and written English communication skills.

Physical requirements:
Normal, corrective vision range; ability to see color and to distinguish letters, numbers, and symbols. Frequently required to sit, stand, walk, talk, hear, bend, and reach. Ability to reach with hands and arms. Occasionally lift and/or move up to 30 lbs.
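For illustration only, here is a Fabric-notebook-style PySpark cell in the spirit of the Fabric Notebooks and JSON-handling work described above: it flattens a nested JSON feed from the lakehouse Files area and persists a Delta table for the semantic model. All paths, schemas, and table names are placeholders, and inside a real Fabric notebook the spark session is already provided.

```python
# Illustrative sketch: flatten nested order JSON and persist a Delta fact table.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

raw = spark.read.json("Files/landing/orders/*.json")   # nested orders with a line_items array

flat = (
    raw.select("order_id", "customer_id", F.explode("line_items").alias("item"))
       .select(
           "order_id",
           "customer_id",
           F.col("item.sku").alias("sku"),
           F.col("item.qty").cast("int").alias("quantity"),
           F.col("item.price").cast("double").alias("unit_price"),
       )
       .withColumn("line_amount", F.col("quantity") * F.col("unit_price"))
)

# Lands as a managed Delta table that Power BI / the semantic model can pick up.
flat.write.format("delta").mode("overwrite").saveAsTable("sales.fact_order_line")
```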

Posted 1 month ago

Apply

7.0 - 11.0 years

0 Lacs

karnataka

On-site

You should have a minimum of 7 years of experience in data warehouse / lakehouse programming and should have successfully implemented at least 2 end-to-end data warehouse / data lake projects. Additionally, you should have experience in implementing at least 1 Azure data warehouse / lakehouse project end-to-end, converting business requirements into concept / technical specifications, and collaborating with source system experts to finalize ETL and analytics design. You will also be responsible for supporting data modelers and developers in the design and development of ETLs and creating activity plans with timelines based on agreed concepts.

Your technical expertise should include a strong background in Microsoft Azure components such as Azure Data Factory, Azure Synapse, Azure SQL Database, Azure Key Vault, MS Fabric, Azure DevOps (ADO), and Virtual Networks (VNets). You should also have expertise in Medallion Architecture for lakehouses and data modeling in the Gold layer, along with a solid understanding of data warehouse design principles such as star schema, snowflake schema, and data partitioning. Proficiency in MS SQL database packages, stored procedures, functions, triggers, and data transformation activities using SQL is required, as well as knowledge of SQL*Loader, Data Pump, and Import/Export utilities.

Experience with data visualization or BI tools like Tableau and Power BI, capacity planning, environment management, performance tuning, and familiarity with cloud cloning/copying processes within Azure will be essential for this role. Knowledge of green computing principles and optimizing cloud resources for cost and environmental efficiency is also desired.

You should possess excellent interpersonal and communication skills to collaborate effectively with technical and non-technical teams, communicate complex concepts, and influence key stakeholders. Analyzing demands and contributing to cost/benefit analysis and estimation are also part of the responsibilities.

Preferred qualifications include certifications such as Azure Solutions Architect Expert or Azure Data Engineer Associate. Skills required for this role include database management, Tableau, Power BI, ETL processes, Azure SQL Database, Medallion Architecture, Azure services, data visualization, data warehouse design, and Microsoft Azure technologies.
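As a hedged, compressed illustration of the Medallion flow (bronze to silver to gold) and the star-schema Gold layer this posting refers to, the PySpark sketch below uses invented table and column names; a real implementation would split the layers into separate notebooks or pipelines.

```python
# Compressed Medallion sketch: bronze -> silver -> gold with illustrative names.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()

# Bronze: raw ingested data, kept as-is
bronze = spark.read.table("bronze.sales_raw")

# Silver: cleaned, conformed, de-duplicated
silver = (
    bronze.dropDuplicates(["transaction_id"])
          .withColumn("sale_date", F.to_date("sale_ts"))
          .filter(F.col("amount").isNotNull())
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver.sales")

# Gold: star-schema fact aggregated at the grain the business reports on
gold = (
    silver.groupBy("sale_date", "store_id", "product_id")
          .agg(F.sum("amount").alias("total_sales"),
               F.count("transaction_id").alias("transaction_count"))
)
gold.write.format("delta").mode("overwrite").saveAsTable("gold.fact_daily_sales")
```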

Posted 1 month ago

Apply

10.0 - 14.0 years

0 Lacs

ahmedabad, gujarat

On-site

As a Solution Architect at Kanerika, you will collaborate with our sales, presales, and COE teams to provide technical expertise and support throughout the new business acquisition process. Your role will involve understanding customer requirements, presenting our solutions, and demonstrating the value of our products. In this high-pressure environment, maintaining a positive outlook and making strategic choices for career growth are essential. Your excellent communication skills, both written and verbal, will enable you to convey complex technical concepts clearly and effectively. Being a team player and a customer-focused, self-motivated, responsible individual who can work under pressure with a positive attitude is crucial for success in this role.

Experience in managing and handling RFPs/RFIs, client demos and presentations, and converting opportunities into winning bids is required. A strong work ethic, positive attitude, and enthusiasm to embrace new challenges are key qualities. You should be able to multitask, prioritize, and demonstrate good time management skills, as well as work independently with minimal supervision. A process-oriented and methodical approach with a quality-first mindset will be beneficial. The ability to convert a client's business challenges and priorities into winning proposals through excellence in technical solutions will be the key performance indicator for this role.

Your responsibilities will include developing high-level architecture designs for scalable, secure, and robust solutions, selecting appropriate technologies, frameworks, and platforms for business needs, and designing cloud-native, hybrid, or on-premises solutions using AWS, Azure, or GCP. You will also ensure seamless integration between various enterprise applications, APIs, and third-party services, as well as design and develop scalable, secure, and performant data architectures on Microsoft Azure and/or new-generation analytics platforms.

To excel in this role, you should have at least 10 years of experience working with data analytics and AI technologies from consulting, implementation, and design perspectives. Certifications in data engineering, analytics, cloud, and AI will be advantageous. A Bachelor's in engineering/technology or an MCA from a reputed college is a must, along with prior experience working as a solution architect during the presales cycle. Soft skills such as communication, presentation, flexibility, and diligence are essential. Additionally, knowledge of presales processes and a basic understanding of business analytics and AI will benefit you in this role at Kanerika.

Join us at Kanerika and become part of a vibrant and diverse community where your talents are recognized, your growth is nurtured, and your contributions make a real impact. See the benefits section below for the perks you'll get while working for Kanerika.

Posted 1 month ago

Apply

6.0 - 8.0 years

1 - 6 Lacs

Noida

Work from Office

Urgent Hiring: Microsoft Fabric Cloud Architect | 6-8 years | Noida | Immediate to 30 days notice period. Skills: Azure Cloud, MS Fabric, PySpark, DAX, Python, Azure Synapse, ADF, Databricks, ETL Pipelines.

Posted 2 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot


Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.
