1.0 - 5.0 years
0 Lacs
pune, maharashtra
On-site
As an Occupational Therapist on our multidisciplinary clinical team, you will play a crucial role in supporting children with developmental delays, sensory processing issues, and other challenges. Your primary focus will be on designing and delivering customized therapy plans to help children build independence in daily living, motor skills, and functional participation at home, school, and in the community.

- Assess children using standardized tools and clinical observation to identify therapy needs.
- Develop and implement individualized intervention plans focused on functional goals.
- Provide therapy for fine motor development, sensory integration, ADLs (Activities of Daily Living), and cognitive-perceptual skills.
- Collaborate closely with parents, caregivers, and the interdisciplinary team (speech therapists, special educators, psychologists, etc.).
- Guide and train parents for home-based interventions and monitor progress.
- Maintain accurate documentation and reports in compliance with clinical protocols.
- Participate in team meetings, training, and case discussions.
- Contribute to awareness and outreach initiatives, as required.

You should hold a Bachelor's or Master's degree in Occupational Therapy from a recognized institution and be registered with the appropriate regulatory body (e.g., AIOTA/RCI). Ideally, you will have 1-3 years of experience in the field, although freshers with strong clinical internships are welcome. Specialization in pediatrics or prior work experience with children will be considered an added advantage. If you are passionate about making a difference in the lives of children facing developmental challenges, we encourage you to share your resume at cjkaur@momsbelief.com.
Posted 14 hours ago
5.0 - 10.0 years
10 - 20 Lacs
pune
Work from Office
We are looking for a skilled and experienced Data Engineer with hands-on expertise in Azure Data Services to join our growing team. The ideal candidate will be responsible for designing, developing, and maintaining scalable data pipelines and enterprise-grade data solutions using modern Azure tools and technologies.

Job Title: Azure Data Engineer
Location: Pune
Experience: 8+ years

Key Responsibilities:
- Design, develop, and maintain scalable data pipelines using Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake Storage (ADLS).
- Implement data ingestion, transformation, and integration processes from various sources (on-premises/cloud).
- Create and manage Azure resources for data solutions, including storage accounts, databases, and compute services.
- Develop and optimize SQL scripts, stored procedures, and views for data transformation and reporting.
- Ensure data quality, governance, and security standards are met using tools like Azure Purview, Azure Key Vault, and Role-Based Access Control (RBAC).
- Collaborate with Data Scientists, BI Developers, and other stakeholders to deliver enterprise-grade data solutions.
- Monitor and troubleshoot data pipeline failures and performance issues.
- Document technical solutions and maintain best practices.

Technical Skills Required:
- Azure Data Factory (ADF): expertise in building pipelines, triggers, linked services, etc.
- Azure Synapse Analytics / SQL Data Warehouse
- Azure Databricks / Spark / PySpark
- Azure Data Lake (Gen2)
- Azure SQL / Cosmos DB / SQL Server
- Strong knowledge of SQL, T-SQL, and performance tuning
- Good understanding of ETL/ELT frameworks and data modeling concepts (Star/Snowflake schema)
- Experience with CI/CD pipelines using Azure DevOps
- Familiarity with tools like Git, ARM templates, Terraform (optional)
- Knowledge of Power BI integration is a plus

Soft Skills & Additional Qualifications:
- Strong analytical and problem-solving skills
- Excellent communication and collaboration abilities
- Ability to work independently and lead junior team members
- Azure certifications (e.g., DP-203) are preferred

Apply now to be part of a dynamic and forward-thinking data team: kiran.ghorpade@neutrinotechlabs.com
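As a rough, hedged illustration of the pipeline work described above, here is a minimal PySpark batch job that ingests CSV files from ADLS Gen2 and writes a partitioned Delta table. The storage account, container, and column names are placeholders invented for the example, not details of this employer's environment.

```python
# Minimal batch ingestion sketch: ADLS Gen2 CSV -> curated Delta table.
# Assumes a Databricks cluster (or delta-spark) already authenticated to the
# storage account; all paths and columns below are illustrative placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-ingest").getOrCreate()

raw = (spark.read
       .option("header", "true")
       .option("inferSchema", "true")
       .csv("abfss://landing@examplestore.dfs.core.windows.net/orders/"))

cleaned = (raw
           .dropDuplicates(["order_id"])                     # drop re-delivered duplicates
           .withColumn("order_date", F.to_date("order_date"))
           .filter(F.col("amount") > 0))                     # basic quality gate

(cleaned.write
 .format("delta")
 .mode("append")
 .partitionBy("order_date")
 .save("abfss://curated@examplestore.dfs.core.windows.net/orders_delta"))
```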
Posted 3 days ago
6.0 - 9.0 years
15 - 25 Lacs
hyderabad, chennai
Work from Office
We are seeking a highly skilled and experienced Azure Data Engineer to join our dynamic team in Chennai. The ideal candidate will have strong expertise in building, maintaining, and optimizing scalable data pipelines and architectures on Azure. You will collaborate with cross-functional stakeholders to deliver enterprise-grade data solutions that enable advanced analytics and business intelligence.

Key Responsibilities:
- Design, develop, and optimize scalable and reliable data pipelines using Azure Data Factory (ADF), Databricks, PySpark, and SQL.
- Implement data ingestion, transformation, cleansing, and integration solutions from multiple structured and unstructured data sources.
- Design and implement data models to support reporting and analytics, ensuring high performance, scalability, and data accuracy.
- Collaborate with data architects, analysts, and business stakeholders to translate requirements into end-to-end data solutions.
- Manage, monitor, and enhance data workflows for performance and cost efficiency within Azure cloud environments.
- Advocate and enforce best practices for coding, performance optimization, version control, and data governance.
- Troubleshoot and resolve issues in production data pipelines to maintain smooth business operations.

Mandatory Skills:
- Python, PySpark, SQL: advanced proficiency in scripting, querying, and data transformation.
- Hands-on expertise with Azure services including Azure Data Factory (ADF), Databricks, Azure Storage, Synapse, and other cloud-native data services.
- Strong understanding and practical experience in data modelling (dimensional, relational, and cloud-native models).
- Proven experience in developing ETL pipelines and large-scale data integration solutions.
- Excellent problem-solving skills with the ability to work independently and in collaborative teams.

Preferred Skills (Good to Have):
- Knowledge of CI/CD pipelines and DevOps for data (Azure DevOps, GitHub Actions).
- Familiarity with Delta Lake, Lakehouse architecture, and modern data platforms.
- Exposure to performance tuning of PySpark and SQL queries.
- Strong communication and stakeholder engagement skills.

Candidate Requirements:
- 6-9 years of professional experience in Data Engineering, with at least 3+ years working on Azure cloud data services.
- Must be available to join immediately.
- Based in or willing to relocate to Chennai.

Kindly acknowledge the mail with the below details and acceptance of the above job description:
Name:
Contact Number:
Primary Email:
Date of Birth (DOB):
PAN Number (Mandatory):
Education:
Current Organization - Payroll:
Total IT Experience & Relevant Exp:
Notice Period:
Current CTC (Fixed + Variable):
Expected CTC (Fixed):
Counter Offer Details With DOJ:
Current Work Location:
Preferred Work Location:

For Queries: Sanjeevan Natarajan, sanjeevan.natarajan@careernet.in
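Since this role leans heavily on cleansing and integrating multi-source data with PySpark, the sketch below shows one common pattern: standardize fields, then keep only the newest record per business key with a window function. Paths and column names are assumptions for illustration only.

```python
# Hypothetical cleansing step: normalize fields and deduplicate on a key,
# keeping the most recent row. Paths and columns are invented placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.window import Window

spark = SparkSession.builder.appName("customer-cleanse").getOrCreate()
src = spark.read.parquet("abfss://raw@examplestore.dfs.core.windows.net/customers/")

w = Window.partitionBy("customer_id").orderBy(F.col("updated_at").desc())
deduped = (src
           .withColumn("email", F.lower(F.trim("email")))   # standardize before matching
           .withColumn("rn", F.row_number().over(w))
           .filter(F.col("rn") == 1)                        # newest record per customer
           .drop("rn"))

deduped.write.mode("overwrite").parquet(
    "abfss://clean@examplestore.dfs.core.windows.net/customers/")
```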
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
delhi
On-site
You will be reporting to the Manager, and your responsibilities will include designing, coding, and testing new data management solutions, along with supporting applications and interfaces. You will be required to architect data structures to provision and allow "Data as a Service". Supporting development activities in multiple DA&I and Connected Enterprise related projects for both internal and external customers will be part of your role. Additionally, you will need to develop and test infrastructure components in both Cloud and Edge-level environments. Monitoring industry trends, identifying opportunities to implement new technologies, managing the DevOps pipeline deployment model, implementing software in all environments, and collaborating with business systems analysts and product owners to define requirements are key aspects of this position.

To be considered for this role, you should have a Bachelor's Degree in computer science, software engineering, management information systems, or a related field. You should have at least 1+ years of experience in the systems development lifecycle, experience in data management concepts and implementations, and 3+ years of experience with Agile development methodologies and system/process documentation. Additionally, you should possess 5+ years of experience with SAP Data Services, Azure ADF, ADLS, SQL, Tabular models, or other domain-specific programming languages, familiarity with business concepts, and an understanding of the impact of data on business processes.

Your temperament should reflect your ability to assist colleagues in working through change and support change management processes. A team-oriented approach, collaboration with both business and IT organizations, and the courage to share viewpoints openly and directly, while providing relevant information and feedback, are essential characteristics for this role. In this position, you will need the ability to work on issues of moderate scope, exercise judgment within defined practices, seek out relevant perspectives, propose solutions, manage competing demands, and distill information from different data sources to make recommendations for next steps. An unwavering commitment to the standards of behavior set in the Code of Conduct, enthusiasm for partnership across the organization, and willingness to work in a team-oriented culture are critical role requirements.

Rockwell Automation offers a hybrid work environment with the ability to collaborate and learn from colleagues in a global organization. The company provides a creative working environment, a competitive compensation package, great benefits, and a supportive atmosphere where you can grow with new challenges and development opportunities. Corporate Social Responsibility opportunities and support from the 24/7 employee assistance program are additional benefits of working at Rockwell Automation. The primary work location for this role is Pune, India. Rockwell Automation is committed to building a diverse, inclusive, and authentic workplace, so even if your experience doesn't entirely align with every qualification in the job description, you are encouraged to apply, as you may still be the right fit for this position or other roles within the organization.
Posted 1 week ago
5.0 - 7.0 years
0 Lacs
bengaluru, karnataka, india
On-site
At Elanco (NYSE: ELAN) - it all starts with animals! As a global leader in animal health, we are dedicated to innovation and delivering products and services to prevent and treat disease in farm animals and pets. We're driven by our vision of Food and Companionship Enriching Life and our approach to sustainability - the Elanco Healthy Purpose - to advance the health of animals, people, the planet and our enterprise.

At Elanco, we pride ourselves on fostering a diverse and inclusive work environment. We believe that diversity is the driving force behind innovation, creativity, and overall business success. Here, you'll be part of a company that values and champions new ways of thinking, work with dynamic individuals, and acquire new skills and experiences that will propel your career to new heights. Making animals' lives better makes life better - join our team today!

Your Role: Sr. MLOps Engineer
The MLOps Engineer's role is service focused and will create data pipelines and engineering infrastructure to support our enterprise machine learning systems. This role will collaborate with data scientists and statisticians from various Elanco global business functions to facilitate and lead scientific and/or business knowledge discovery, insights, and forecasting. The MLOps Engineer will be responsible for designing, implementing, and maintaining machine learning infrastructure, pipelines, and workflows. This role requires a deep understanding of data management, software development, and cloud computing.

Your Responsibilities:
- Deploy and maintain machine learning models, pipelines, and workflows in the production environment.
- Re-package (deployment process) ML models that have been developed in the non-production ML environment by ML teams for deployment to the production ML environment.
- Perform the required MLOps engineering development to refactor the non-production ML model implementation into an ML-as-Code implementation.
- Create, manage, and execute ServiceNow change requests in accordance with the Elanco IT Change Management process to manage the deployment of new models.
- Build and maintain machine learning infrastructure that is scalable, reliable, and efficient.
- Provide expert data PaaS on Azure: storage, big data platform services, serverless architectures, Azure SQL DB, NoSQL databases, and secure, automated data pipelines.

What You Need to Succeed (minimum qualifications):
- Bachelor's or Master's degree in computer science, engineering, or a related field.
- 5-7 years of experience in software engineering, data engineering, or ML engineering.

What will give you a competitive edge (preferred qualifications):
- Strong programming experience in Python.
- Solid understanding of machine learning workflows and MLOps concepts.
- Experience with CI/CD, version control (Git/GitHub), and containerization (Docker, Kubernetes).
- Hands-on experience with Azure cloud services (Data Factory, ADLS, Azure SQL, etc.).
- Experience deploying ML models to production environments.
- Familiarity with databases (SQL/NoSQL) and data pipeline design (ETL/ELT).
- Ability to translate business requirements into technical implementations.
- Strong problem-solving and debugging skills.

Additional Information:
Travel: 0%
Location: India, Bangalore

Don't meet every single requirement? Studies have shown underrecognized groups are less likely to apply to jobs unless they meet every single qualification. At Elanco we are dedicated to building a diverse and inclusive work environment. If you think you might be a good fit for a role but don't necessarily meet every requirement, we encourage you to apply. You may be the right candidate for this role or other roles! Elanco is an EEO/Affirmative Action Employer and does not discriminate on the basis of age, race, color, religion, gender, sexual orientation, gender identity, gender expression, national origin, protected veteran status, disability or any other legally protected status.
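The posting does not name a packaging tool, but on Azure/Databricks stacks "ML as Code" is often expressed with MLflow tracking and a model registry. The sketch below is only a hedged illustration of that idea; the toy dataset, model, and registry name are assumptions, not Elanco's actual workflow.

```python
# Hedged "ML as Code" sketch: train, log, and register a model with MLflow.
# The dataset, model, and registry name are illustrative assumptions.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=200).fit(X, y)

with mlflow.start_run():
    mlflow.log_param("max_iter", 200)
    mlflow.log_metric("train_accuracy", model.score(X, y))
    mlflow.sklearn.log_model(
        model,
        artifact_path="model",
        registered_model_name="demo-classifier",  # placeholder registry entry
    )
```

A deployment job can then promote a registered version between environments instead of copying notebooks, which is what makes a model re-packageable from non-production to production.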
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
hyderabad, telangana
On-site
Genpact is a global professional services and solutions firm with over 125,000 employees in 30+ countries, driven by curiosity, agility, and a commitment to creating value for clients. We serve leading enterprises worldwide, leveraging our expertise in digital operations, data, technology, and AI.

We are seeking a Lead Consultant - Databricks Developer to solve cutting-edge problems and meet functional and non-functional requirements. As a Databricks Developer, you will work closely with architects and lead engineers to design solutions and stay abreast of industry trends and standards.

Responsibilities:
- Stay updated on new technologies for potential application in service offerings.
- Collaborate with architects and lead engineers to develop solutions.
- Demonstrate knowledge of industry trends and standards.
- Exhibit strong analytical and technical problem-solving skills.
- Required experience in the Data Engineering domain.

Minimum qualifications:
- Bachelor's Degree in CS, CE, CIS, IS, MIS, or equivalent work experience.
- Proficiency in Python or Scala, preferably Python.
- Experience in Data Engineering with a focus on Databricks.
- Implementation of at least 2 projects end-to-end in Databricks.
- Proficiency in Databricks components like Delta Lake, dbConnect, db API 2.0, and Databricks workflows orchestration.
- Understanding of the Databricks Lakehouse concept and its implementation.
- Ability to create complex data pipelines and knowledge of data structures & algorithms.
- Strong skills in SQL and spark-sql.
- Experience in performance optimization and working on both batch and streaming data pipelines.
- Extensive knowledge of Spark and Hive data processing frameworks.
- Familiarity with cloud platforms like Azure, AWS, GCP, and related services.
- Experience in writing unit and integration test cases.
- Excellent communication skills and team collaboration experience.

Preferred qualifications:
- Knowledge of Unity Catalog and basic governance.
- Understanding of Databricks SQL Endpoint.
- Experience with CI/CD for building Databricks job pipelines.
- Exposure to migration projects for building unified data platforms.
- Familiarity with DBT, Docker, and Kubernetes.

Join us as a Lead Consultant in Hyderabad, India, on a full-time basis to contribute to our digital initiatives and shape the future of professional services.
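Delta Lake upserts are central to most Databricks pipelines like the ones described here. The following is a minimal sketch of the standard DeltaTable merge pattern, assuming a Databricks runtime or a local delta-spark install; the paths and the customer_id key are placeholders.

```python
# Minimal Delta Lake upsert (MERGE) sketch; paths and keys are placeholders.
from pyspark.sql import SparkSession
from delta.tables import DeltaTable

spark = SparkSession.builder.getOrCreate()

updates = spark.read.parquet("/mnt/staging/customer_updates/")
target = DeltaTable.forPath(spark, "/mnt/curated/customers")

(target.alias("t")
 .merge(updates.alias("s"), "t.customer_id = s.customer_id")
 .whenMatchedUpdateAll()      # overwrite changed rows
 .whenNotMatchedInsertAll()   # insert brand-new rows
 .execute())
```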
Posted 1 week ago
5.0 - 9.0 years
10 - 14 Lacs
hyderabad
Hybrid
A dynamic professional who can combine project management expertise with Power BI development skills, enabling actionable insights from Azure Data Lake Storage (ADLS). Strong Power BI development skills (data modeling, DAX, visualization design).

Required candidate profile: The role involves managing projects end-to-end, collaborating with cross-functional teams, and delivering high-quality analytical dashboards to support business decisions. PMP/PRINCE2/Agile certified.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You are a skilled Data Engineer with over 4 years of experience, seeking a role in Trivandrum (hybrid) for a contract duration of 6+ months on an IST shift. Your main responsibility will be to design and develop data warehouse solutions utilizing Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services. You will build and optimize data pipelines, work with large datasets, and implement automation using DevOps/CI/CD frameworks. Your expertise should lie in Azure, AWS, Terraform, ETL, Python, and data lifecycle management, and you will collaborate on architecture frameworks and best practices.

In this role, you will:
- Design and develop data warehouse solutions using Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services.
- Develop and optimize SQL queries, work with SSIS for ETL processes, and handle challenging scenarios effectively in an onshore-offshore model.
- Build and optimize data pipelines for ETL workloads, work with large datasets, and implement data transformation processes.
- Utilize DevOps/CI/CD frameworks for automation, leveraging Infrastructure as Code (IaC) with Terraform and configuration management tools.
- Participate in architecture framework discussions and best practices, and implement and maintain Azure Data Factory pipelines for ETL projects.
- Ensure effective data ingestion, transformation, loading, validation, and performance tuning.

Required skills:
- Expertise in Azure Synapse Analytics, ADLS, ADF, Databricks, Power BI, and Azure Analysis Services.
- Strong experience in SQL, SSIS, and query optimization.
- Hands-on experience with ETL pipelines, data warehousing, and analytics.
- Proficiency in Azure, AWS, and DevOps/CI/CD frameworks.
- Experience with Terraform, Jenkins, and Infrastructure as Code (IaC).
- Strong knowledge of Python for data processing and transformation.
- Ability to work in an ambiguous environment and translate vague requirements into concrete deliverables.
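As a small illustration of the validation duty above, a pipeline step might fail fast on basic data-quality checks before loading downstream. This is a hedged sketch only; the Delta path, key column, and thresholds are invented for the example.

```python
# Hypothetical fail-fast validation step before a downstream load.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.format("delta").load("/mnt/curated/sales")  # placeholder path

row_count = df.count()
null_keys = df.filter(F.col("sale_id").isNull()).count()    # placeholder key

if row_count == 0 or null_keys > 0:
    raise ValueError(
        f"Validation failed: {row_count} rows, {null_keys} null keys")
print(f"Validation passed: {row_count} rows")
```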
Posted 1 week ago
1.0 - 5.0 years
0 Lacs
telangana
On-site
Rockwell Automation is a global technology leader focused on helping the world's manufacturers be more productive, sustainable, and agile. With a team of over 28,000 employees dedicated to making the world a better place every day, we take pride in the special contributions we make. Our customers are incredible companies that play vital roles in feeding the world, providing life-saving medicine globally, and promoting clean water and green mobility. Our people are enthusiastic problem solvers who are energized by the positive impact our work has on the world. We are seeking individuals who are innovative, forward-thinking, and adept at solving complex problems to join our team. If you are passionate about doing your best work, we invite you to be a part of our team.

**Job Description:**
As a member of our team, you will report to the Manager and be responsible for the following:
- Designing, coding, and testing new data management solutions, including supporting applications and interfaces.
- Architecting data structures to facilitate "Data as a Service."
- Supporting development activities in various projects related to DA&I and Connected Enterprise for both internal and external customers.
- Developing and testing infrastructure components in Cloud and Edge-level environments.
- Monitoring industry trends and identifying opportunities to implement new technologies.
- Managing the DevOps pipeline deployment model and implementing software in all environments.
- Collaborating with business systems analysts and product owners to define requirements.

**Experience And Education:**
- Bachelor's Degree in computer science, software engineering, management information systems, or a related field.
- 1+ years of experience in the systems development lifecycle.
- Experience in data management concepts and implementations.
- 3+ years of experience with Agile development methodologies and system/process documentation.
- Experience with server-side architectures and containerization.
- 5+ years of experience with SAP Data Services, Azure ADF, ADLS, SQL, Tabular models, or other domain-specific programming languages.
- Familiarity with business concepts and the impact of data on business processes.

**Temperament:**
- Ability to support colleagues through change and change management processes.
- Team orientation and collaboration skills with both business and IT organizations.
- Courage to share viewpoints openly and provide feedback respectfully.
- Ability to convey information effectively in challenging situations.

**Role Requirements:**
- Commitment to upholding the standards of behavior set in the Code of Conduct.
- Enthusiasm for partnership across all levels of the organization.
- Ability to work effectively in a team-oriented culture.
- Willingness to pursue personal learning and skill development opportunities.

**Benefits:**
- Opportunity to collaborate and learn from colleagues in a global organization.
- Competitive compensation package and benefits.
- Hybrid work-from-home and office environment.
- Corporate Social Responsibility opportunities.
- Support from the 24/7 employee assistance program.

Rockwell Automation is committed to fostering a diverse, inclusive, and authentic workplace. We encourage candidates who are excited about the role to apply, even if their experience does not align perfectly with every qualification listed in the job description. You may be the perfect fit for this position or other roles within our organization.
Posted 2 weeks ago
5.0 - 14.0 years
0 Lacs
kolkata, west bengal
On-site
You should have a minimum of 5 to 14 years of total IT experience with a strong technical background in tools like Azure Data Factory, Databricks, Azure Synapse, SQL DB, and ADLS, among others. Your role will involve collaborating closely with business stakeholders to understand and fulfill data requirements effectively. Proficiency in utilizing Azure services and tools for data ingestion, egress, and transformation from various sources is essential.

Your responsibilities will include delivering ETL/ELT solutions encompassing data extraction, transformation, cleansing, integration, and management. You should have hands-on experience in implementing batch and near-real-time data ingestion pipelines, and in working on an event-driven cloud platform covering cloud services and applications, data integration, pipeline management, serverless infrastructure for data warehousing, and workflow orchestration using Azure cloud data engineering components like Databricks and Synapse.

Excellent written and verbal communication skills are necessary for effective interaction within the team and with stakeholders. Proven experience working with large cross-functional teams is an advantage. Familiarity with cloud migration methodologies, Azure DevOps, and GitHub is desirable, as is Azure Data Engineer certification. You should also be able to independently lead customer calls, take ownership of tasks, and collaborate effectively with the team.
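To make "batch and near-real-time ingestion" concrete, here is a minimal Spark Structured Streaming sketch that watches a landing folder and appends to a Delta table. The schema, paths, and checkpoint location are assumptions for illustration, not a prescribed design; an Event Hubs or Kafka source would follow the same shape.

```python
# Minimal near-real-time ingestion sketch with Structured Streaming.
# Schema, paths, and checkpoint location are illustrative placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.types import (StructType, StructField, StringType,
                               DoubleType, TimestampType)

spark = SparkSession.builder.getOrCreate()

schema = StructType([
    StructField("event_id", StringType()),
    StructField("amount", DoubleType()),
    StructField("event_time", TimestampType()),
])

stream = (spark.readStream
          .schema(schema)                 # streaming reads need an explicit schema
          .json("/mnt/landing/events/"))

query = (stream.writeStream
         .format("delta")
         .outputMode("append")
         .option("checkpointLocation", "/mnt/checkpoints/events")
         .start("/mnt/curated/events"))
```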
Posted 2 weeks ago
10.0 - 15.0 years
30 - 45 Lacs
hyderabad, chennai, bengaluru
Work from Office
Job Title: Azure Data Tech Lead & Data Architect

Role Overview:
Lead the design, development, and maintenance of advanced data solutions on the Azure platform, combining hands-on architecture, technical leadership, and project guidance. Collaborate closely with stakeholders, mentor data engineering teams, and drive the strategic adoption of Azure data technologies.

Key Responsibilities:
- Architect and implement scalable data solutions using Azure services (Azure Data Factory, Azure Synapse, Databricks, ADLS, Azure SQL Database, Power BI).
- Develop conceptual, logical, and physical data models, ensuring optimal data storage, retrieval, and governance.
- Lead workshops and technical sessions to define requirements, data ingestion, validation, modeling, visualization, and analytics.
- Manage and mentor data engineers and technical teams, guiding project execution and solution delivery.
- Oversee data pipeline creation, performance optimization, and troubleshooting for Azure data systems.
- Drive data governance, security, and compliance initiatives aligned with organizational standards.
- Collaborate with business and IT stakeholders, translating business objectives into data architecture and actionable solutions.
- Support project estimation, planning, and agile delivery of complex Azure data projects.
- Proactively optimize cost, resources, and scalability of data infrastructure.

Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related discipline.
- 6-8+ years of experience in data engineering, architecture, or Azure data platform roles.
- Proven expertise in Azure cloud data services (ADF, ADLS, Azure SQL, Synapse, Databricks, Power BI).
- Hands-on experience with data modeling, ETL/ELT pipelines, data warehousing, and big data technologies.
- Strong leadership, project management, and mentoring experience.
- Effective communicator able to engage both technical and non-technical stakeholders.
- Experience with on-premises Microsoft technologies (SQL Server, SSIS, SSAS) is a plus.
- Azure data certifications (e.g., DP-200, DP-201) preferred.

Skills:
- Azure Data Factory, Synapse, Databricks, ADLS, Azure SQL, Power BI.
- Data architecture, warehousing, governance, and compliance.
- Leadership, mentoring, and cross-team collaboration.
- Data pipeline design and troubleshooting, cost optimization.
- Scripting (Python, PowerShell) and database connectivity.

Kindly acknowledge the mail with the below details and acceptance of the above job description:
Name:
Contact Number:
Primary Email:
Date of Birth (DOB):
PAN Number (Mandatory):
Education:
Current Organization - Payroll:
Total IT Experience & Relevant Exp:
Notice Period:
Current CTC (Fixed + Variable):
Expected CTC (Fixed):
Counter Offer Details With DOJ:
Current Work Location:
Preferred Work Location:

For Queries: Sanjeevan Natarajan, sanjeevan.natarajan@careernet.in
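As a sketch of the dimensional modeling this role oversees, the snippet below creates a simple star-schema pair (one dimension, one fact) as Delta tables via Spark SQL. Table and column names are illustrative assumptions, not a client schema.

```python
# Illustrative star-schema DDL issued through Spark SQL; names are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

spark.sql("""
    CREATE TABLE IF NOT EXISTS dim_product (
        product_key BIGINT,
        product_id  STRING,
        category    STRING,
        brand       STRING
    ) USING DELTA
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS fact_sales (
        sale_id     STRING,
        product_key BIGINT,   -- surrogate key into dim_product
        store_key   BIGINT,
        sale_date   DATE,
        quantity    INT,
        amount      DECIMAL(18,2)
    ) USING DELTA
    PARTITIONED BY (sale_date)
""")
```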
Posted 2 weeks ago
6.0 - 11.0 years
20 - 35 Lacs
bengaluru
Work from Office
Terraform, Docker, Kubernetes, Jenkins, SQL, Azure cloud; ADLS knowledge (no experience required).
Posted 2 weeks ago
4.0 - 9.0 years
17 - 32 Lacs
pune, bangalore rural, bengaluru
Work from Office
Big 4 hiring in large numbers in Bangalore/Pune for the below role. Please call 7208835287 / 7208835290 / 7738402343 or send your CV to it@contactxindia.com.

Role & responsibilities

Mandatory skills:
- Bachelor's or higher degree in Computer Science or a related discipline, or equivalent (minimum 4+ years work experience).
- At least 3+ years of consulting or client service delivery experience on Azure Microsoft data engineering.
- At least 1+ years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational databases such as SQL Server, and data warehouse solutions such as Synapse, Azure Databricks, or Microsoft Fabric.
- Hands-on experience implementing data ingestion, ETL, and data processing using Azure services: Fabric, OneLake, ADLS, Azure Data Factory, Azure Functions, services in Microsoft Fabric, etc.
- Minimum of 1+ years of hands-on experience in Azure and big data technologies such as Fabric, Databricks, Python, SQL, ADLS/Blob, and PySpark/Spark SQL.
- Minimum of 1+ years of RDBMS experience.
- Experience in using big data file formats and compression techniques.
- Experience working with developer tools such as Azure DevOps, Visual Studio Team Server, Git, etc.

Primary roles and responsibilities:
An Azure Data Engineer is responsible for designing, building, and maintaining the data infrastructure for an organization using Azure cloud services. This includes creating data pipelines, integrating data from various sources, and implementing data security and privacy measures. The Azure Data Engineer will also be responsible for monitoring and troubleshooting data flows and optimizing data storage and processing for performance and cost efficiency.

Preferred skills:
- Experience developing and deploying ETL solutions on Azure cloud using ADF, notebooks, Synapse Analytics, Azure Functions, and other services.
- Experience developing and deploying ETL solutions on Azure cloud using services in Microsoft Fabric.
- Microsoft role-based certifications (DP-600, DP-203, DP-900, AI-102, AI-900).
- Knowledge of Microsoft Power BI, reports/dashboards, and generating insights for business users.
- Knowledge of Azure RBAC and IAM; understanding of access controls and security on Azure cloud.
- Aligned with the Microsoft vision and roadmap around the latest tools and technologies in the market.
Posted 2 weeks ago
6.0 - 10.0 years
0 Lacs
kolkata, west bengal
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture, and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself and a better working world for all.

EY's Consulting Services is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional and technical capabilities and product knowledge. EY's financial services practice offers integrated Consulting services to financial institutions and other capital markets participants. Within EY's Consulting Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way, we help create a compelling business case for embedding the right analytical practice at the heart of clients' decision-making.

**The Opportunity**

As a Senior Designer and Developer working with Informatica Intelligent Cloud Services (IICS), you will play a crucial role in designing, developing, and managing complex data integration workflows involving multiple sources such as files and tables. Your responsibilities will span various data sources to ensure seamless data movement and transformation for analytics and business intelligence purposes.

**Key Roles and Responsibilities of an IICS Senior Designer and Developer:**

- **Designing and Developing Data Integration Solutions:**
  - Develop ETL (Extract, Transform, Load) mappings and workflows using Informatica Cloud IICS to integrate data from various sources like files, multiple database tables, cloud storage, and APIs.
  - Configure synchronization tasks involving multiple database tables to ensure efficient data extraction and loading.
  - Build reusable mapping templates for different data loads including full, incremental, and CDC loads.
- **Handling Multiple Data Sources:**
  - Work with structured, semi-structured, and unstructured data sources including Oracle, SQL Server, Azure Data Lake, Azure Blob Storage, and more.
  - Manage file ingestion tasks to load large datasets from on-premises systems to cloud data lakes or warehouses.
  - Use various cloud connectors and transformations to process and transform data efficiently.
- **Data Quality, Governance, and Documentation:**
  - Implement data quality and governance policies to ensure data accuracy, integrity, and security.
  - Create detailed documentation such as source-to-target mappings, ETL design specifications, and data migration strategies.
  - Develop audit frameworks to track data loads and support compliance requirements.
- **Project Planning and Coordination:**
  - Plan and monitor ETL development projects, coordinate with cross-functional teams, and communicate effectively across organizational levels.
  - Report progress, troubleshoot issues, and coordinate deployments.
- **Performance Tuning and Troubleshooting:**
  - Optimize ETL workflows and mappings for performance.
  - Troubleshoot issues using IICS frameworks and collaborate with support teams as needed.
- **Leadership and Mentoring (Senior Role Specific):**
  - Oversee design and development efforts, review the work of junior developers, and ensure adherence to best practices.
  - Lead the creation of ETL standards and methodologies to promote consistency across projects.

**Summary of Skills and Tools Commonly Used:**
- Informatica Intelligent Cloud Services (IICS), Informatica Cloud Data Integration (CDI)
- SQL, PL/SQL, API integrations (REST V2), ODBC connections, flat files, ADLS, Sales Force Netzero
- Cloud platforms: Azure Data Lake, Azure Synapse, Snowflake, AWS Redshift
- Data modeling and warehousing concepts
- Data quality tools and scripting languages
- Project management and documentation tools

In essence, a Senior IICS Designer and Developer role requires technical expertise in data integration across multiple sources, project leadership, and ensuring high-quality data pipelines to support enterprise BI and analytics initiatives.

**What We Look For:**
We are seeking a team of individuals with commercial acumen, technical experience, and enthusiasm to learn new things in a fast-moving environment. Join a market-leading, multi-disciplinary team of professionals and work with leading businesses across various industries.

**What Working at EY Offers:**
At EY, you will work on inspiring and meaningful projects, receive support, coaching, and feedback from engaging colleagues, and have opportunities for skill development and career progression. You will have the freedom and flexibility to handle your role in a way that suits you best. Join EY in building a better working world, creating long-term value for clients, people, and society, and fostering trust in the capital markets through data-driven solutions and diverse global teams.
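The full/incremental/CDC loads listed above usually reduce to a high-water-mark pattern regardless of tool. IICS implements this in its designer rather than in code, so the sketch below expresses the same idea in PySpark purely for illustration, with invented paths and column names.

```python
# Generic incremental-load (high-water-mark) pattern, sketched in PySpark.
# Not IICS-specific; paths and the updated_at column are placeholders.
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.utils import AnalysisException

spark = SparkSession.builder.getOrCreate()
target_path = "/mnt/curated/orders"

try:  # read the watermark already loaded into the target
    last_ts = (spark.read.format("delta").load(target_path)
               .agg(F.max("updated_at")).first()[0])
except AnalysisException:  # first run: target does not exist yet
    last_ts = None

src = spark.read.parquet("/mnt/staging/orders/")
delta_rows = src if last_ts is None else src.filter(F.col("updated_at") > last_ts)

delta_rows.write.format("delta").mode("append").save(target_path)
```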
Posted 2 weeks ago
3.0 - 7.0 years
0 Lacs
haryana
On-site
At Capgemini Invent, we believe in the power of diversity to drive change. As inventive transformation consultants, we leverage our strategic, creative, and scientific capabilities to collaborate closely with clients in delivering cutting-edge solutions. Join our team to lead transformation initiatives tailored to address both current challenges and future opportunities. Our approach is informed and validated by science and data, supercharged by creativity and design, and supported by purpose-driven technology.

We are seeking a candidate with strong expertise in SSAS Tabular Model, proficient in DAX queries and query optimization, and adept at resolving performance issues, database tuning, and data replication techniques using Microsoft SQL Server. The ideal candidate will have a solid background in working with stored procedures, functions, triggers, views, and data warehousing, demonstrating a clear understanding of concepts such as creating facts and dimensions.

The role requires significant experience in Azure SQL Database, Azure Data Factory (ADF), Azure Databricks (ADB), Azure Synapse, T-SQL, Azure SQL Data Warehouse (DWH), Azure Data Lake Storage (ADLS), SparkSQL/PySpark, and other Azure services for database management, storage, security, and development of Business Intelligence solutions. Familiarity with Microsoft Fabric is considered beneficial, along with proficiency in writing ADB/Synapse notebooks. Additionally, familiarity with Azure Functions, Azure Stream Analytics, Document DB (or Cosmos DB), MDS (SQL Master Data Services), and graph databases is preferred.

The successful candidate should possess excellent skills in Business Intelligence, problem-solving, analytics, reporting, and visualization. Strong communication skills are essential, as the role involves direct interaction with clients as an individual contributor.

Capgemini is a global leader in business and technology transformation, supporting organizations in accelerating their digital and sustainable transition while delivering tangible impact for enterprises and society. With a team of over 340,000 professionals in more than 50 countries, Capgemini leverages its 55-year heritage to unlock the value of technology for its clients, offering end-to-end services and solutions spanning from strategy and design to engineering. The organization's capabilities in AI, cloud, and data, coupled with deep industry expertise and a strong partner ecosystem, enable it to address diverse business needs effectively. In 2023, the Group reported global revenues of €22.5 billion.
Posted 2 weeks ago
0.0 years
0 Lacs
noida, uttar pradesh, india
On-site
Why Join Us?
Are you inspired to grow your career at one of India's Top 25 Best Workplaces in the IT industry? Do you want to do the best work of your life at one of the fastest growing IT services companies? Do you aspire to thrive in an award-winning work culture that values your talent and career aspirations? It's happening right here at Iris Software.

About Iris Software
At Iris Software, our vision is to be our clients' most trusted technology partner, and the first choice for the industry's top professionals to realize their full potential. With over 4,300 associates across India, U.S.A., and Canada, we help our enterprise clients thrive with technology-enabled transformation across financial services, healthcare, transportation & logistics, and professional services. Our work covers complex, mission-critical applications with the latest technologies, such as high-value complex Application & Product Engineering, Data & Analytics, Cloud, DevOps, Data & MLOps, Quality Engineering, and Business Automation.

Working at Iris
Be valued, be inspired, be your best. At Iris Software, we invest in and create a culture where colleagues feel valued, can explore their potential, and have opportunities to grow. Our employee value proposition (EVP) is about Being Your Best - as a professional and person. It is about being challenged by work that inspires us, being empowered to excel and grow in your career, and being part of a culture where talent is valued. We're a place where everyone can discover and be their best version.

Job Summary:
We are seeking a highly skilled and experienced Lead Data Engineer to join our data engineering team. The ideal candidate will have a strong background in designing and deploying scalable data pipelines using Azure technologies, Spark, Flink, and modern data lakehouse architectures. This role demands hands-on technical expertise, leadership in managing offshore teams, and a strategic mindset to drive data-driven decision-making across financial and regulatory domains.

Key Responsibilities:
- Design, develop, and deploy scalable batch and streaming data pipelines using PySpark, Flink, Scala, SQL, and Redis.
- Lead migration of complex on-premise workflows to the Azure cloud ecosystem (Databricks, ADLS, Azure Data Factory), optimizing infrastructure and deployment processes.
- Implement performance tuning strategies to reduce job runtimes and enhance data reliability, including optimization of Unity Catalog tables.
- Collaborate with product stakeholders to deliver high-priority data features and ensure alignment with business goals.
- Manage and mentor an 8-member offshore team, fostering best practices in data engineering and agile development.
- Conduct internal training sessions on modern data architecture, cloud-native deployments, and data engineering best practices.

Required Skills & Technologies:
- Big data tools: PySpark, Spark, Flink, Hive, Hadoop, Delta Lake, streaming, ETL
- Cloud platforms: Azure (ADF, Databricks, ADLS, Event Hub), AWS (S3)
- Orchestration & DevOps: Airflow, Docker, Kubernetes, GitHub Actions, Jenkins
- Programming languages: Python, Scala, SQL, Shell
- Other tools: Redis, Solace, MQ, Kafka, Grafana, Postman
- Soft skills: team leadership, agile methodologies, stakeholder management, technical training

Certifications (good to have):
- Databricks Certified: Data Engineer Associate, Lakehouse Fundamentals
- Microsoft Certified: Azure Fundamentals (AZ-900), Azure Data Fundamentals (DP-900)

Preferred Qualifications:
- Bachelor's degree in Engineering (E.C.E.) with strong academic performance.
- Proven experience in financial data pipelines, regulatory reporting, and risk analytics.

Mandatory Competencies:
- Big Data: PySpark, Spark, Hive, Hadoop
- Programming Language: Scala
- DevOps/Configuration Mgmt: Containerization (Docker, Kubernetes), Jenkins, GitLab/GitHub/Bitbucket
- Cloud - AWS: AWS S3, S3 Glacier, AWS EBS
- Behavioral: Communication

Perks and Benefits for Irisians:
At Iris Software, we offer world-class benefits designed to support the financial, health and well-being needs of our associates to help achieve harmony between their professional and personal growth. From comprehensive health insurance and competitive salaries to flexible work arrangements and ongoing learning opportunities, we're committed to providing a supportive and rewarding work environment. Join us and experience the difference of working at a company that values its employees' success and happiness.
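Given the Kafka and streaming emphasis above, here is a minimal Structured Streaming read from Kafka into a raw Delta layer. It assumes the spark-sql-kafka connector is on the classpath; the broker address, topic, and paths are placeholders.

```python
# Hedged sketch: stream a Kafka topic into a raw Delta table.
# Requires the spark-sql-kafka package; broker/topic/paths are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "trades")
          .option("startingOffsets", "latest")
          .load()
          .selectExpr("CAST(key AS STRING) AS key",
                      "CAST(value AS STRING) AS payload",
                      "timestamp AS ingest_time"))

(events.writeStream
 .format("delta")
 .option("checkpointLocation", "/mnt/checkpoints/trades")
 .outputMode("append")
 .start("/mnt/raw/trades"))
```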
Posted 2 weeks ago
7.0 - 11.0 years
0 Lacs
karnataka
On-site
NTT DATA is looking for a Sr. Data Modeler to join their team in Bangalore, Karnataka (IN-KA), India. As a Sr. Data Modeler, your primary responsibility will be to design and implement dimensional (star/snowflake) and 3NF data models that are optimized for analytical and reporting needs in Azure Synapse and Power BI. You will also be required to perform STM (source-to-target mapping) from the data source to multiple layers in the data lake. Additionally, you will analyze and optimize Spark SQL queries and collaborate with cross-functional teams to ensure that the data models align with business requirements.

The ideal candidate should have at least 7 years of experience in SQL and PySpark. Hands-on experience with Azure Synapse, ADLS, Delta format, and metadata-driven data pipelines is essential. You should also be experienced in implementing dimensional (star/snowflake) and 3NF data models, as well as in PySpark and Spark SQL, including query optimization and performance tuning. Experience in writing complex SQL, performing source-to-target mapping (STM), and familiarity with CI/CD practices in Git and Azure DevOps are also required.

In this role, you will be responsible for maintaining version control and CI/CD pipelines in Git and Azure DevOps, as well as integrating Azure Purview to enable access controls and implementing row-level security. Strong problem-solving and analytical skills for debugging and optimizing data pipelines in Azure Synapse are essential.

If you are a passionate and innovative individual looking to be part of a forward-thinking organization, apply now to join NTT DATA and be a part of their inclusive and adaptable team dedicated to long-term success and digital transformation. Visit us at us.nttdata.com.
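As one concrete instance of the Spark SQL optimization work this role mentions, broadcasting a small dimension avoids shuffling a large fact table during a star-schema join. A hedged sketch, assuming Delta tables with placeholder paths and keys:

```python
# Broadcast-join sketch for star-schema queries; paths/keys are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.getOrCreate()

fact = spark.read.format("delta").load("/mnt/curated/fact_sales")
dim = spark.read.format("delta").load("/mnt/curated/dim_product")

# Hinting the small side skips the shuffle of the large fact table.
joined = fact.join(broadcast(dim), "product_key")
joined.explain()  # plan should show a BroadcastHashJoin
```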
Posted 2 weeks ago
6.0 - 10.0 years
30 - 35 Lacs
bengaluru
Work from Office
We are seeking an experienced PySpark Developer / Data Engineer to design, develop, and optimize big data processing pipelines using Apache Spark and Python (PySpark). The ideal candidate should have expertise in distributed computing, ETL workflows, data lake architectures, and cloud-based big data solutions.

Key Responsibilities:
- Develop and optimize ETL/ELT data pipelines using PySpark on distributed computing platforms (Hadoop, Databricks, EMR, HDInsight).
- Work with structured and unstructured data to perform data transformation, cleansing, and aggregation.
- Implement data lake and data warehouse solutions on AWS (S3, Glue, Redshift), Azure (ADLS, Synapse), or GCP (BigQuery, Dataflow).
- Optimize PySpark jobs for performance tuning, partitioning, and caching strategies.
- Design and implement real-time and batch data processing solutions.
- Integrate data pipelines with Kafka, Delta Lake, Iceberg, or Hudi for streaming and incremental updates.
- Ensure data security, governance, and compliance with industry best practices.
- Work with data scientists and analysts to prepare and process large-scale datasets for machine learning models.
- Collaborate with DevOps teams to deploy, monitor, and scale PySpark jobs using CI/CD pipelines, Kubernetes, and containerization.
- Perform unit testing and validation to ensure data integrity and reliability.

Required Skills & Qualifications:
- 6+ years of experience in big data processing, ETL, and data engineering.
- Strong hands-on experience with PySpark (Apache Spark with Python).
- Expertise in SQL, the DataFrame API, and RDD transformations.
- Experience with big data platforms (Hadoop, Hive, HDFS, Spark SQL).
- Knowledge of cloud data processing services (AWS Glue, EMR, Databricks, Azure Synapse, GCP Dataflow).
- Proficiency in writing optimized queries, partitioning, and indexing for performance tuning.
- Experience with workflow orchestration tools like Airflow, Oozie, or Prefect.
- Familiarity with containerization and deployment using Docker, Kubernetes, and CI/CD pipelines.
- Strong understanding of data governance, security, and compliance (GDPR, HIPAA, CCPA, etc.).
- Excellent problem-solving, debugging, and performance optimization skills.
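To ground the "partitioning and caching strategies" bullet, the sketch below repartitions by an aggregation key and caches an intermediate result reused by several outputs. Paths, the partition count, and column names are assumptions for illustration.

```python
# Partitioning and caching sketch; paths, counts, and columns are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.getOrCreate()
df = spark.read.parquet("/mnt/raw/clickstream/")

# Repartition on the aggregation key to balance tasks; cache because the
# same intermediate result feeds more than one downstream aggregation.
by_user = df.repartition(200, "user_id").cache()

daily = by_user.groupBy("user_id", F.to_date("ts").alias("day")).count()
daily.write.mode("overwrite").parquet("/mnt/curated/daily_user_counts/")

top = by_user.groupBy("user_id").count().orderBy(F.desc("count")).limit(100)
top.write.mode("overwrite").parquet("/mnt/curated/top_users/")

by_user.unpersist()
```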
Posted 2 weeks ago
12.0 - 16.0 years
0 Lacs
hyderabad, telangana
On-site
About Inspire Brands:
Inspire Brands is looking for a Data Architect, a self-driven senior technical professional, to collaborate with our business, product, and development teams and help qualify, identify, and build Data & Analytics solutions. As part of the Enterprise Data Team, this position is responsible for actively defining robust, efficient, scalable, and innovative solutions that are custom fit to our needs.

Job Summary:
- Play a key role working with product and business teams to understand their strategic, business, and technical needs.
- Define technical architecture roadmaps which help realize business value in both tactical and strategic ways.
- Define and continuously update architecture principles in line with business and technology needs.
- Design solutions involving data architectures based on established principles. Architectures include but are not limited to data lake, data fabric, dimensional marts, MDM, etc.
- Create data flow diagrams and details to establish data lineage and help educate team members in a clear and articulate manner.
- Partner with technical leads to identify solution components and the latest data-related technologies that best meet business requirements.
- Identify gaps in the environment, process, and skills with the intent to address, educate, and bring teams together to drive results.
- Become a thought leader by educating and sharing best practices and architecture patterns, building deep relationships with technical leaders, contributing to publications, white papers, conferences, etc.
- Provide support to enterprise data infrastructure management and analysts to ensure development of efficient data systems utilizing established standards, procedures, and methodologies.
- Create cloud data models and database architecture using database deployment and monitoring automation code, for long-term technical viability of cloud master data management integrated repositories.
- Suggest ideas to improve system performance and cost reduction.
- Communicate data architecture to stakeholders and collaborate and coordinate with other technical staff.
- Ensure adherence to enterprise data movement, quality, and accountability standards in technology.
- Conduct data-driven analyses of the usage of detailed data elements across the business domain to provide optimal data provisioning patterns across the application space.
- Possess excellent quantitative/analytic skills and the ability to influence strategic direction, as well as develop tactical plans.
- Improve and streamline processes regarding data flow and data quality to improve data accuracy, viability, and value.

Education Requirements:
Minimum: 4-year Bachelor's degree. Preferred: BS in Computer Science.

Experience Qualification:
- Minimum 12+ years of experience.
- Must have: strong hands-on background in Data Engineering (DE) and Data Governance (DG).
- Strong in modern data warehousing and experienced in a similar environment.
- A strong candidate to support PODs based in HSC, though support may be needed for handling and initiating projects; Product-Oriented Delivery Structure (Agile PODs).
- Excellent communication skills.

Required Knowledge, Skills, or Abilities:
- Strong experience in Azure services including but not limited to ADLS, ADF, Event Hubs, Functions, etc.
- Exhaustive experience in Databricks, Snowflake, Apache Airflow, etc.
- Exceptional interpersonal skills with the ability to communicate verbally, in writing, and in presentations at all levels, across multiple functions, and to drive participation and collaboration.
- Demonstrated ability in working with key executives and stakeholders to solve and drive business outcomes.
- Deep understanding of processes related to the data solution SDLC, with a clear ability to improve processes where necessary.
- Strong analytical and quantitative skills.
- Enjoys seeing the impact of work in the organization; highly motivated and comfortable with ambiguity.
- Maverick approach to work, with a passion for technology and ways-of-working trends.
Posted 2 weeks ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
You should hold a Bachelor's or higher degree in Computer Science or a related discipline, or possess equivalent qualifications with a minimum of 4+ years of work experience. Additionally, you should have at least 1+ years of consulting or client service delivery experience specifically related to Azure Microsoft Fabric.

Your role will involve 1+ years of experience in developing data ingestion, data processing, and analytical pipelines for big data. This includes working with relational databases like SQL Server and data warehouse solutions such as Synapse/Azure Databricks. You must have hands-on experience in implementing data ingestion, ETL, and data processing using various Azure services such as ADLS, Azure Data Factory, Azure Functions, and services in Microsoft Fabric.

A minimum of 1+ years of hands-on experience in Azure and big data technologies is essential, including proficiency in Java, Python, SQL, ADLS/Blob, PySpark/Spark SQL, and Databricks. Moreover, you should have a minimum of 1+ years of experience working with RDBMS, as well as familiarity with big data file formats and compression techniques. Your expertise should also extend to using developer tools like Azure DevOps, Visual Studio Team Server, Git, etc. This comprehensive skill set will enable you to excel in this role and contribute effectively to the team.
Posted 2 weeks ago
5.0 - 7.0 years
0 Lacs
pune, maharashtra, india
Remote
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our client's challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your Role:
- IT experience with a minimum of 5+ years in creating data warehouses, data lakes, ETL/ELT, and data pipelines on the cloud.
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP; preferably in the Life Sciences domain.
- Experience with cloud storage, cloud database, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.

Your Profile:
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud.
- Must understand networking, security, design principles, and best practices in the cloud.
- Knowledge of IoT and real-time streaming would be an added advantage.
- Lead architectural/technical discussions with clients.
- Excellent communication and presentation skills.

What you will love about working here:
- We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance.
- At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities.
- Equip yourself with valuable certifications in the latest technologies such as Generative AI.

Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
Posted 2 weeks ago
1.0 - 3.0 years
0 Lacs
bengaluru, karnataka, india
Remote
Capgemini Invent is the digital innovation, consulting and transformation brand of the Capgemini Group, a global business line that combines market-leading expertise in strategy, technology, data science and creative design, to help CxOs envision and build what's next for their businesses.

Your Role:
- Should have developed or worked on at least one Gen AI project.
- Data pipeline implementation experience with any of these cloud providers: AWS, Azure, GCP.
- Experience with cloud storage, cloud database, cloud data warehousing, and data lake solutions like Snowflake, BigQuery, AWS Redshift, ADLS, and S3.
- Good knowledge of cloud compute services and load balancing.
- Good knowledge of cloud identity management, authentication, and authorization.
- Proficiency in using cloud utility functions such as AWS Lambda, AWS Step Functions, Cloud Run, Cloud Functions, and Azure Functions.
- Experience in using cloud data integration services for structured, semi-structured, and unstructured data, such as Azure Databricks, Azure Data Factory, Azure Synapse Analytics, AWS Glue, AWS EMR, Dataflow, and Dataproc.

Your Profile:
- Good knowledge of infra capacity sizing and costing of cloud services to drive optimized solution architecture, leading to optimal infra investment vs. performance and scaling.
- Able to contribute to making architectural choices using various cloud services and solution methodologies.
- Expertise in programming using Python.
- Very good knowledge of cloud DevOps practices such as infrastructure as code, CI/CD components, and automated deployments on the cloud.
- Must understand networking, security, design principles, and best practices in the cloud.

What you will love about working here:
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.

About Capgemini:
Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, cloud and data, combined with its deep industry expertise and partner ecosystem. The Group reported 2023 global revenues of €22.5 billion.
Posted 2 weeks ago
0.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Data Architect
Primary Skills: ETL Fundamentals, SQL, ADLS Gen2, Data Factory, Databricks
Job requirements: Data Architect
Posted 2 weeks ago
0.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Ready to shape the future of work? At Genpact, we don't just adapt to change; we drive it. AI and digital innovation are redefining industries, and we're leading the charge. Our industry-first accelerator is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models onward, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, tech-driven environment, love solving real-world problems, and want to be part of a team that's shaping the future, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today.

Inviting applications for the role of Principal Consultant, Azure Data Engineer

Responsibilities
- Strong knowledge of building pipelines in Azure Data Factory or Azure Synapse Analytics.
- Knowledge of Azure Databricks and Azure Synapse Analytics for ingesting data from different sources.
- Good at writing and running SQL queries on SQL Database and SQL Data Warehouse (see the sketch after this posting).
- Knowledge of design, development, testing and implementation of Azure data stack technologies.
- Expert-level knowledge of SQL Database and data warehousing.
- Knowledge of Azure Data Lake (Blob and ADLS) is mandatory.
- Should be strong in either Python or Scala programming.
- Experience in various ETL techniques and frameworks.
- Ability to work in a team and to deliver and accept peer review.
- Understanding of machine learning algorithms and Power BI is an added advantage.
- Experience on a GenAI project.

Qualifications we seek in you!
Minimum qualifications
- Graduate
Preferred qualifications
- Personal drive and positive work ethic to deliver results within deadlines and in demanding situations.
- Flexibility to adapt to a variety of engagement types, working hours, work environments and locations.
- Excellent communication skills.

Why join Genpact
- Be a transformation leader: work at the cutting edge of AI, automation, and digital innovation.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: get hands-on experience, mentorship, and continuous learning opportunities.
- Work with the best: join 140,000+ bold thinkers and problem-solvers who push boundaries every day.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the tech shapers and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability or any other characteristic protected by applicable laws.
Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation. Please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such recruitment scams include being asked to purchase a 'starter kit,' pay to apply, or buy equipment or training.
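As a small illustration of the SQL-querying responsibility listed above, here is a hedged Python sketch that runs an aggregate query against an Azure SQL Database or Synapse dedicated SQL pool. It assumes the pyodbc package and an installed ODBC driver; the server, database, credentials and table names are placeholders, not details from the posting.

# Illustrative query against Azure SQL / Synapse from Python via pyodbc.
# All connection details and table names below are hypothetical.
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=example-server.database.windows.net;"
    "DATABASE=example_dwh;"
    "UID=example_user;PWD=example_password"
)

query = """
    SELECT order_date, SUM(amount) AS total_amount
    FROM dbo.sales
    GROUP BY order_date
    ORDER BY order_date
"""

# The connection context manager commits on successful exit; iterating
# the executed cursor yields one row per group in the aggregate.
with pyodbc.connect(conn_str) as conn:
    cursor = conn.cursor()
    for order_date, total_amount in cursor.execute(query):
        print(order_date, total_amount)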
Posted 3 weeks ago