Get alerts for new jobs matching your selected skills, preferred locations, and experience range. Manage Job Alerts
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
Role Overview:
As an experienced SSIS Developer / SQL Developer, you will design, develop, and maintain robust data integration solutions using SQL Server Integration Services (SSIS). The role requires a strong background in SQL development and a deep understanding of ETL processes, from building pipelines into data warehouses through to monitoring and tuning package performance.

Key Responsibilities:
- Design, develop, and maintain robust data integration solutions using SQL Server Integration Services (SSIS)
- Develop ETL processes to extract, transform, and load data from various sources into data warehouses or databases
- Implement complex data transformations and integrations to meet business requirements
- Write and optimize SQL queries and stored procedures for data extraction, manipulation, and reporting
- Create and maintain database objects, such as tables, views, and indexes, to support ETL processes and data analysis
- Monitor and optimize SSIS package performance to ensure efficient data processing

Qualifications Required:
- Proven experience as an SSIS Developer / SQL Developer
- Strong background in SQL development and ETL processes
- Proficiency in writing and optimizing SQL queries and stored procedures
- Experience in creating and maintaining database objects
- Familiarity with monitoring and optimizing SSIS package performance

(Note: No additional details about the company were provided in the job description.)
Posted 1 day ago
12.0 - 16.0 years
0 Lacs
maharashtra
On-site
You are a strategic thinker passionate about driving solutions in BI and Analytics (Alteryx, SQL, Tableau), and you have found the right team. As a BI Developer Senior Associate within the Asset and Wealth Management Finance Transformation and Analytics team, you will spend each day defining, refining, and delivering set goals for our firm.

**Key Responsibilities:**
- Design the technical and information architecture for the MIS (DataMarts) and Reporting Environments.
- Focus on data modeling and database design for the AWM LOB.
- Support the MIS team in query optimization and deployment of BI technologies, including but not limited to Alteryx, Tableau, Databricks, MS SQL Server (T-SQL programming)/SSIS, and SSRS.
- Design and develop complex dashboards from large and/or disparate data sets.
- Scope, prioritize, and coordinate activities with the product owners.
- Partner with technology teams to identify solutions required to establish a robust MIS environment.
- Design and develop complex queries which cater to data inputs for the dashboards/reports from large data sets.
- Work on agile improvements by sharing experiences and knowledge with the team. Advocate and steer the team to implement a CI/CD (DevOps) workflow.

Overall, the ideal candidate for this position will be highly skilled in reporting methodologies, data manipulation, and analytics tools, and will have expertise in the visualization and presentation of enterprise data.

**Qualifications Required:**
- Bachelor's Degree in MIS, Computer Science, or Engineering. Other fields of study with significant professional experience in BI development are acceptable.
- 12+ years of experience in data warehousing, ETL, and visualization.
- Strong work experience in data wrangling tools like Alteryx.
- Working proficiency in data visualization tools. Experience with BI technologies including Alteryx, Tableau, MS SQL Server (SSIS, SSRS), Databricks, and ThoughtSpot.
- Working knowledge of querying data from databases such as MS SQL Server, Snowflake, Databricks, etc.
- Strong knowledge of designing database architecture and building scalable visualization solutions. Ability to write complicated yet efficient SQL queries and stored procedures.
- Experience in building end-to-end ETL processes.
- Experience in working with multiple data sources and handling large volumes of data.
- Experience in the conversion of data into information.
- Experience in the end-to-end implementation of Business Intelligence (BI) reports and dashboards.
- Good communication and analytical skills.

The job description does not provide any additional details about the company.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
chennai, tamil nadu
On-site
Role Overview:
As a NetSuite Analytics Warehouse (NSAW) Solution Architect at BONbLOC, your primary responsibility will be to design, implement, and optimize cloud-based analytics and reporting solutions. You will lead the architectural design of NSAW solutions, ensuring alignment with business objectives, scalability, and performance. Deep knowledge of NetSuite ERP data structures, data modeling, and integration with analytics platforms will be crucial for this role.

Key Responsibilities:
- Develop the end-to-end architecture for NSAW implementations, ensuring scalability, performance, and data integrity.
- Define the data model and semantic layer to optimize reporting and analytics capabilities.
- Align NSAW solutions with business objectives, compliance standards, and best practices.
- Oversee the extraction, transformation, and loading (ETL) processes from NetSuite ERP and other data sources into NSAW.
- Collaborate with technical teams to configure and customize NSAW for business-specific needs.
- Integrate NSAW with Oracle Analytics Cloud (OAC) and other BI tools where required.
- Define and implement data governance frameworks, ensuring data quality, consistency, and compliance.
- Establish and enforce security protocols, user roles, and access controls for NSAW.
- Work closely with business analysts, developers, and end-users to gather requirements and deliver analytics solutions.
- Provide technical guidance and mentorship to development and analytics teams.
- Engage with stakeholders to ensure adoption and effective use of NSAW solutions.
- Troubleshoot performance issues, integration challenges, and user-reported errors.
- Stay updated with Oracle NetSuite and analytics industry trends, recommending improvements where relevant.

Qualifications Required:
- Bachelor's degree in Computer Science, Information Systems, Data Analytics, or a related field (Master's preferred).
- 5+ years of experience in BI/analytics architecture, with at least 2 years working with NSAW.
- Strong knowledge of NetSuite ERP data structures, schema, and integration methods.
- Proven expertise in data modeling, ETL processes, and semantic modeling.
- Experience with Oracle Analytics Cloud (OAC), BI Publisher, or similar reporting tools.
- Knowledge of SQL, data mapping, and API integrations.
- Strong problem-solving skills and ability to manage multiple projects simultaneously.
- Excellent communication and stakeholder management skills.

Additional Company Details:
BONbLOC is a 5-year-old, fast-growing software and services company that focuses on building SaaS solutions using Blockchain, Data Science, and IoT technologies. The company is dedicated to providing offshore/onsite support to large customers in their IT modernization efforts. BONbLOC's mission is to build simple, scalable solutions using innovative technologies to enable customers to realize unprecedented business value. Its core values include Integrity, Collaboration, Innovation, and Excellence.
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
kochi, kerala
On-site
As a Data Science Programs Lead Associate Director at EY, you will be leading the Data Engineering and Data Science pillars within the Global Analytics team in the Global Finance function. Your role will involve scoping, exploring, and delivering advanced data science and machine learning projects to drive growth and create efficiencies for the firm. You will work closely with the Global Analytics team lead and other pillar leads to ensure that the best data and analytics-driven insights are used in strategic and operational decision-making across EY. Your key responsibilities will include: - Supporting and collaborating with the team to deliver advanced data science and ML projects - Using agile best practices to manage team priorities and workflow across multiple projects - Ensuring high standards are maintained in terms of your team's work, including validation, testing, and release management - Being a thought leader and driving innovation across the team - Coaching and developing the team to ensure they have the right skills to succeed - Communicating developments to stakeholders in a clear and relevant manner To qualify for this role, you must have: - An excellent track record in leading teams to develop data science and machine learning solutions - The ability to build trust with key stakeholders and explain analysis in a visual story - Experience in proactive innovation and creating new solutions to meet business requirements - Strong experience in creating and managing automated ETL processes for machine learning pipelines - Practical experience in performing exploratory analytics and creating data science pipelines using Python and SQL Ideally, you will also have experience with Graph databases, metadata management, LLMs using vectorized and structured data, and PowerBI. At EY, you will have the opportunity to work in an inclusive environment that values flexible working arrangements. You will be rewarded with a competitive remuneration package and comprehensive Total Rewards, including support for career development and flexible working. EY offers support, coaching, and feedback from engaging colleagues, opportunities to develop new skills, and the freedom to handle your role in a way that suits you. EY is committed to building a better working world by creating long-term value for clients, people, and society. Through data and technology, EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate across various sectors.,
Posted 1 day ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
In the role overview, you will be a part of the business application consulting team at PwC, where you will specialize in providing consulting services for various business applications. Your main focus will be on helping clients optimize their operational efficiency by analyzing their needs, implementing software solutions, and providing training and support for seamless integration and utilization of business applications. Specifically in SAP technology, you will be responsible for tasks such as installation, configuration, administration, development, and support of SAP products and technologies. Key Responsibilities: - Analyze client needs and implement software solutions to optimize operational efficiency - Provide training and support for seamless integration and utilization of business applications - Specialize in utilizing and managing SAP software and solutions within an organization - Responsible for tasks such as installation, configuration, administration, development, and support of SAP products and technologies Qualifications Required: - Bachelor's or Master's degree in a relevant field (e.g., computer science, information systems, engineering) - Minimum of 6 years of experience in HANA Native development and configurations, including at least 1 year with SAP BTP Cloud Foundry and HANA Cloud - Demonstrated experience in working with various data sources SAP (SAP ECC, SAP CRM, SAP S/4HANA) and non-SAP (Oracle, Salesforce, AWS) - Strong focus on building expertise in constructing calculation views within the HANA Cloud environment (BAS) and other supporting data artifacts - Certification in SAP HANA or related areas is a plus In this role at PwC, you will be expected to apply a learning mindset, appreciate diverse perspectives, sustain high performance habits, actively listen, seek feedback, and gather information from various sources to analyze facts. You will also be required to commit to understanding how the business works, build commercial awareness, and learn and apply professional and technical standards. Your ability to take ownership, deliver quality work, and drive value for clients will be crucial for success in this role.,
Posted 1 day ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Role Overview: You will be responsible for designing, implementing, and maintaining ETL processes using ADF and ADB. Your role will involve creating and managing views in ADB and SQL to ensure efficient data access. Additionally, you will optimize SQL queries for large datasets and high performance. Conducting end-to-end testing and impact analysis on data pipelines will also be a part of your responsibilities. Key Responsibilities: - Identify and resolve bottlenecks in data processing to ensure smooth operation of data pipelines. - Optimize SQL queries and Delta Tables to achieve fast data processing. - Implement data sharing methods such as Delta Share, SQL Endpoints, and utilize Delta Tables for efficient data sharing and processing. - Integrate external systems through Databricks Notebooks and build scalable solutions. Experience in building APIs is considered a plus. - Collaborate with teams to understand requirements and design solutions effectively. - Provide documentation for data processes and architectures to ensure clarity and transparency. Qualifications Required: - Proficiency in ETL processes using ADF and ADB. - Strong SQL skills with the ability to optimize queries for performance. - Experience in data pipeline optimization and performance tuning. - Knowledge of data sharing methods like Delta Share and SQL Endpoints. - Ability to integrate external systems and build APIs using Databricks Notebooks. - Excellent collaboration skills and the ability to document data processes effectively.,
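As a rough illustration of the ADF/Databricks pattern these responsibilities describe, here is a minimal PySpark/Delta sketch. The mount path, table name, and columns are hypothetical, and it assumes a Databricks (or Delta-enabled Spark) environment.

```python
# Minimal PySpark/Delta sketch (hypothetical paths and names; assumes Databricks).
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: read raw files landed by ADF into the lake.
raw = spark.read.option("header", True).csv("/mnt/landing/orders/")

# Transform: type the columns and drop obviously bad rows.
clean = (
    raw.withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date"))
       .filter(F.col("order_id").isNotNull())
)

# Load: write a partitioned Delta table that downstream views and SQL endpoints can query.
(clean.write.format("delta")
      .mode("overwrite")
      .partitionBy("order_date")
      .saveAsTable("analytics.orders_silver"))

# Databricks-specific maintenance for faster reads on large tables.
spark.sql("OPTIMIZE analytics.orders_silver ZORDER BY (customer_id)")
```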
Posted 1 day ago
3.0 - 5.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Job Summary
We are seeking a highly skilled and passionate GenAI & Data Science Engineer with 3-5 years of experience in Python development, Generative AI, and Data Science. The ideal candidate will have a strong background in AI agent workflows, LLM fine-tuning, and Retrieval-Augmented Generation (RAG) models. You will play a key role in designing, developing, and deploying cutting-edge AI solutions using frameworks such as LangChain, LlamaIndex, and Hugging Face. This role offers the opportunity to work on transformative AI-driven solutions, leveraging state-of-the-art tools and frameworks to create impactful solutions in real-world applications.

Key Responsibilities
- Design, develop, and deploy AI solutions with a focus on Generative AI and Data Science.
- Fine-tune Large Language Models (LLMs) and implement Retrieval-Augmented Generation (RAG) models.
- Collaborate with cross-functional teams to integrate AI models into business workflows.
- Utilize frameworks such as LangChain, LlamaIndex, and Hugging Face to build scalable AI solutions.
- Participate in end-to-end AI model development, including data preprocessing, model selection, training, evaluation, and deployment.
- Continuously monitor and optimize the performance of AI models to ensure they meet business requirements.
- Work with stakeholders to understand AI requirements and contribute to solution design and architecture.
- Stay up to date with the latest advancements in AI technologies and industry trends.

Qualifications
- Bachelor's or Master's degree in Computer Science, Data Science, AI, or a related field.
- 3-5 years of professional experience in Python development, AI, and Data Science.
- Proven experience with Generative AI, including fine-tuning LLMs and working with RAG models.
- Hands-on experience with frameworks like LangChain, LlamaIndex, and Hugging Face.
- Strong understanding of machine learning algorithms, deep learning, and natural language processing (NLP).
- Experience in AI model deployment and scaling in production environments.

Technical Skills
- Programming: Python, including libraries like TensorFlow, PyTorch, Pandas, NumPy, etc.
- AI/ML Frameworks: LangChain, LlamaIndex, Hugging Face, etc.
- Machine Learning Algorithms: supervised and unsupervised learning, NLP, reinforcement learning.
- Data Engineering: data preprocessing, data wrangling, ETL processes. Databricks experience.
- Cloud Platforms: AWS, GCP, Azure (experience with AI tools on cloud platforms).
- Version Control: Git, GitHub, GitLab. Familiarity with containerization tools like Docker and Kubernetes.

Soft Skills
- Strong problem-solving skills and analytical thinking.
- Excellent communication and collaboration skills.
- Ability to work independently and as part of a team.
- Adaptability to evolving technologies and requirements.
- Strong attention to detail and high quality of work.
- Time management and ability to meet deadlines.

Work Experience
- 3-5 years of experience working in AI, Data Science, or a related field.
- Practical experience working with Generative AI, LLM fine-tuning, and RAG models.
- Experience with deployment of AI models in cloud environments.
- Proven track record of delivering AI-driven solutions to solve real business problems.

Good to Have
- Experience with other AI tools and frameworks like OpenAI GPT, DeepPavlov, or similar.
- Exposure to data integration and API development.
- Knowledge of advanced topics in NLP, such as transformers and attention mechanisms.
- Experience with building AI-powered applications or chatbots.
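For illustration only, here is a minimal retrieval-augmented generation (RAG) sketch of the kind of workflow listed above: embed a few documents, retrieve the closest chunks for a question, and assemble a grounded prompt. The documents and the model checkpoint are assumptions used for the example; in practice LangChain or LlamaIndex would wrap these steps, and the final prompt would be sent to an LLM.

```python
# Minimal RAG sketch: embed, retrieve by cosine similarity, build a grounded prompt.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "Our refund policy allows returns within 30 days of delivery.",
    "Premium support is available 24/7 for enterprise customers.",
    "Orders above $50 ship free within India.",
]

embedder = SentenceTransformer("all-MiniLM-L6-v2")          # small public checkpoint
doc_vecs = embedder.encode(docs, normalize_embeddings=True)

question = "How long do customers have to return an item?"
q_vec = embedder.encode([question], normalize_embeddings=True)[0]

# Retrieve the top-k most similar chunks.
scores = doc_vecs @ q_vec
top_k = np.argsort(scores)[::-1][:2]
context = "\n".join(docs[i] for i in top_k)

prompt = (
    "Answer using only the context below.\n\n"
    f"Context:\n{context}\n\nQuestion: {question}"
)
print(prompt)  # this prompt would then be passed to an LLM (Hugging Face model or API)
```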
Compensation & Benefits
- Salary: Competitive base salary based on experience and skills.
- Bonus: Annual performance-based bonus.
- Benefits: Health insurance, paid time off, work-from-home options, and retirement benefits.
- Learning & Development: Access to AI and Data Science training, conferences, and certifications.

Key Performance Indicators (KPIs) & Key Result Areas (KRAs)

KPIs
- Timely delivery of AI projects and solutions.
- Quality and accuracy of fine-tuned AI models.
- Successful integration of AI solutions into business workflows.
- Continuous improvement in AI model performance (accuracy, speed, scalability).
- Stakeholder satisfaction and feedback on AI-driven solutions.
- Contribution to knowledge sharing and team collaboration.

KRAs
- AI model development, fine-tuning, and deployment.
- End-to-end ownership of AI solution delivery.
- Collaboration with cross-functional teams to define and implement business requirements.
- Optimization and monitoring of AI solutions in production environments.

Contact: [HIDDEN TEXT]
Posted 1 day ago
5.0 - 9.0 years
0 Lacs
coimbatore, tamil nadu
On-site
As a NetSuite Analytics Developer & Data Warehousing expert at Mindspark Technologies, your role will involve designing, building, and optimizing NetSuite analytics solutions and enterprise data warehouses. You will be leveraging NetSuite's SuiteAnalytics tools along with external data warehousing platforms such as Oracle Analytics Warehouse, Google Cloud Platform (GCP), and Snowflake to deliver scalable, data-driven insights through advanced visualizations and reporting across the organization. Key Responsibilities: - Design, develop, and maintain SuiteAnalytics reports, saved searches, and dashboards within NetSuite to meet evolving business needs. - Build and optimize data pipelines and ETL processes to integrate NetSuite data into enterprise data warehouses (e.g., Oracle Analytics Warehouse, Snowflake, BigQuery). - Develop data models, schemas, and maintain data marts to support business intelligence and analytical requirements. - Implement advanced visualizations and reports using tools such as Tableau, Power BI, or Looker, ensuring high performance and usability. - Collaborate with business stakeholders to gather requirements and translate them into effective technical solutions. - Monitor, troubleshoot, and optimize data flow and reporting performance. - Ensure data governance, security, and quality standards are upheld across analytics and reporting systems. - Provide documentation, training, and support to end-users on analytics solutions. Qualifications Required: - Bachelor's degree in Computer Science, Information Systems, or related field. - 5+ years of experience working with NetSuite, including SuiteAnalytics (saved searches, datasets, workbooks). - Strong expertise in data warehousing concepts, ETL processes, and data modeling. - Hands-on experience with external data warehouse platforms such as Oracle Analytics Warehouse, GCP (BigQuery), or Snowflake. - Proficient in SQL and performance optimization of complex queries. - Experience with BI and visualization tools like Tableau, Power BI, or Looker. - Understanding of data governance, compliance, and best practices in data security. Please note that the company details were not provided in the job description.,
Posted 2 days ago
8.0 - 12.0 years
0 Lacs
vadodara, gujarat
On-site
As a Data Migration Specialist at Hitachi Energy, you will play a crucial role in developing and executing comprehensive data migration strategies. Your responsibilities will include: - Developing and executing comprehensive data migration strategies, including data mapping, cleansing, and validation. - Analyzing legacy systems to identify migration requirements and challenges. - Designing and implementing ETL processes using SAP BusinessObjects Data Services (BODS). - Optimizing and troubleshooting BODS jobs for performance and reliability. - Providing functional expertise in SAP SD, MM, and PP modules to ensure accurate data alignment. - Driving data quality improvement initiatives to enhance business process efficiency and analytics. - Conducting data validation, reconciliation, and testing to ensure data integrity. - Collaborating with cross-functional teams and stakeholders, ensuring clear communication and documentation. Your qualifications should include: - A Bachelor's degree in Computer Science, IT, or a related field. - 8+ years of experience in SAP data migration, including SAP S/4HANA (preferred). - Proficiency in SAP BusinessObjects Data Services (BODS) and data migration tools. - Strong functional knowledge of SAP SD, MM, and PP modules. - Skills in data analysis, cleansing, and transformation techniques. - Excellent problem-solving, analytical, and communication skills. - Ability to work independently and collaboratively in team environments. - SAP certification and project management experience are a plus. Hitachi Energy is a global technology leader in electrification, powering a sustainable energy future through innovative power grid technologies with digital at the core. Over three billion people depend on our technologies to power their daily lives. With over a century in pioneering mission-critical technologies like high-voltage, transformers, automation, and power electronics, we are addressing the most urgent energy challenge of our time - balancing soaring electricity demand while decarbonizing the power system. Headquartered in Switzerland, we employ over 50,000 people in 60 countries and generate revenues of around $16 billion USD. Apply today to join our team.,
Posted 2 days ago
10.0 - 12.0 years
0 Lacs
india
On-site
Job Description

About the Role
We are seeking a highly experienced Senior Data Analyst with over 10 years of expertise in data analytics, database architecture, and business intelligence. This role demands deep technical proficiency in SQL, Power BI, and SSIS, along with a strong understanding of database schema design and data warehouse architecture. You will play a key role in transforming complex data into actionable insights and scalable solutions that drive business performance.

Key Responsibilities
- Design and maintain robust database schemas to support scalable and efficient data storage and retrieval.
- Develop and optimize data warehouse solutions, including star and snowflake schema models, fact and dimension tables, and ETL pipelines.
- Build and manage Power BI dashboards and reports that deliver clear, actionable insights to stakeholders.
- Design, develop, and maintain SSIS packages for robust ETL processes across multiple data sources.
- Monitor and troubleshoot data integration workflows to ensure data quality and reliability.
- Collaborate with cross-functional teams to define data requirements and translate business needs into technical solutions.
- Lead data modeling efforts and enforce best practices in schema normalization, indexing, and partitioning.
- Automate and orchestrate data workflows using tools like SSIS and scheduled jobs to improve operational efficiency.
- Mentor junior analysts and contribute to the development of data governance and documentation standards.
- Ensure compliance with data security and privacy regulations across all data platforms.

Required Skills & Qualifications
- 10+ years of experience in data analytics, data engineering, or business intelligence roles.
- Expert-level proficiency in SQL, including advanced query optimization, stored procedures, and schema design.
- Strong hands-on experience with Power BI, including DAX, Power Query, and data modeling.
- Proficiency in SSIS for building and managing ETL pipelines.
- Deep understanding of data warehouse concepts, including OLAP vs. OLTP, ETL processes, and data lake vs. data mart vs. warehouse architectures.
- Experience designing and managing star and snowflake schemas, including implementation of surrogate keys, slowly changing dimensions (SCDs), and maintaining data lineage.
- Excellent analytical, problem-solving, and communication skills.

Preferred Qualifications
- Experience with business intelligence tools beyond Power BI.
- Familiarity with data governance frameworks and metadata management.
- Understanding of Agile methodologies and experience in sprint-based delivery environments.
- Exposure to cloud-based data platforms (e.g., Azure Synapse, Snowflake, BigQuery).
- Experience with CI/CD practices for data solutions.
- Knowledge of Python for data analysis and automation is a plus.

Additional Information

About NielsenIQ
NielsenIQ is a global measurement and data analytics company that provides the most complete and trusted view available of consumers and markets worldwide. We provide consumer packaged goods manufacturers/fast-moving consumer goods companies and retailers with accurate, actionable information and insights and a complete picture of the complex and changing marketplace that companies need to innovate and grow. Our approach marries proprietary NielsenIQ data with other data sources to help clients around the world understand what's happening now, what's happening next, and how to best act on this knowledge. We like to be in the middle of the action. That's why you can find us at work in over 90 countries, covering more than 90% of the world's population. For more information, visit www.niq.com. NielsenIQ is committed to hiring and retaining a diverse workforce. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class.

Our Benefits
- Flexible working environment
- Volunteer time off
- LinkedIn Learning
- Employee Assistance Program (EAP)

About NIQ
NIQ is the world's leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights, delivered with advanced analytics through state-of-the-art platforms, NIQ delivers the Full View. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world's population. For more information, visit NIQ.com.

Our commitment to Diversity, Equity, and Inclusion
At NIQ, we are steadfast in our commitment to fostering an inclusive workplace that mirrors the rich diversity of the communities and markets we serve. We believe that embracing a wide range of perspectives drives innovation and excellence. All employment decisions at NIQ are made without regard to race, color, religion, sex (including pregnancy, sexual orientation, or gender identity), national origin, age, disability, genetic information, marital status, veteran status, or any other characteristic protected by applicable laws. We invite individuals who share our dedication to inclusivity and equity to join us in making a meaningful impact. To learn more about our ongoing efforts in diversity and inclusion, please visit the
Posted 2 days ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
As a Managed Services Senior Associate at Huron, you will play a crucial role in delivering exceptional quality service to Oracle Cloud clients. Your expertise in Oracle Technical aspects will be key in addressing client concerns and ensuring customer satisfaction. Here's what your role will entail: **Role Overview:** You will contribute to the delivery of managed services for Oracle Cloud clients by providing support to the managed services team and addressing client concerns. Your strong problem-solving abilities and commitment to customer satisfaction will be essential in delivering exceptional quality service. **Key Responsibilities:** - Proven expertise and hands-on experience on incidents and service requests related to Oracle reporting tools such as BI Publisher, OTBI, and OAC. - Designing complex reports and dashboards with proficiency in data modeling, warehousing, and ETL processes. - Proficiency in querying and manipulating data within Oracle databases using SQL and PL/SQL. - General reporting on HCM domain by fetching data from HCM tables. - Hands-on technical experience in Finance, SCM, PPM modules. - Understanding of Oracle HCM modules, Fast Formulas, HCM extracts, and broader Oracle modules for effective integration and support. - Experience with data modeling, data warehousing, and ETL processes to support reporting and analytics requirements. - Expertise in Oracle Integration Cloud (OIC) & Visual Builder Studio (Redwood pages) for customizing Redwood pages and developing responsive web/mobile apps. **Qualifications Required:** - Bachelor's degree in Information Technology, Business Administration, or a related field. - Minimum of 6 years of experience in Oracle support services or similar roles. - Strong problem-solving and analytical skills. - Excellent communication and interpersonal skills. - Ability to work collaboratively in a team environment. - Strong organizational and time management skills. - Customer-focused mindset with a commitment to delivering high-quality service. - Knowledge of ITIL or other service management frameworks. Join Huron as a Managed Services Senior Associate and leverage your technical expertise to contribute to the growth and profitability of the organization while exceeding client expectations.,
Posted 2 days ago
0.0 - 2.0 years
0 - 1 Lacs
bhubaneswar, odisha, india
On-site
Description We are seeking a motivated SQL Developer to join our team in India. The ideal candidate will have 0-2 years of experience in SQL development and a passion for working with data. You will be responsible for designing and implementing database systems, writing complex queries, and optimizing performance. Responsibilities Designing and implementing database systems using SQL. Writing complex SQL queries to extract and manipulate data. Optimizing SQL queries for performance and efficiency. Collaborating with the development team to integrate databases with applications. Monitoring database performance and troubleshooting issues. Creating and maintaining documentation for database systems. Skills and Qualifications Proficiency in SQL and relational database management systems (RDBMS) like MySQL, PostgreSQL, or SQL Server. Understanding of database design principles and normalization. Familiarity with data modeling and ETL processes. Basic knowledge of programming languages such as Python or Java is a plus. Strong analytical and problem-solving skills. Ability to work collaboratively in a team environment.
Posted 2 days ago
3.0 - 7.0 years
3 - 5 Lacs
remote, india
On-site
- Experience in developing data pipelines that process large volumes of data using Python, PySpark, Pandas, etc., on AWS / Azure
- Experience in developing ETL, OLAP-based, and analytical applications
- Experience in ingesting batch and streaming data from various data sources
- Strong experience in writing complex SQL using any RDBMS (Oracle, PostgreSQL, SQL Server, etc.)
- Ability to quickly learn and develop expertise in existing highly complex applications and architectures
- Exposure to AWS data services (AWS Lambda, Glue, Athena, Redshift, Kinesis, etc.)
- Proficiency in Azure technologies such as Azure Data Factory (ADF), Azure Databricks (ADB), Azure Synapse Analytics, Azure Active Directory, Azure Storage, Azure Data Lake Storage (ADLS), Azure Key Vault, Azure SQL DB, and Azure HDInsight
- Experience with Airflow DAGs, AWS EMR, S3, IAM, and other services
- Experience with Snowflake or Redshift data warehouses
- Experience with DevOps and CI/CD tools
- Familiarity with REST APIs
- Clear and precise communication skills
- Experience with CI/CD pipelines, branching strategies, and Git for code management
- Comfortable working in Agile projects
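A minimal sketch of the kind of batch pipeline described above, assuming a Spark environment with S3 access (for example EMR or Glue); the bucket names and columns are hypothetical.

```python
# Hypothetical batch job: read raw events from object storage, aggregate,
# and write partitioned Parquet for Athena / Redshift Spectrum to query.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily_events").getOrCreate()

# On EMR/Glue the "s3://" scheme works; open-source Spark typically uses "s3a://".
events = spark.read.json("s3://my-raw-bucket/events/2024-06-01/")

daily = (
    events.withColumn("event_date", F.to_date("event_ts"))
          .groupBy("event_date", "event_type")
          .agg(F.count("*").alias("event_count"),
               F.countDistinct("user_id").alias("unique_users"))
)

(daily.write.mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3://my-curated-bucket/daily_event_summary/"))
```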
Posted 3 days ago
3.0 - 7.0 years
3 - 5 Lacs
remote, india
On-site
Key Responsibilities:

Data Pipeline Development
- Design and develop scalable data pipelines using Python, PySpark, and Pandas
- Process and transform large volumes of data on cloud platforms (AWS / Azure)
- Ingest both batch and streaming data from diverse sources

ETL and Analytical Applications
- Build and maintain ETL, OLAP-based, and analytical applications
- Write and optimize complex SQL queries across various RDBMS systems (Oracle, PostgreSQL, SQL Server)

Cloud Services Expertise
- Work with AWS services such as Lambda, Glue, Redshift, Athena, Kinesis, EMR, IAM, and S3
- Use Azure services including ADF, Databricks, Synapse Analytics, Azure SQL DB, ADLS, Azure Storage, and HDInsight
- Manage security and identity with Azure Key Vault and Azure Active Directory

Data Warehousing
- Experience with cloud data warehouses such as Snowflake or Amazon Redshift

DevOps and Automation
- Build and manage CI/CD pipelines and code repositories using Git and branching strategies
- Develop and maintain Airflow DAGs and work with infrastructure-as-code principles
- Automate deployments using modern DevOps tools

Integration and APIs
- Integrate systems and data using REST APIs
- Collaborate with cross-functional teams in Agile environments

Documentation and Communication
- Communicate clearly with technical and non-technical stakeholders
- Document processes, pipelines, and architecture effectively
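As an illustrative sketch of the orchestration piece (Airflow DAGs) mentioned above, assuming Airflow 2.4+; the task bodies are placeholders that would normally trigger Glue/ADF jobs, Databricks notebooks, or PySpark scripts.

```python
# Hypothetical daily ETL DAG: extract -> transform -> load, run once per day.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull batch files from source systems")

def transform():
    print("run PySpark transformations")

def load():
    print("load curated tables into Snowflake/Redshift")

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    t1 = PythonOperator(task_id="extract", python_callable=extract)
    t2 = PythonOperator(task_id="transform", python_callable=transform)
    t3 = PythonOperator(task_id="load", python_callable=load)
    t1 >> t2 >> t3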
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
thiruvananthapuram, kerala
On-site
You will be responsible for the following key responsibilities: - Designing and implementing end-to-end test strategies for data migration across legacy and modern eCommerce platforms - Validating data integrity, transformation logic, and system compatibility - Performing source-to-target data validation using ETL tools, and custom scripts - Collaborating with cross-functional teams to understand business rules and transformation logic - Conducting functional, regression, integration, and automation testing for migrated data - Leading testing efforts for SAP Commerce Cloud and Adobe Experience Manager backend configurations - Developing and maintaining automated test suites using tools like Playwright, Selenium, or Cypress - Supporting UAT and assisting business users in validating migrated data - Documenting test results, issues, and resolutions - Ensuring compliance with data governance, privacy, and security standards - Providing input into automation strategy and CI/CD integration for data validation You should possess the following essential skills and experience: - 5+ years of experience in software testing, with at least 2 years in data migration projects - Proven experience in eCommerce platform migration, especially SAP and Adobe - Strong knowledge of SQL, ETL processes, and data profiling tools - Experience with automation testing frameworks (e.g., Playwright, Selenium, Cypress) - Understanding of data structures, data modeling, and data quality frameworks - Excellent analytical and problem-solving skills - Strong communication and stakeholder management abilities - ISTQB or equivalent testing certification preferred,
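A hedged sketch of one common source-to-target validation step in migration testing: compare row counts and numeric column totals between a legacy and a target database. The connection strings and table name are hypothetical.

```python
# Hypothetical reconciliation check between legacy (Oracle) and target (PostgreSQL).
import pandas as pd
from sqlalchemy import create_engine

legacy = create_engine("oracle+oracledb://user:pass@legacy-host/orcl")
target = create_engine("postgresql+psycopg2://user:pass@target-host/commerce")

def snapshot(engine, table):
    """Return row count and rounded totals of all numeric columns."""
    df = pd.read_sql(f"SELECT * FROM {table}", engine)
    return len(df), df.select_dtypes("number").sum().round(2)

src_rows, src_sums = snapshot(legacy, "customers")
tgt_rows, tgt_sums = snapshot(target, "customers")

assert src_rows == tgt_rows, f"Row count mismatch: {src_rows} vs {tgt_rows}"
assert src_sums.equals(tgt_sums), "Numeric column totals differ after migration"
print("customers table reconciled")
```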
Posted 3 days ago
1.0 - 5.0 years
0 Lacs
karnataka
On-site
Role Overview: You will be responsible for designing, building, and deploying Generative AI models using foundational models like GPT, BERT, LLaMA, PaLM, etc. Additionally, you will develop scalable GenAI applications, integrate them with enterprise systems using APIs and SDKs, and fine-tune large language models (LLMs) for domain-specific use cases. You will also design, implement, and manage end-to-end machine learning pipelines on Microsoft Azure, collaborate with data scientists to productionize ML models, and automate deployment and monitoring of models using CI/CD pipelines and Azure DevOps tools. Furthermore, you will ensure security, compliance, and governance of ML workflows and data, troubleshoot and optimize ML workflows, and document architecture, processes, and operational procedures. Key Responsibilities: - Design, build, and deploy Generative AI models using models like GPT, BERT, LLaMA, PaLM - Develop scalable GenAI applications and integrate with enterprise systems using APIs and SDKs - Fine-tune and optimize large language models (LLMs) for domain-specific use cases - Design, implement, and manage end-to-end machine learning pipelines on Microsoft Azure - Collaborate with data scientists to productionize ML models using best practices in Azure MLOps - Automate the deployment and monitoring of models using CI/CD pipelines and Azure DevOps tools - Implement scalable model training, validation, and deployment workflows in the cloud - Monitor model performance in production, retrain models as needed, and ensure security, compliance, and governance of ML workflows and data - Develop Python scripts and tools to automate tasks, troubleshoot and optimize ML workflows, and document architecture, processes, and operational procedures Qualifications Required: - Hands-on experience with transformer-based models (e.g., GPT, BERT, LLaMA, etc.) - Familiarity with tools like LangChain, LlamaIndex, Haystack, etc. - Experience in prompt engineering, retrieval-augmented generation (RAG), and model fine-tuning - Proven experience in MLOps, specifically with Azure services - Strong programming skills in Python - Experience with Hugging Face Transformers, PyTorch or TensorFlow - REST APIs and/or gRPC for model integration - Experience with Azure Databricks, Azure Machine Learning, Azure OpenAI - Familiarity with ML libraries (scikit-learn, TensorFlow, PyTorch) - Experience building and managing CI/CD pipelines for ML models using Azure DevOps or equivalent tools - Building REST APIs for ML inference using frameworks like FastAPI or Flask - Understanding of containerization technologies like Docker and orchestration using Kubernetes - Knowledge of machine learning lifecycle management, model versioning, and deployment strategies - Experience with data engineering, data pipelines, and ETL processes on Azure - Familiarity with monitoring tools and logging frameworks for production systems - Strong problem-solving skills and ability to work in a collaborative, fast-paced environment,
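As a hedged sketch of the REST inference piece mentioned above, here is a FastAPI service wrapping a Hugging Face pipeline. The model is a small public checkpoint used purely for illustration; a production deployment would be containerized and served behind Azure ML or AKS.

```python
# Minimal inference API: POST a prompt, get a generated completion back.
from fastapi import FastAPI
from pydantic import BaseModel
from transformers import pipeline

app = FastAPI()
generator = pipeline("text-generation", model="gpt2")  # small model for illustration

class Prompt(BaseModel):
    text: str
    max_new_tokens: int = 50

@app.post("/generate")
def generate(req: Prompt):
    out = generator(req.text, max_new_tokens=req.max_new_tokens)
    return {"completion": out[0]["generated_text"]}

# Run locally with:  uvicorn app:app --reload
```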
Posted 3 days ago
5.0 - 9.0 years
0 Lacs
telangana
On-site
As a Data Engineer at our company, you will play a crucial role in managing and optimizing data processes. Your responsibilities will include: - Designing and developing data pipelines using Python programming - Leveraging GCP services such as Dataflow, Dataproc, BigQuery, Cloud Storage, and Cloud Functions - Implementing data warehousing concepts and technologies - Performing data modeling and ETL processes - Ensuring data quality and adhering to data governance principles To excel in this role, you should possess the following qualifications: - Bachelor's degree in Computer Science, Engineering, or a related field - 5-7 years of experience in data engineering - Proficiency in Python programming - Extensive experience with GCP services - Familiarity with data warehousing and ETL processes - Strong understanding of SQL and database technologies - Experience in data quality and governance - Excellent problem-solving and analytical skills - Strong communication and collaboration abilities - Ability to work independently and in a team environment - Familiarity with version control systems like Git If you are looking to join a dynamic team and work on cutting-edge data projects, this position is perfect for you.,
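For illustration, a minimal sketch of a GCP pipeline step of the kind described: load a file from Cloud Storage into BigQuery and run a transformation query. Project, bucket, and dataset names are hypothetical, and application-default credentials are assumed.

```python
# Hypothetical GCS -> BigQuery load followed by a SQL transformation.
from google.cloud import bigquery

client = bigquery.Client(project="my-analytics-project")

# Load a raw CSV landed in Cloud Storage into a staging table.
load_job = client.load_table_from_uri(
    "gs://my-raw-bucket/sales/2024-06-01.csv",
    "my-analytics-project.staging.sales_raw",
    job_config=bigquery.LoadJobConfig(
        source_format="CSV", skip_leading_rows=1, autodetect=True
    ),
)
load_job.result()  # wait for completion

# Transform into a curated table with SQL.
client.query("""
    CREATE OR REPLACE TABLE warehouse.daily_sales AS
    SELECT DATE(order_ts) AS order_date, SUM(amount) AS revenue
    FROM staging.sales_raw
    GROUP BY order_date
""").result()
```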
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
Role Overview: Join EY as a Senior Master Data Management professional and become a part of a dynamic team that is dedicated to building a better working world. By leveraging your expertise in Master Data Management (MDM) and Extract Transform Load (ETL) processes, you will play a crucial role in delivering exceptional value to EY clients in the APS (Advanced Planning Solutions) implementation role for supply chain projects. Key Responsibilities: - Work as a solution consultant on APS implementation projects, contributing to high-performing engagement teams to deliver value for EY clients - Lead the master data workstream to ensure the maintenance of quality master data in source systems - Derive master data quality rules by understanding the functional side of planning processes for rule-based master data validation - Deliver high-quality work within expected timeframes and budget, proactively identify risks, and keep stakeholders informed about progress and expected outcomes - Build relationships internally and with client personnel to deliver quality client services and drive deep business insights - Continually evaluate EY's service lines and capabilities to incorporate them for additional client value Qualifications Required: - Bachelor's degree (B. Tech., BCA etc) and/or master's degree (MBA) from a reputed college - 4-8 years of experience in data-related disciplines, preferably in a Supply Chain environment - Strong techno-functional experiences with SAP ECC and associated data structures - Hands-on experience with an MDM tool like Rulex, Informatica, SAP MDG, Collibra etc. - Experience in handling large data sets using tools like MS-Excel, MySQL etc. - Experience of working with cross-functional teams will be an add-on - Exposure to industries such as Consumer Products, Life Sciences, Hitech and Electronics, Industrial Products, process Industries like Chemicals, Oil & Gas Additional Company Details (if present): At EY, you will have the opportunity to build a career tailored to your uniqueness, with global support, an inclusive culture, and cutting-edge technology to help you become the best version of yourself. Your voice and perspective are valued to contribute towards making EY even better. Join EY to create an exceptional experience for yourself and contribute to building a better working world for all. Note: The job description also emphasizes EY's mission to build a better working world, create long-term value for clients, people, and society, and build trust in the capital markets. EY's diverse teams in over 150 countries provide trust through assurance and help clients grow and transform using data and technology across various service lines.,
Posted 3 days ago
4.0 - 8.0 years
0 Lacs
maharashtra
On-site
This role requires over 5 years of experience in development and support activities on live projects, with a strong background in the following mandatory skills:
- Cognos TM1 (Planning Analytics)
- TM1 Rules and Feeders
- ETL Processes
- Performance Optimization
- Data Integration

Experience with the following additional skills would be beneficial:
- Cognos Analytics
- SQL Queries

In this role, your responsibilities will include:
- Building, maintaining, and enhancing financial systems (Planning Analytics) for budgeting, forecasting, and actuals MIS reporting.
- Developing objects in TM1/Planning Analytics such as cubes, rules, dimensions, MDX expressions, TI processes, and web sheets.
- Creating and administering security at all levels, including client groups, cubes, dimensions, and processes.
- Analyzing and resolving data and technical issues raised by users.
- Demonstrating strong knowledge of business processes (specifically the manufacturing domain) and data warehousing concepts.
- Utilizing experience with Planning Analytics Workspace and Planning Analytics for Excel.

These responsibilities require a high level of expertise and a proactive approach to problem-solving.
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
ahmedabad, gujarat
On-site
As an entrepreneurial, passionate, and driven Data Engineer at Startup Gala Intelligence backed by Navneet Tech Venture, you will play a crucial role in shaping the technology vision, architecture, and engineering culture of the company right from the beginning. Your contributions will be foundational in developing best practices and establishing the engineering team. **Key Responsibilities:** - **Web Scraping & Crawling:** Build and maintain automated scrapers to extract structured and unstructured data from websites, APIs, and public datasets. - **Scalable Scraping Systems:** Develop multi-threaded, distributed crawlers capable of handling high-volume data collection without interruptions. - **Data Parsing & Cleaning:** Normalize scraped data, remove noise, and ensure consistency before passing to data pipelines. - **Anti-bot & Evasion Tactics:** Implement proxy rotation, captcha solving, and request throttling techniques to handle scraping restrictions. - **Integration with Pipelines:** Deliver clean, structured datasets into NoSQL stores and ETL pipelines for further enrichment and graph-based storage. - **Data Quality & Validation:** Ensure data accuracy, deduplicate records, and maintain a trust scoring system for data confidence. - **Documentation & Maintenance:** Keep scrapers updated when websites change, and document scraping logic for reproducibility. **Qualifications Required:** - 2+ years of experience in web scraping, crawling, or data collection. - Strong proficiency in Python (libraries like BeautifulSoup, Scrapy, Selenium, Playwright, Requests). - Familiarity with NoSQL databases (MongoDB, DynamoDB) and data serialization formats (JSON, CSV, Parquet). - Experience in handling large-scale scraping with proxy management and rate-limiting. - Basic knowledge of ETL processes and integration with data pipelines. - Exposure to graph databases (Neo4j) is a plus. As part of Gala Intelligence, you will be working in a tech-driven startup dedicated to solving fraud detection and prevention challenges. The company values transparency, collaboration, and individual ownership, creating an environment where talented individuals can thrive and contribute to impactful solutions. If you are someone who enjoys early-stage challenges, thrives on owning the entire tech stack, and is passionate about building innovative, scalable solutions, we encourage you to apply. Join us in leveraging technology to combat fraud and make a meaningful impact from day one.,
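A minimal sketch of a polite scraper along the lines described above: throttled requests with simple backoff and BeautifulSoup parsing. The URL, CSS selectors, and user agent are hypothetical; proxy rotation would plug into the requests `proxies` argument.

```python
# Hypothetical scraper: fetch with retries/throttling, parse cards, emit records.
import time
import requests
from bs4 import BeautifulSoup

HEADERS = {"User-Agent": "Mozilla/5.0 (compatible; ExampleBot/0.1)"}

def fetch(url, retries=3, delay=2.0):
    for attempt in range(retries):
        resp = requests.get(url, headers=HEADERS, timeout=10)
        if resp.status_code == 200:
            return resp.text
        time.sleep(delay * (attempt + 1))  # simple backoff / throttling
    raise RuntimeError(f"Failed to fetch {url}")

html = fetch("https://example.com/companies?page=1")
soup = BeautifulSoup(html, "html.parser")

records = [
    {"name": card.select_one("h2").get_text(strip=True),
     "website": card.select_one("a")["href"]}
    for card in soup.select("div.company-card")
]
print(records)  # next step: dedupe, validate, and push into the ETL pipeline
```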
Posted 3 days ago
1.0 - 5.0 years
0 Lacs
ahmedabad, gujarat
On-site
As a Database Engineer with 1 to 3 years of experience, your role will involve designing, optimizing, and managing databases to support AI-driven applications. You will work closely with AI/Engineering teams to ensure the scalability, security, and performance of databases while maintaining data integrity. Key Responsibilities: - Design, develop, and maintain scalable database architectures using SQL & NoSQL technologies. - Optimize database queries, indexing, and performance tuning for efficient operations. - Ensure data security, compliance, backup, and disaster recovery procedures are in place. - Build and manage ETL pipelines to support AI/ML and analytics workflows. - Collaborate with AI engineers on vector databases and RAG pipelines integration. - Integrate databases with APIs, CRMs, and external systems for seamless data flow. - Monitor database health, troubleshoot issues, and enhance reliability for smooth operations. Required Skills & Experience: - Proficiency in SQL databases such as PostgreSQL, MySQL, MSSQL, etc. - Experience with NoSQL databases like MongoDB, Cassandra, DynamoDB, etc. - Knowledge of vector databases (Pinecone, Weaviate, Chroma, FAISS) would be advantageous. - Expertise in database optimization, indexing, and partitioning techniques. - Understanding of ETL processes, data pipelines, and data modeling. - Familiarity with cloud database services like AWS RDS, GCP Cloud SQL, Azure SQL. - Ability to work with backup, replication, and high availability strategies. - Strong Python/SQL scripting skills for automation tasks. Nice-to-Have: - Exposure to data warehouses such as Snowflake, BigQuery, Redshift. - Knowledge of streaming data systems like Kafka, Spark. - Hands-on experience with AI/ML data pipelines. - Familiarity with DevOps tools for databases (Terraform, Kubernetes, CI/CD for DB). - Previous involvement with enterprise-scale databases handling high-volume data. What We Offer: - Competitive salary & performance-based incentives. - Opportunity to work on cutting-edge AI-driven data ecosystems. - Collaborative environment with AI and Data Engineering teams. - Flexibility in remote work and project ownership. Location: On-site (Ahmedabad, Gujarat) Employment Type: Full-time,
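As a rough sketch of the query-tuning loop implied by the optimization and indexing responsibilities above, assuming a reachable PostgreSQL instance; the table, column, and connection string are hypothetical.

```python
# Compare the query plan before and after adding an index (PostgreSQL).
from sqlalchemy import create_engine, text

engine = create_engine("postgresql+psycopg2://user:pass@localhost/appdb")

with engine.begin() as conn:
    plan_before = conn.execute(
        text("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = :cid"),
        {"cid": 42},
    ).fetchall()

    conn.execute(text(
        "CREATE INDEX IF NOT EXISTS idx_orders_customer ON orders (customer_id)"
    ))

    plan_after = conn.execute(
        text("EXPLAIN ANALYZE SELECT * FROM orders WHERE customer_id = :cid"),
        {"cid": 42},
    ).fetchall()

# The last plan line reports execution time, which should drop after indexing.
print("before:", plan_before[-1][0])
print("after: ", plan_after[-1][0])
```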
Posted 3 days ago
2.0 - 6.0 years
0 Lacs
jaipur, rajasthan
On-site
As a Data Engineer with Fabric, you will be responsible for designing, developing, and maintaining data pipelines and infrastructure to ensure accurate, timely, and accessible data for driving data-driven decision-making and supporting company growth. Key Responsibilities: - Design, develop, and implement data pipelines using Azure Data Factory and Databricks for ingestion, transformation, and movement of data. - Develop and optimize ETL processes to ensure efficient data flow and transformation. - Maintain Azure Data Lake solutions for efficient storage and retrieval of large datasets. - Build and manage scalable data warehousing solutions using Azure Synapse Analytics for advanced analytics and reporting. - Integrate various data sources into MS-Fabric to ensure data consistency, quality, and accessibility. - Optimize data processing workflows and storage solutions to improve performance and reduce costs. - Manage and optimize SQL and NoSQL databases to support high-performance queries and data storage requirements. - Implement data quality checks and monitoring to ensure accuracy and consistency of data. - Collaborate with data scientists, analysts, and stakeholders to understand data requirements and deliver actionable insights. - Create and maintain comprehensive documentation for data processes, pipelines, infrastructure, architecture, and best practices. - Identify and resolve issues in data pipelines, data lakes, and warehousing solutions, providing timely support and maintenance. Qualifications: - Experience: 2-4 years of experience in data engineering or a related field. - Technical Skills: - Proficiency with Azure Data Factory, Azure Synapse Analytics, Databricks, and Azure Data Lake. - Experience with Microsoft Fabric is a plus. - Strong SQL skills and experience with data warehousing concepts (DWH). - Knowledge of data modeling, ETL processes, and data integration. - Hands-on experience with ETL tools and frameworks (e.g., Apache Airflow, Talend). - Knowledge of big data technologies (e.g., Hadoop, Spark) is a plus. - Familiarity with cloud platforms (e.g., AWS, Azure, Google Cloud) and associated data services (e.g., S3, Redshift, BigQuery). - Familiarity with data visualization tools (e.g., Power BI) and experience with programming languages such as Python, Java, or Scala. - Experience with schema design and dimensional data modeling. - Analytical Skills: Strong problem-solving abilities and attention to detail. - Communication: Excellent verbal and written communication skills, with the ability to explain technical concepts to non-technical stakeholders. - Education: Bachelor's degree in computer science, Engineering, Mathematics, or a related field. Advanced degrees or certifications are a plus. Interested candidates can share their CV at sulabh.tailang@celebaltech.com.,
Posted 3 days ago
3.0 - 7.0 years
0 Lacs
kolkata, west bengal
On-site
You will be responsible for utilizing advanced analytics, machine learning, and statistical modeling techniques to help the retail business make data-driven decisions. You will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. **Key Responsibilities:** - Utilize your deep understanding of the retail industry to design AI solutions that address critical retail business needs. - Gather and clean data from various retail sources like sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns. - Apply machine learning algorithms like classification, clustering, regression, and deep learning to enhance predictive models. - Use AI-driven techniques for personalization, demand forecasting, and fraud detection. - Utilize advanced statistical methods to optimize existing use cases and build new products. - Stay updated on the latest trends in data science and retail technology. - Collaborate with executives, product managers, and marketing teams to translate insights into business actions. **Qualifications Required:** - Strong analytical and statistical skills. - Expertise in machine learning and AI. - Experience with retail-specific datasets and KPIs. - Proficiency in data visualization and reporting tools. - Ability to work with large datasets and complex data structures. - Strong communication skills to interact with both technical and non-technical stakeholders. - A solid understanding of the retail business and consumer behavior. - Programming Languages: Python, R, SQL, Scala - Data Analysis Tools: Pandas, NumPy, Scikit-learn, TensorFlow, Keras - Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn - Big Data Technologies: Hadoop, Spark, AWS, Google Cloud - Databases: SQL, NoSQL (MongoDB, Cassandra) You will need a minimum of 3 years of experience and a Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field to qualify for this position.,
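For illustration, a minimal scikit-learn sketch of a typical retail task mentioned above (demand forecasting), run on synthetic data so it is self-contained.

```python
# Toy demand-forecasting example: price and promotions drive units sold.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

rng = np.random.default_rng(0)
df = pd.DataFrame({
    "price": rng.uniform(5, 50, 1000),
    "promo": rng.integers(0, 2, 1000),
    "day_of_week": rng.integers(0, 7, 1000),
})
# Synthetic demand: cheaper items and promotions sell more.
df["units_sold"] = (100 - df["price"] + 20 * df["promo"] + rng.normal(0, 5, 1000)).clip(0)

X_train, X_test, y_train, y_test = train_test_split(
    df[["price", "promo", "day_of_week"]], df["units_sold"], random_state=0
)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print("MAE:", mean_absolute_error(y_test, model.predict(X_test)))
```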
Posted 3 days ago
0.0 years
0 Lacs
bengaluru, karnataka, india
On-site
Ready to build the future with AI?

At Genpact, we don't just keep up with technology, we set the pace. AI and digital innovation are redefining industries, and we're leading the charge. Genpact's AI Gigafactory, our industry-first accelerator, is an example of how we're scaling advanced technology solutions to help global enterprises work smarter, grow faster, and transform at scale. From large-scale models to agentic AI, our breakthrough solutions tackle companies' most complex challenges. If you thrive in a fast-moving, innovation-driven environment, love building and deploying cutting-edge AI solutions, and want to push the boundaries of what's possible, this is your moment.

Genpact (NYSE: G) is an advanced technology services and solutions company that delivers lasting value for leading enterprises globally. Through our deep business knowledge, operational excellence, and cutting-edge solutions, we help companies across industries get ahead and stay ahead. Powered by curiosity, courage, and innovation, our teams implement data, technology, and AI to create tomorrow, today. Get to know us at genpact.com and on LinkedIn, X, YouTube, and Facebook.

Inviting applications for the role of Manager - Enterprise Data Office. In this role, you will lead the design, development, and implementation of complex data solutions using Azure services like Azure Data Factory, MS Fabric, DataStage, and others.

Responsibilities
- Strong skills in big data technologies and languages such as Python, Spark, and PySpark.
- Proficiency in SQL and database technologies, with hands-on experience in data modeling and database design.
- Architect and build scalable, secure, and performant data pipelines for data ingestion, transformation, and delivery.
- Ensure data quality and consistency throughout the data lifecycle.
- Implement data governance practices and ensure compliance with data security and privacy regulations.
- Oversee the design and development of data models, ETL processes, and data pipelines to support BI and analytics requirements.
- Partner with IT and other cross-functional teams to ensure the successful integration and deployment of BI solutions across the organization.
- Serve as a subject matter expert on BI-related topics, providing guidance and support to internal stakeholders as needed.
- Design and develop DataStage ETL workflows and datasets in any ETL tool to be used by BI reporting tools like Power BI, Tableau, etc.

Minimum qualifications
- Candidates from all branches of M.Tech / B.E. / B.Tech / graduation are eligible.
- Candidates must be Indian citizens.
- Deep expertise in modern data management (Enterprise Information Management, data warehousing, data lakes, lakehouses, cloud data platforms, data modeling, ETL design, databases, etc.).
- Prior experience with databases (DB2), ETL tools (DataStage), and the Azure cloud environment with Azure SQL DB is a must.
- Knowledge of and experience with reporting tools such as Power BI, Tableau, etc.
- Excellent written and oral communication skills and the ability to express complex technical concepts effectively.
- Ability to work in a team-based environment as well as independently.
- Ability to work effectively with people of many different disciplines with varying degrees of technical experience.

Why join Genpact?
- Lead AI-first transformation: build and scale AI solutions that redefine industries.
- Make an impact: drive change for global enterprises and solve business challenges that matter.
- Accelerate your career: gain hands-on experience, world-class training, mentorship, and AI certifications to advance your skills.
- Grow with the best: learn from top engineers, data scientists, and AI experts in a dynamic, fast-moving workplace.
- Committed to ethical AI: work in an environment where governance, transparency, and security are at the core of everything we build.
- Thrive in a values-driven culture: our courage, curiosity, and incisiveness, built on a foundation of integrity and inclusion, allow your ideas to fuel progress.

Come join the 140,000+ coders, tech shapers, and growth makers at Genpact and take your career in the only direction that matters: up. Let's build tomorrow together.

Genpact is an Equal Opportunity Employer and considers applicants for all positions without regard to race, color, religion or belief, sex, age, national origin, citizenship status, marital status, military/veteran status, genetic information, sexual orientation, gender identity, physical or mental disability, or any other characteristic protected by applicable laws. Genpact is committed to creating a dynamic work environment that values respect and integrity, customer focus, and innovation.

Furthermore, please note that Genpact does not charge fees to process job applications, and applicants are not required to pay to participate in our hiring process in any other way. Examples of such scams include purchasing a 'starter kit', paying to apply, or purchasing equipment or training.
Posted 3 days ago
7.0 - 12.0 years
27 - 30 Lacs
bengaluru
Work from Office
AZURE DATABRICKS (7-17 YEARS), IMMEDIATE TO 60 DAYS JOINER, BANGALORE LOCATION

Hiring for one of the Big 4; immediate to 60 days joiners.
- Databricks AM Level (7-10 years)
- Databricks Manager Level (10-17 years)

The role involves designing and developing scalable data pipelines and ETL processes using Azure Databricks and PySpark, optimizing data workflows, ensuring data quality, collaborating with data scientists and other stakeholders, and integrating machine learning models. Key requirements include hands-on experience with Azure Databricks, PySpark, SQL, and Azure services like Azure Data Factory, as well as strong knowledge of data warehousing and data modeling concepts.

Qualification: B.Tech, M.Tech, MCA, M.Sc, B.Sc IT, BCA
Posted 3 days ago