5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
As a Functional Consultant – Dynamics CRM, you will:
Interact with clients to gather business requirements and understand their needs thoroughly.
Use appropriate techniques to document and assess requirements to ensure they are clearly understood.
Work closely with technical leads to analyze business requirements and suggest relevant technical solutions.
Participate in design reviews, ensuring the application design aligns with quality standards.
Write code and configure solutions per the client's specifications, adhering to best coding practices.
Develop and extend Dynamics CRM functionality through Power Platform tools (Power Apps, Power Automate, Power BI).
Customize and configure Dynamics 365 modules (e.g., Sales, Customer Service, Marketing).
Ensure all code developed follows established coding standards (organizational or client-specific).
Complete tasks within the defined schedules, maintaining version control and defect management.
Contribute to the creation of project documentation as per the QMS (Quality Management System).
Guide junior team members and provide them with the support needed to complete their tasks.
Mentor colleagues and conduct knowledge-sharing sessions such as workshops and presentations to enhance team skills.
Facilitate workshops to train end users and business stakeholders on CRM system usage.
Assist with UAT and resolve any issues that arise during testing.
Adhere to organizational quality and operational processes, and suggest improvements to current processes.
Ensure full compliance with defined standards and policies.
Collaborate with cross-functional teams to integrate Dynamics CRM with other platforms such as SharePoint, Power BI, and Power Automate.
Skills Required:
5+ years of relevant experience in functional consulting and technical delivery, with a strong background in Microsoft Dynamics CRM.
Strong expertise with Dynamics 365 CRM (Customer Engagement) modules such as Sales, Customer Service, Field Service, and Marketing.
Proficiency with Power Platform tools such as Power Apps (Canvas/Model-Driven), Power Automate, and Power BI for CRM customizations.
Experience with Dynamics CRM customization, creating workflows, configuring business rules, and developing plugins.
Familiarity with Azure cloud services, Dataverse, and security models to integrate and extend Dynamics 365 functionality.
Expertise in designing scalable and efficient business process flows within Dynamics 365 CRM.
Experience conducting stakeholder interviews, gathering requirements, and performing gap analysis to ensure business objectives are met.
Good to have:
Proficiency in JavaScript, .NET (C#), Web APIs, and Power Platform scripting.
Knowledge of data migration tools such as KingswaySoft, SSIS, or other ETL tools is an added advantage.
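For candidates weighing the "Web APIs" requirement, a minimal, hypothetical sketch of querying Dynamics 365 (Dataverse) from Python follows; the organization URL and bearer token are placeholders, not details from this posting.

```python
# Hypothetical sketch: read accounts from the Dataverse OData Web API.
# ORG_URL and TOKEN are placeholders; a real app would obtain the token
# via Azure AD (e.g. MSAL) for the target environment.
import requests

ORG_URL = "https://yourorg.api.crm.dynamics.com"  # placeholder environment
TOKEN = "<azure-ad-bearer-token>"                 # placeholder credential

headers = {
    "Authorization": f"Bearer {TOKEN}",
    "OData-MaxVersion": "4.0",
    "OData-Version": "4.0",
    "Accept": "application/json",
}

# Fetch the names of the first five account records.
resp = requests.get(
    f"{ORG_URL}/api/data/v9.2/accounts",
    params={"$select": "name", "$top": "5"},
    headers=headers,
    timeout=30,
)
resp.raise_for_status()
for account in resp.json()["value"]:
    print(account["name"])
```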
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Specialty Development Senior 34263
Location: Chennai
Employment Type: Full-Time (Hybrid)
Job Overview
We are looking for an experienced GCP Data Engineer to join a global data engineering team responsible for building a sophisticated data warehouse and analytics platform on Google Cloud Platform (GCP). This role is ideal for professionals with a strong background in data engineering, cloud migration, and large-scale data transformation, particularly within cloud-native environments.
Key Responsibilities
Design, build, and optimize data pipelines on GCP to support large-scale data transformations and analytics.
Lead the migration and modernization of legacy systems to cloud-based architecture.
Collaborate with cross-functional global teams to support data-driven applications and enterprise analytics solutions.
Work with large datasets to enable platform capabilities and business insights using GCP tools.
Ensure data quality, integrity, and performance across the end-to-end data lifecycle.
Apply agile development principles to rapidly deliver and iterate on data solutions.
Promote engineering best practices in CI/CD, DevSecOps, and cloud deployment strategies.
Must-Have Skills
GCP Services: BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow
Programming & Data Engineering: 5+ years in data engineering and SQL development; experience in building data warehouses and ETL processes
Cloud Experience: minimum 3 years in cloud environments (preferably GCP), implementing production-scale data solutions
Strong understanding of data processing architectures (batch/real-time) and tools such as Terraform, Cloud Build, and Airflow
Experience with containerized microservices architecture
Excellent problem-solving skills and ability to optimize complex data pipelines
Strong interpersonal and communication skills with the ability to work effectively in a globally distributed team
Proven ability to work independently in high-ambiguity scenarios and drive solutions proactively
Preferred Skills
GCP Certification (e.g., Professional Data Engineer)
Experience in regulated or financial domains
Migration experience from Teradata to GCP
Programming experience with Python, Java, Apache Beam
Familiarity with data governance, security, and compliance in cloud environments
Experience coaching and mentoring junior data engineers
Knowledge of software architecture, CI/CD, source control (Git), and secure coding standards
Exposure to Java full-stack development (Spring Boot, Microservices, React)
Agile development experience including pair programming, TDD, and DevSecOps
Proficiency in test automation tools like Selenium, Cucumber, REST Assured
Familiarity with other cloud platforms like AWS or Azure is a plus
Education
Bachelor's Degree in Computer Science, Information Technology, or a related field (mandatory)
Skills: Python, GCP certification, microservices architecture, Terraform, Airflow, data processing architectures, test automation tools, SQL development, cloud environments, agile development, CI/CD, GCP services (BigQuery, Dataflow, Dataproc, Data Fusion, Cloud Composer, Cloud Functions, Cloud SQL, Cloud Spanner, Cloud Storage, Bigtable, Pub/Sub, App Engine, Compute Engine, Airflow), Apache Beam, Git, communication, problem-solving, data engineering, analytics, data, data governance, ETL processes, GCP, Cloud Build, Java
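As a flavour of the pipeline work this role describes, here is a minimal sketch of a batch Dataflow job in Python with Apache Beam; the bucket, project, table, and schema names are illustrative assumptions, not details from the posting.

```python
# Minimal Apache Beam sketch: read CSV lines from Cloud Storage, parse them,
# and append the rows to a BigQuery table. All resource names are placeholders.
import apache_beam as beam
from apache_beam.options.pipeline_options import PipelineOptions


def parse_line(line: str) -> dict:
    user_id, amount = line.split(",")
    return {"user_id": user_id, "amount": float(amount)}


options = PipelineOptions(
    runner="DataflowRunner",          # use "DirectRunner" to test locally
    project="my-project",
    region="us-central1",
    temp_location="gs://my-bucket/tmp",
)

with beam.Pipeline(options=options) as p:
    (
        p
        | "Read" >> beam.io.ReadFromText("gs://my-bucket/input/*.csv")
        | "Parse" >> beam.Map(parse_line)
        | "Load" >> beam.io.WriteToBigQuery(
            "my-project:analytics.transactions",
            schema="user_id:STRING,amount:FLOAT",
            write_disposition=beam.io.BigQueryDisposition.WRITE_APPEND,
        )
    )
```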
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Job Title: Azure Data Engineer
Experience: 2-5 Years
About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We’re looking for candidates with a strong technology and data understanding in the big data engineering space and proven delivery capability.
Your Key Responsibilities
Develop and deploy Azure Databricks in a cloud environment using Azure Cloud services
ETL design, development, and deployment to cloud services
Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution
Skills and Attributes for Success
3 to 5 years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
Hands-on experience with Spark
Hands-on programming experience in Python/Scala
Well versed in DevOps and CI/CD deployments
Must have hands-on experience in SQL and procedural SQL languages
Strong analytical skills and enjoys solving complex technical problems
To qualify for the role, you must have
Working experience in an Agile-based delivery methodology (preferable)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Strong analytical skills and enjoys solving complex technical problems
Excellent debugging and optimization skills
Experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions considering security, performance, scalability, etc.
Excellent communicator (written and verbal, formal and informal)
Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Client management skills
Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.
EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
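To make the Databricks responsibilities listed above concrete, here is a hedged sketch of a typical ingestion step; the storage account, container, and table names are assumptions, and `spark` is the session the Databricks runtime provides.

```python
# Sketch: read raw CSVs from ADLS Gen2, deduplicate, stamp an ingest date,
# and persist the result as a managed Delta table. Paths and names are
# placeholders; `spark` is supplied by the Databricks runtime.
from pyspark.sql import functions as F

raw = (
    spark.read
    .option("header", True)
    .option("inferSchema", True)
    .csv("abfss://raw@mystorageacct.dfs.core.windows.net/sales/")
)

cleaned = (
    raw
    .dropDuplicates(["order_id"])                 # assumed key column
    .withColumn("ingest_date", F.current_date())
)

(
    cleaned.write
    .format("delta")
    .mode("overwrite")
    .saveAsTable("bronze.sales_orders")
)
```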
Posted 1 week ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Hiring for TOP MNC
2+ Years of Relevant Experience Mandatory
Max 60 Days
Documents Mandatory: PF, Education, Employment Docs
Quality Assurance & Test Automation
Focuses on testing frameworks, automation, and QA engineering:
1. QA Automation Engineer: Selenium, Java and SQL; Automation Testing with Java, Selenium, BDD, Cucumber. CHN, PUN. 7 positions
2. ETL Test Engineer: ETL Testing + Python, ETL DB Testing, ETL Automation. CHN, BLR, PUN. 15 positions
Interested candidates can share resumes to Mercy.b@liveconnectios.in / WhatsApp 7386771110
Posted 1 week ago
8.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Position: Data Warehouse Analyst
Location: Navi Mumbai, Ghansoli
Total Experience: 8 years
Domain/Vertical: Insurance and Financial Services
Qualifications
Educational: MSc/MCS/BE/B.Sc in Computer Science / Math / IT
Certification: certifications in BI tools or cloud data platforms (desirable)
Technical Mandatory Skills:
• 5 years of experience in data analysis.
• 5 years’ experience in data modelling in a data warehouse.
• Strong understanding of data warehousing concepts, ETL processes, and data modeling techniques.
• Proficiency in SQL and experience working with relational databases.
• BI visualization tools (e.g., Tableau, Power BI, Looker).
• Cloud-based data warehousing solutions (e.g., Snowflake, BigQuery, Redshift).
• Scripting languages (e.g., Python, R).
Please contact: Saanvi Gandhi, saanvi@hrpc.in | 77108 44668 (WhatsApp)
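To illustrate the dimensional-modelling skill the posting emphasizes, here is a toy, self-contained star-schema query using sqlite3; real warehouses such as Snowflake, BigQuery, or Redshift follow the same fact/dimension join pattern, and all tables and values here are invented.

```python
# Toy star schema: a fact table joined to a date dimension, then aggregated.
# sqlite3 keeps the sketch runnable; the tables and values are invented.
import sqlite3

con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE dim_date (date_key INTEGER PRIMARY KEY, year INTEGER, month INTEGER);
CREATE TABLE fact_claims (claim_id INTEGER, date_key INTEGER, amount REAL);
INSERT INTO dim_date VALUES (20240115, 2024, 1), (20240220, 2024, 2);
INSERT INTO fact_claims VALUES (1, 20240115, 500.0), (2, 20240220, 750.0);
""")

# A typical analytical query: aggregate facts by dimension attributes.
for year, month, total in con.execute("""
    SELECT d.year, d.month, SUM(f.amount)
    FROM fact_claims f
    JOIN dim_date d ON f.date_key = d.date_key
    GROUP BY d.year, d.month
"""):
    print(year, month, total)
```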
Posted 1 week ago
10.0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Job Title: SAP BW4HANA + Azure Lead
Job Grade (refer to JE): Senior Manager 1
Function: IT
Sub-function: SAP BW4HANA, Data Cloud
Manager’s Job Label: G7
Location: Sun House, Mumbai
No. of Direct Reports (if any): Approx. 4
Business Unit: IT
Areas of Responsibility
At Sun Pharma, we commit to helping you “Create your own sunshine” by fostering an environment where you grow at every step, take charge of your journey and thrive in a supportive community. Are You Ready to Create Your Own Sunshine? As you enter the Sun Pharma world, you’ll find yourself becoming ‘Better every day’ through continuous progress. Exhibit self-drive as you ‘Take charge’ and lead with confidence. Additionally, demonstrate a collaborative spirit, knowing that we ‘Thrive together’ and support each other’s journeys.
Job Summary
We are looking for a candidate with 10+ years of experience in SAP BW/BW4HANA. We are searching for a skilled lead data engineer and lead SAP BW/BW4HANA consultant to join and lead our dynamic team. The role involves working closely with business stakeholders to understand business requirements, translating them into technical specifications, and ensuring successful deployment. The candidate will drive the data engineering and SAP BW4HANA initiatives, follow best practices, and design the Azure cloud landscape.
Responsibilities
Major experience in end-to-end implementation of the SAP data warehouse platform
Major experience in end-to-end implementation of the Azure data and analytics platform
End-to-end setup of the BW4HANA landscape
Hands-on experience in BW application areas such as SD, MM, PP, VC, PM, and FICO
Hands-on experience in newer technologies such as HANA and SQL
Strong knowledge of Azure platform development alongside the SAP data warehouse
Knowledge of analytical platforms is good to have
In-depth knowledge of InfoProviders such as CompositeProviders (CP), ADSOs, and Open ODS views
Knowledge of ETL from SAP transactional systems
Hands-on experience with BW ABAP/AMDP scripts used in routines/transformations or customer exits
Resolving issues in process chains and user reports
Developing queries in BW4HANA and analytics using Analysis for Office
Knowledge of BO report development is good to have
Preparation of technical documents
Monitor system performance and make adjustments as needed
Travel Estimate: as per project need
Job Scope
Internal Interactions (within the organization): with business project stakeholders
External Interactions (outside the organization): with SMEs, CoEs, and project teams
Geographical Scope:
Financial Accountability (cost/revenue with exclusive authority):
Job Requirements
Educational Qualification: BSc IT, BSc CS, BE
Specific Certification: good to have - SAP BW4HANA, Azure
Experience: minimum 10 years
Skills (Functional & Behavioural): good communication skills, analytical ability
Your Success Matters to Us
At Sun Pharma, your success and well-being are our top priorities! We provide robust benefits and opportunities to foster personal and professional growth. Join us at Sun Pharma, where every day is an opportunity to grow, collaborate, and make a lasting impact. Let’s create a brighter future together!
Disclaimer: The preceding job description has been designed to indicate the general nature and level of work performed by employees within this classification. It is not designed to contain or be interpreted as a comprehensive inventory of all duties, responsibilities, and qualifications required of employees as assigned to this job.
Nothing herein shall preclude the employer from changing these duties from time to time and assigning comparable duties or other duties commensurate with the experience and background of the incumbent(s).
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Opening: Azure Data Factory Developer
Location: Noida (3 days work from office)
Shift Timing: 2:00 PM – 10:30 PM IST
Experience: 5–10 years
Interview Process:
1st Round: Virtual
2nd Round: Face-to-Face
Key Skills Required:
Strong hands-on experience with Azure Data Factory (ADF)
Proficiency in SQL Server development and optimization
Experience in building and managing data pipelines in cloud environments
Ability to troubleshoot and optimize ETL processes
Familiarity with Azure services and cloud data architecture is a plus
Please share your resume at chakravarthy@moxieit.com
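For a sense of the ADF work involved, a hedged sketch of triggering and checking a pipeline run through the Azure SDK for Python follows; the subscription, resource group, factory, pipeline, and parameter names are all placeholders.

```python
# Sketch: start an ADF pipeline run and read back its status using
# azure-identity and azure-mgmt-datafactory. All names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

client = DataFactoryManagementClient(
    DefaultAzureCredential(), "<subscription-id>"
)

run = client.pipelines.create_run(
    resource_group_name="rg-data",
    factory_name="adf-prod",
    pipeline_name="pl_ingest_sales",          # hypothetical pipeline
    parameters={"load_date": "2025-08-01"},   # hypothetical parameter
)

status = client.pipeline_runs.get("rg-data", "adf-prod", run.run_id).status
print(run.run_id, status)
```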
Posted 1 week ago
4.0 years
0 Lacs
Pune, Maharashtra, India
On-site
About Position:
We are conducting an in-person hiring drive for the position of Data Engineer (Azure Databricks) in Pune & Bengaluru on 2nd August 2025. The interview locations are mentioned below:
Pune: Persistent Systems, 9a, Aryabhata-Pingala, 12, Kashibai Khilare Path, Marg, Erandwane, Pune, Maharashtra 411004.
Bangalore: Persistent Systems, The Cube at Karle Town Center Rd, Dada Mastan Layout, Manayata Tech Park, Nagavara, Bengaluru, Karnataka 560024.
We are looking for an experienced Azure Data Engineer to join our growing team. The ideal candidate will have a strong background in working with Azure Databricks, DBT, Python/PySpark, and SQL. You will work closely with our engineers and business teams to ensure optimal performance, scalability, and availability of our data pipelines.
Role: Data Engineer (Azure Databricks)
Job Location: Pune & Bengaluru
Experience: 4+ Years
Job Type: Full Time Employment
What You'll Do:
Design and implement complex, scalable data pipelines for ingestion, processing, and transformation using Azure technologies.
Collaborate with Architects, Data Analysts, and Business Analysts to understand data requirements and develop efficient workflows.
Develop and manage data storage solutions including Azure SQL Database, Data Lake, and Blob Storage.
Leverage Azure Data Factory and other cloud-native tools to build and maintain ETL processes.
Conduct unit testing and ensure the quality of data pipelines; mentor junior engineers and review their deliverables.
Monitor pipeline performance, troubleshoot issues, and provide regular status updates.
Optimize data workflows for performance and cost-efficiency; implement automation to reduce manual effort.
Expertise You'll Bring:
Strong experience with Azure and Databricks
Experience with Python/PySpark
Experience with SQL databases
Good to have: experience in DBT and Dremio
Benefits:
Competitive salary and benefits package
Culture focused on talent development with quarterly promotion cycles and company-sponsored higher education and certifications
Opportunity to work with cutting-edge technologies
Employee engagement initiatives such as project parties, flexible work hours, and Long Service awards
Annual health check-ups
Insurance coverage: group term life, personal accident, and Mediclaim hospitalization for self, spouse, two children, and parents
Inclusive Environment:
Persistent Ltd. is dedicated to fostering diversity and inclusion in the workplace. We invite applications from all qualified individuals, including those with disabilities, and regardless of gender or gender preference. We welcome diverse candidates from all backgrounds.
We offer hybrid work options and flexible working hours to accommodate various needs and preferences. Our office is equipped with accessible facilities, including adjustable workstations, ergonomic chairs, and assistive technologies to support employees with physical disabilities.
If you are a person with disabilities and have specific requirements, please inform us during the application process or at any time during your employment. We are committed to creating an inclusive environment where all employees can thrive.
Our company fosters a values-driven and people-centric work environment that enables our employees to:
Accelerate growth, both professionally and personally
Impact the world in powerful, positive ways, using the latest technologies
Enjoy collaborative innovation, with diversity and work-life wellbeing at the core
Unlock global opportunities to work and learn with the industry’s best
Let’s unleash your full potential at Persistent.
“Persistent is an Equal Opportunity Employer and prohibits discrimination and harassment of any kind.”
Posted 1 week ago
2.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Company Description
WNS (Holdings) Limited (NYSE: WNS) is a leading Business Process Management (BPM) company. We combine our deep industry knowledge with technology and analytics expertise to co-create innovative, digital-led transformational solutions with clients across 10 industries. We enable businesses in Travel, Insurance, Banking and Financial Services, Manufacturing, Retail and Consumer Packaged Goods, Shipping and Logistics, Healthcare, and Utilities to re-imagine their digital future and transform their outcomes with operational excellence. We deliver an entire spectrum of BPM services in finance and accounting, procurement, customer interaction services and human resources, leveraging collaborative models that are tailored to address the unique business challenges of each client. We co-create and execute the future vision of 400+ clients with the help of our 44,000+ employees.
Job Description
Key responsibilities/skill sets will include but are not limited to:
2+ years of directly relevant experience
Python, SQL, ETL tools, machine learning, dashboards (Power BI, Tableau), market research, credit analysis
High competence in stakeholder management, time management, and project management skills
Strong written and verbal communication skills
Excellent analytical and problem-solving skills
Qualifications
Graduate
Additional Information
Night Shifts/Rotational Shifts
Posted 1 week ago
2.0 - 5.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
At EY, we’re all in to shape your future with confidence. We’ll help you succeed in a globally connected powerhouse of diverse teams and take your career wherever you want it to go. Join EY and help to build a better working world.
Job Title: Azure Data Engineer
Experience: 2-5 Years
About the Company: EY is a leading global professional services firm offering a broad range of services in assurance, tax, transaction, and advisory services. We’re looking for candidates with a strong technology and data understanding in the big data engineering space and proven delivery capability.
Your Key Responsibilities
Develop and deploy Azure Databricks in a cloud environment using Azure Cloud services
ETL design, development, and deployment to cloud services
Interact with onshore teams, understand their business goals, and contribute to the delivery of the workstreams
Design and optimize model code for faster execution
Skills and Attributes for Success
3 to 5 years of experience in developing data ingestion, data processing, and analytical pipelines for big data, relational databases, NoSQL, and data warehouse solutions
Extensive hands-on experience implementing data migration and data processing using Azure services: Databricks, ADLS, Azure Data Factory, Azure Functions, Synapse/DW, Azure SQL DB, Azure Data Catalog, Cosmos DB, etc.
Hands-on experience with Spark
Hands-on programming experience in Python/Scala
Well versed in DevOps and CI/CD deployments
Must have hands-on experience in SQL and procedural SQL languages
Strong analytical skills and enjoys solving complex technical problems
To qualify for the role, you must have
Working experience in an Agile-based delivery methodology (preferable)
Flexible and proactive/self-motivated working style with strong personal ownership of problem resolution
Strong analytical skills and enjoys solving complex technical problems
Excellent debugging and optimization skills
Experience in enterprise-grade solution implementations and in converting business problems/challenges into technical solutions considering security, performance, scalability, etc.
Excellent communicator (written and verbal, formal and informal)
Participation in all aspects of the solution delivery life cycle, including analysis, design, development, testing, production deployment, and support
Client management skills
Education: BS/MS degree in Computer Science, Engineering, or a related subject is required.
EY is committed to providing equal opportunities to all candidates. We welcome and encourage applications from candidates with diverse experiences and backgrounds.
EY | Building a better working world
EY is building a better working world by creating new value for clients, people, society and the planet, while building trust in capital markets. Enabled by data, AI and advanced technology, EY teams help clients shape the future with confidence and develop answers for the most pressing issues of today and tomorrow. EY teams work across a full spectrum of services in assurance, consulting, tax, strategy and transactions. Fueled by sector insights, a globally connected, multi-disciplinary network and diverse ecosystem partners, EY teams can provide services in more than 150 countries and territories.
Posted 1 week ago
12.0 years
0 Lacs
India
On-site
We are seeking a highly skilled and experienced AWS Architect with a strong background in Data Engineering and expertise in Generative AI. In this pivotal role, you will be responsible for designing, building, and optimizing scalable, secure, and cost-effective data solutions that leverage the power of AWS services, with a particular focus on integrating and managing Generative AI capabilities. The ideal candidate will possess a deep understanding of data architecture principles, big data technologies, and the latest advancements in Generative AI, including Large Language Models (LLMs) and Retrieval Augmented Generation (RAG). You will work closely with data scientists, machine learning engineers, and business stakeholders to translate complex requirements into robust and innovative solutions on the AWS platform.
Responsibilities:
• Architect and Design: Lead the design and architecture of end-to-end data platforms and pipelines on AWS, incorporating best practices for scalability, reliability, security, and cost optimization.
• Generative AI Integration: Architect and implement Generative AI solutions using AWS services like Amazon Bedrock, Amazon SageMaker, Amazon Q, and other relevant technologies. This includes designing RAG architectures, prompt engineering strategies, and fine-tuning models with proprietary data (knowledge base).
• Data Engineering Expertise: Design, build, and optimize ETL/ELT processes for large-scale data ingestion, transformation, and storage using AWS services such as AWS Glue, Amazon S3, Amazon Redshift, Amazon Athena, Amazon EKS, and Amazon EMR.
• Data Analytics: Design, build, and optimize analytical solutions for large-scale data ingestion, analytics, and insights using AWS services such as Amazon QuickSight.
• Data Governance and Security: Implement robust data governance, data quality, and security measures, ensuring compliance with relevant regulations and industry best practices for both traditional data and Generative AI applications.
• Performance Optimization: Identify and resolve performance bottlenecks in data pipelines and Generative AI workloads, ensuring efficient resource utilization and optimal response times.
• Technical Leadership: Act as a subject matter expert and provide technical guidance to data engineers, data scientists, and other team members. Mentor and educate on AWS data and Generative AI best practices.
• Collaboration: Work closely with cross-functional teams, including product owners, data scientists, and business analysts, to understand requirements and deliver impactful solutions.
• Innovation and Research: Stay up to date with the latest AWS services, data engineering trends, and advancements in Generative AI, evaluating and recommending new technologies to enhance our capabilities.
• Documentation: Create comprehensive technical documentation, including architectural diagrams, design specifications, and operational procedures.
• Cost Management: Monitor and optimize AWS infrastructure costs related to data and Generative AI workloads.
Required Skills and Qualifications:
• 12+ years of experience in data engineering, data warehousing, or big data architecture.
• 5+ years of experience in an AWS Architect role, specifically with a focus on data.
• Proven experience designing and implementing scalable data solutions on AWS.
• Strong hands-on experience with core AWS data services, including:
o Data Storage: Amazon S3, Amazon Redshift, Amazon DynamoDB, Amazon RDS
o Data Processing: AWS Glue, Amazon EMR, Amazon EKS, AWS Lambda, Informatica
o Data Analytics: Amazon QuickSight, Amazon Athena, Tableau
o Data Streaming: Amazon Kinesis, AWS MSK
o Data Lake: AWS Lake Formation
• Strong competencies in Generative AI, including:
o Experience with Large Language Models (LLMs) and Foundation Models (FMs).
o Hands-on experience with Amazon Bedrock (including model customization, agents, and orchestrations).
o Understanding of and experience with Retrieval Augmented Generation (RAG) architectures and vector databases (e.g., Amazon OpenSearch Service for vector indexing).
o Experience with prompt engineering and optimizing model responses.
o Familiarity with Amazon SageMaker for building, training, and deploying custom ML/Generative AI models.
o Knowledge of Amazon Q for business-specific Generative AI applications.
• Proficiency in programming languages such as Python (essential), SQL, and potentially Scala or Java.
• Experience with MLOps/GenAIOps principles and tools for deploying and managing Generative AI models in production.
• Solid understanding of data modeling, data warehousing concepts, and data lake architectures.
• Experience with CI/CD pipelines and DevOps practices on AWS.
• Excellent communication, interpersonal, and presentation skills, with the ability to articulate complex technical concepts to both technical and non-technical audiences.
• Strong problem-solving and analytical abilities.
Preferred Qualifications:
• AWS Certified Solutions Architect – Professional or AWS Certified Data Engineer – Associate/Specialty.
• Experience with other Generative AI frameworks (e.g., LangChain) or open-source LLMs.
• Familiarity with containerization technologies like Docker and Kubernetes (Amazon EKS).
• Experience with data transformation tools like Informatica and Matillion.
• Experience with data visualization tools (e.g., Amazon QuickSight, Tableau, Power BI).
• Knowledge of data governance tools like Amazon DataZone.
• Experience in a highly regulated industry (e.g., Financial Services, Healthcare).
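To ground the Bedrock/RAG items above, here is a minimal sketch of the generation step using the Bedrock runtime's Converse API via boto3; the model ID is one example, and the "retrieved" context is hard-coded where a real system would query a vector store such as OpenSearch.

```python
# Sketch of the generation half of a RAG flow on Amazon Bedrock. The
# context string stands in for chunks retrieved from a vector store.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

context = "Policy X covers water damage up to $10,000."   # assumed retrieval result
question = "What is the water damage limit under Policy X?"

response = bedrock.converse(
    modelId="anthropic.claude-3-haiku-20240307-v1:0",      # example model ID
    messages=[{
        "role": "user",
        "content": [{
            "text": f"Answer using only this context:\n{context}\n\nQ: {question}"
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```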
Posted 1 week ago
4.0 years
0 Lacs
India
Remote
Greetings!!!
Role: Senior Data Modelling Engineer with GCP, SQL, Cognos, ETL
Experience: 4+ years
Location: Remote
Duration: 4-month contract
Required Skills & Experience:
● Extensive experience with SQL, including writing complex queries and optimizing database performance (must have)
● Demonstrated expertise in data modeling techniques, including dimensional modeling, 3NF structures, and denormalized views (must have)
● Hands-on experience with Google Cloud Platform (GCP) services, particularly those related to data storage, processing, and analytics (must have)
● Experience with BigQuery, Cloud Dataflow, Dataplex, Dataform, and Cloud Pub/Sub (must have)
● Basic knowledge of Cognos and DataStage
● DBT knowledge would be an add-on
● Strong background in building and maintaining data warehouses and data lakes
● Experience with ETL/ELT processes and big data technologies
If you are interested, please share your resume with prachi@iitjobs.com
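As a small illustration of the BigQuery work listed above, a sketch using the official client library follows; the project, dataset, and table names are assumptions.

```python
# Sketch: run an analytical SQL query through google-cloud-bigquery.
# Project, dataset, and table are placeholders.
from google.cloud import bigquery

client = bigquery.Client(project="my-project")

query = """
    SELECT region, COUNT(*) AS orders
    FROM `my-project.sales.orders`
    GROUP BY region
    ORDER BY orders DESC
"""
for row in client.query(query).result():
    print(row.region, row.orders)
```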
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
The Role
We are hiring an AI/ML Developer (India) to join our India team in support of a large global client! You will be responsible for developing, deploying, and maintaining AI and machine learning models. Your expertise in Python, cloud services, databases, and big data technologies will be instrumental in creating scalable and efficient AI applications.
What You Will Be Doing
• Develop, train, and deploy machine learning models for predictive analytics, classification, and clustering.
• Implement AI-based solutions using frameworks such as TensorFlow, PyTorch, and Scikit-learn.
• Work with cloud platforms including AWS (SageMaker, Lambda, S3), Azure, and Google Cloud (Vertex AI).
• Integrate and fine-tune Hugging Face transformer models (e.g., BERT, GPT) for NLP tasks such as text classification, summarization, and sentiment analysis.
• Develop AI automation solutions, including chatbot implementations using Microsoft Teams and Azure AI.
• Work with big data technologies such as Apache Spark and Snowflake for large-scale data processing and analytics.
• Design and optimize ETL pipelines for data quality management, transformation, and validation.
• Utilize SQL, MySQL, PostgreSQL, and MongoDB for database management and query optimization.
• Create interactive data visualizations using Tableau and Power BI to drive business insights.
• Work with Large Language Models (LLMs) for AI-driven applications, including fine-tuning, training, and deploying models for conversational AI, text generation, and summarization.
• Develop and implement Agentic AI systems, enabling autonomous decision-making AI agents that can adapt, learn, and optimize tasks in real time.
What You Bring Along
• 2+ years of experience applying AI to practical uses.
• Strong programming skills in Python and SQL, and experience with ML frameworks such as TensorFlow, PyTorch, or Scikit-learn.
• Knowledge of basic algorithms and object-oriented and functional design principles.
• Proficiency in using data analytics libraries like Pandas, NumPy, Matplotlib, and Seaborn.
• Hands-on experience with cloud platforms such as AWS, Azure, and Google Cloud.
• Experience with big data processing using Apache Spark and Snowflake.
• Knowledge of NLP and AI model implementations using Hugging Face and cloud-based AI services.
• Strong understanding of database management, query optimization, and data warehousing.
• Experience with data visualization tools such as Tableau and Power BI.
• Ability to work in a collaborative environment and adapt to new AI technologies.
• Strong analytical and problem-solving skills.
Education:
• Bachelor’s degree in Computer Science, Data Science, AI/ML, or a related field.
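To illustrate the Hugging Face requirement above, a minimal sentiment-analysis sketch with the transformers pipeline API follows; the default checkpoint is downloaded on first run, and the input sentences are invented.

```python
# Sketch: apply a pretrained transformer to sentiment analysis using the
# high-level pipeline API from Hugging Face transformers.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # default DistilBERT-based checkpoint

results = classifier([
    "The deployment went smoothly and latency dropped.",
    "The ETL job failed again overnight.",
])
for r in results:
    print(r["label"], round(r["score"], 3))
```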
Posted 1 week ago
5.0 years
0 Lacs
Andhra Pradesh, India
On-site
A career in our Advisory Service Delivery Centre is the natural extension of PwC’s leading-class global delivery capabilities. We provide premium, cost-effective, high-quality services that support process quality and delivery capability for client engagements.
Responsibilities
As a Senior Associate, you’ll work as part of a team of problem solvers with extensive consulting and industry experience, helping our clients solve their complex business issues from strategy to execution. Specific responsibilities include but are not limited to:
Proactively assist in the management of several clients, while reporting to Managers and above
Train and lead staff
Establish effective working relationships directly with clients
Contribute to the development of your own and the team’s technical acumen
Keep up to date with local and national business and economic issues
Be actively involved in business development activities to help identify and research opportunities on new/existing clients
Continue to develop internal relationships and your PwC brand
Job Description: SAP ABAP with either BODS/HANA/PI/UI5-Fiori
Roles/Responsibilities
Understand client requirements, provide solutions and functional specifications, and implement technical components accordingly.
Ability to create Technical Design Documents (TDD) and unit test documents for the technical solutions being implemented.
Excellent communication, analytical, and interpersonal skills as a consultant, playing a key role in implementations from blueprint to go-live.
In addition to the above, the candidate should have been involved in the following during the life cycle of an SAP implementation:
Unit testing and integration testing
User support activities
Exposure to ASAP and other structured implementation methodologies
Regularly interact with the onsite team/client
Provide status updates in daily/weekly conference calls
Maintain a cordial relationship with the onsite team/client
Required Experience
5 to 9 years of hands-on experience in ABAP development
2 years in OData development using SAP Gateway
Strong knowledge in Forms (SAP Scripts / Smart Forms / Adobe Forms), Reports (ALV / Classical), Interfaces (ALE/IDoc, BAPI), Conversions (LSMW/BDC), Enhancements (User Exits, BADI, Enhancement Spots), Object-Oriented ABAP, Workflows (development, configuration), and OData (SAP OData framework, Eclipse IDE and SAP Web IDE, OData service creation and implementation)
Good experience in building OData services using NetWeaver Gateway and ABAP
Should have done at least 2 SAP implementation/rollout projects
Familiarity with the basic business processes in any of the following functional areas:
SAP Financials (FI/CO/PS)
SAP Logistics (SD/MM/PP/PM)
SAP HR
Should have at least 1 year of working experience in one of the following skills: SAP BODS, SAP HANA, SAP PI/PO, SAP UI5/Fiori
Details of the above combination skills
BODS
Strong hands-on SAP BODS resource with 4+ years of experience
ETL design and implementation involving extraction and provisioning of data from a variety of legacy systems
Should be well versed in design, development, and implementation with SAP and non-SAP data sources
End-to-end implementation experience with at least two full life cycle implementations is a must
At least one SAP BODS 4.1 project implementation experience
Experience in data migration projects between various application databases
Expertise in handling data provisioning and error handling from various sources, including Protean, SAP ECC, MS Dynamics, and Platinum systems
Strong SQL/PL SQL programming skills
Performance tuning and optimization experience
Experience with the admin console, designer, and server manager tools
PI/PO
Strong hands-on experience in PI/PO/HCI development
Should have at least 4 years of hands-on experience using PI and PO to design and build A2A and B2B integrations
Should be proficient in developing ESR and IR objects and graphical and Java mappings, and proficient in XML technologies
UI5/Fiori
Strong SAP UI5 developer with real-time working experience of 3+ years, having worked on a minimum of 3 end-to-end SAP UI5 implementations
SAP UI5 development experience in developing/enhancing SAPUI5 and SAP Fiori apps
Understanding of web development frameworks, including HTML5, CSS, JavaScript, and jQuery
Experience in developing SAPUI5 solutions using Eclipse and SAP WebIDE
Nice to Have
Good experience in SAP UI5/Fiori app development, implementation, and configuration
Good experience in SAP HANA CDS views
Good experience in using the SAP BOPF framework
Education: B.Tech, M.Tech, MBA, M.Com, B.E., B.A., B.Com
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Key Responsibilities:
Design, develop, and maintain high-performance ETL and real-time data pipelines using Apache Kafka and Apache Flink.
Build scalable and automated MLOps pipelines for model training, validation, and deployment using AWS SageMaker and related services.
Implement and manage Infrastructure as Code (IaC) using Terraform for AWS provisioning and maintenance.
Collaborate with ML, Data Science, and DevOps teams to ensure reliable and efficient model deployment workflows.
Optimize data storage and retrieval strategies for both structured and unstructured large-scale datasets.
Integrate and transform data from multiple sources into data lakes and data warehouses.
Monitor, troubleshoot, and improve the performance of cloud-native data systems in a fast-paced production setup.
Ensure compliance with data governance, privacy, and security standards across all data operations.
Document data engineering workflows and architectural decisions for transparency and maintainability.
Requirements
5+ years of experience as a Data Engineer or in a similar role
Proven experience in building data pipelines and streaming applications using Apache Kafka and Apache Flink.
Strong ETL development skills, with a deep understanding of data modeling and data architecture in large-scale environments.
Hands-on experience with AWS services, including SageMaker, S3, Glue, Lambda, and CloudFormation or Terraform.
Proficiency in Python and SQL; knowledge of Java is a plus, especially for streaming use cases.
Strong grasp of MLOps best practices, including model versioning, monitoring, and CI/CD for ML pipelines.
Deep knowledge of IaC tools, particularly Terraform, for automating cloud infrastructure.
Excellent analytical and problem-solving abilities, especially with regard to data processing and deployment issues.
Agile mindset with experience working in fast-paced, iterative development environments.
Strong communication and team collaboration skills.
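For a concrete view of the Kafka side of such pipelines, here is a hedged producer sketch using confluent-kafka; the broker address, topic, and event shape are assumptions, and Flink or a consumer group would sit downstream.

```python
# Sketch: publish a JSON event to a Kafka topic with confluent-kafka.
# Broker, topic, and payload are placeholders.
import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})

def on_delivery(err, msg):
    # Called once per message after the broker acknowledges (or rejects) it.
    if err:
        print(f"Delivery failed: {err}")
    else:
        print(f"Delivered to {msg.topic()}[{msg.partition()}]")

event = {"order_id": 42, "amount": 99.5}  # hypothetical event
producer.produce("orders", value=json.dumps(event).encode(), callback=on_delivery)
producer.flush()  # block until outstanding messages are delivered
```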
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Company: Qualcomm India Private Limited
Job Area: Engineering Group, Engineering Group > Software Engineering
General Summary
As a leading technology innovator, Qualcomm pushes the boundaries of what's possible to enable next-generation experiences and drives digital transformation to help create a smarter, connected future for all. As a Qualcomm Software Engineer, you will design, develop, create, modify, and validate embedded and cloud edge software, applications, and/or specialized utility programs that launch cutting-edge, world-class products that meet and exceed customer needs. Qualcomm Software Engineers collaborate with systems, hardware, architecture, test engineers, and other teams to design system-level software solutions and obtain information on performance requirements and interfaces.
Minimum Qualifications
Bachelor's degree in Engineering, Information Systems, Computer Science, or a related field.
Senior Engineer:
Job Title: Senior Machine Learning & Data Engineer
We are looking for a highly skilled and experienced Machine Learning & Data Engineer to join our team. This hybrid role blends the responsibilities of a data engineer and a machine learning engineer, with a strong emphasis on Python development. You will be instrumental in designing scalable data pipelines, building and deploying ML/NLP models, and enabling data-driven decision-making across the organization.
Key Responsibilities
Data Engineering & Infrastructure
Design and implement robust ETL pipelines and data integration workflows using SQL, NoSQL, and big data technologies (e.g., Spark, Hadoop).
Optimize data storage and retrieval using relational and non-relational databases (e.g., PostgreSQL, MongoDB, Cassandra).
Ensure data quality, validation, and governance across systems.
Develop and maintain data models and documentation for data flows and architecture.
Machine Learning & NLP
Build, fine-tune, and deploy ML/NLP models using frameworks like TensorFlow, PyTorch, and Scikit-learn.
Apply advanced NLP techniques including Transformers, BERT, and LLM fine-tuning.
Implement Retrieval-Augmented Generation (RAG) pipelines using LangChain, LlamaIndex, and vector databases (e.g., FAISS, Milvus); see the FAISS sketch at the end of this posting.
Operationalize ML models using APIs, model registries (e.g., Hugging Face), and cloud services (e.g., SageMaker, Azure ML).
Python Development
Develop scalable backend services using Python frameworks such as FastAPI, Flask, or Django.
Automate data workflows and model training pipelines using Python libraries (e.g., Pandas, NumPy, SQLAlchemy).
Collaborate with cross-functional teams to integrate ML solutions into production systems.
Collaboration & Communication
Work closely with data scientists, analysts, and software engineers in Agile/Scrum teams.
Translate business requirements into technical solutions.
Maintain clean, well-documented code and contribute to knowledge sharing.
Required Qualifications
Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Proven experience in both data engineering and machine learning roles.
Strong Python programming skills and experience with modern Python libraries and frameworks.
Deep understanding of ML/NLP concepts and practical experience with LLMs and RAG architectures.
Proficiency in SQL and experience with both SQL and NoSQL databases.
Experience with big data tools (e.g., Spark, PySpark) and cloud platforms (AWS, Azure).
Familiarity with data visualization tools like Power BI or Tableau.
Excellent problem-solving, communication, and collaboration skills.
Engineer:
Job Title: Automation Engineer
Job Description
We are seeking a skilled and experienced Automation Engineer to join our team. As a C#/Python developer, you will play a pivotal role in developing and deploying advanced solutions to drive our product test automation. You will collaborate closely with testers, product managers, and stakeholders to ensure the successful implementation and operation of automation solutions. The ideal candidate will have a strong background in API development with C# and Python, with experience in deploying scalable solutions.
Responsibilities
Design, develop, and maintain core APIs, mainly using C#.
Collaborate with cross-functional teams to understand requirements and implement API solutions.
Create and execute unit tests for APIs to ensure software quality.
Identify, analyze, and troubleshoot issues in API development and testing.
Continuously improve and optimize API development processes.
Document API specifications, procedures, and results.
Stay updated on the latest industry trends and technologies in API development.
Requirements
Bachelor's degree in Computer Science, Engineering, or a related field.
Proven experience in developing APIs and scripts/apps using C# and Python.
Knowledge of Python is a plus.
Experience using Visual Studio for development.
Experience in the wireless domain is a plus.
Strong understanding of software testing principles and methodologies.
Proficiency in the C# programming language.
Experience with test automation tools and best practices.
Familiarity with CI/CD pipelines and version control systems (e.g., Perforce).
Excellent problem-solving skills and attention to detail.
Strong communication and teamwork skills.
Applicants: Qualcomm is an equal opportunity employer. If you are an individual with a disability and need an accommodation during the application/hiring process, rest assured that Qualcomm is committed to providing an accessible process. You may e-mail disability-accomodations@qualcomm.com or call Qualcomm's toll-free number found here. Upon request, Qualcomm will provide reasonable accommodations to support individuals with disabilities to be able to participate in the hiring process. Qualcomm is also committed to making our workplace accessible for individuals with disabilities. (Keep in mind that this email address is used to provide reasonable accommodations for individuals with disabilities. We will not respond here to requests for updates on applications or resume inquiries.)
Qualcomm expects its employees to abide by all applicable policies and procedures, including but not limited to security and other requirements regarding protection of Company confidential information and other confidential and/or proprietary information, to the extent those requirements are permissible under applicable law.
To all Staffing and Recruiting Agencies: Our Careers Site is only for individuals seeking a job at Qualcomm. Staffing and recruiting agencies and individuals being represented by an agency are not authorized to use this site or to submit profiles, applications or resumes, and any such submissions will be considered unsolicited. Qualcomm does not accept unsolicited resumes or applications from agencies. Please do not forward resumes to our jobs alias, Qualcomm employees or any other company location. Qualcomm is not responsible for any fees related to unsolicited resumes/applications.
If you would like more information about this role, please contact Qualcomm Careers.
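As referenced in the RAG bullet above, here is an illustrative sketch of the vector-search step using FAISS; random vectors stand in for real sentence embeddings, and the dimension is a typical assumption.

```python
# Sketch: index document embeddings and retrieve nearest neighbours for a
# query, i.e. the retrieval step of a RAG pipeline. Vectors here are random
# placeholders for real embeddings.
import faiss
import numpy as np

dim = 384                                   # e.g. a sentence-transformer dimension
doc_vectors = np.random.rand(1000, dim).astype("float32")

index = faiss.IndexFlatL2(dim)              # exact L2 search
index.add(doc_vectors)

query = np.random.rand(1, dim).astype("float32")
distances, ids = index.search(query, 5)     # top-5 nearest documents
print(ids[0], distances[0])
```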
Posted 1 week ago
0 years
0 Lacs
Mumbai Metropolitan Region
On-site
Roles and Responsibilities:
Work as part of the RDS ETL dev team in the implementation of RDS Risk-specific deliverables, loading data from specific source systems and implementing transformation logic.
Design and development of the various modules and models involved in Market Risk projects.
Implement small enhancements aligned with upstream changes; liaise with teams within Risk Technology to load data and create consistent environments.
Work with business analysts to eliminate differences in environments; use some current applications (Java-based) to do ETL along with SSIS.
Ownership of project work during the different phases, from initiation, development, and unit testing to QA, UAT, staging, and production.
Enhance existing ETL tools to make them compliant with Windows 2022 and the Nomura private cloud.
Regional L3 coverage, providing regional technical SME input to the production services / L2 team.
Mindset:
Working experience and strong technical knowledge of MS SQL Server Integration Services (SSIS)/ETL development and MS SQL Server.
Working knowledge of data warehousing, slowly changing dimension concepts and dimensional modelling, Autosys job scheduling, Unix shell scripting, and C#.
Able to troubleshoot problems in multiple environments in a stack with diverse technology.
Strong technical and analytical skills.
Knowledge of and working experience with DevOps tools such as Jenkins, Git, Ansible, and SonarQube.
Excellent communication skills, ability to multi-task, and ability to work towards tight deadlines.
Self-starter who can learn quickly and perform under flexible working hours.
Skillset:
Expertise in MS SQL, MS SQL Server Integration Services, and C# scripting applicable to SQL Server Integration Services (core skills).
Strong data investigation, problem-solving, and debugging skills.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Greetings!
One of our clients, a TOP MNC giant, is looking for a Data Scientist.
Important Notes: Only candidates who can join immediately or within 7 days should apply.
Base Locations: Gurgaon and Bengaluru (hybrid setup, 3 days work from office).
Role: Data Scientist
Exp: 4 to 8 Years
Immediate Joiners Only
Skills (must have)
Bachelor’s or master’s degree in computer science, data science, engineering, or a related field.
Strong programming skills in languages such as Python, SQL, etc.
Experience in developing and deploying AI/ML and deep learning solutions with libraries and frameworks such as Scikit-learn, TensorFlow, PyTorch, etc.
Experience in ETL and data warehouse tools such as Azure Data Factory, Azure Data Lake, or Databricks, etc.
Knowledge of math, probability, and statistics.
Familiarity with a variety of ML algorithms.
Good experience in cloud infrastructure such as Azure (preferred), AWS/GCP.
Exposure to Gen AI, vector DBs, LLMs (Large Language Models).
Skills (good to have)
Experience in Flask/Django; Streamlit is a bonus.
Experience with MLOps: MLflow, Kubeflow, CI/CD pipelines, etc.
Good to have experience in Docker, Kubernetes, etc.
Responsibilities
Collaborate with software engineers, business stakeholders, and/or domain experts to translate business requirements into product features, tools, projects, and AI/ML, NLP/NLU, and deep learning solutions.
Develop, implement, and deploy AI/ML solutions.
Preprocess and analyze large datasets to identify patterns, trends, and insights.
Evaluate, validate, and optimize AI/ML models to ensure their accuracy, efficiency, and generalizability.
Deploy applications and AI/ML models into cloud environments such as AWS/Azure/GCP.
Monitor and maintain the performance of AI/ML models in production environments, identifying opportunities for improvement and updating models as needed.
Document AI/ML model development processes, results, and lessons learned to facilitate knowledge sharing and continuous improvement.
INTERESTED CANDIDATES WHO ARE A PERFECT MATCH TO THE JD AND CAN JOIN ASAP, PLEASE APPLY ALONG WITH THE DETAILS BELOW:
Total exp:
Relevant exp as Data Scientist:
Applying for Gurgaon or Bengaluru:
Open for hybrid:
Current CTC:
Expected CTC:
Can join ASAP:
We will call you once we receive your updated profile along with the details above.
Thanks,
Venkat Solti
solti.v@anlage.co.in
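For illustration of the core modelling loop such a role expects, a small scikit-learn sketch on a bundled dataset follows; the dataset and model choice are examples, not client specifics.

```python
# Sketch: train and evaluate a scikit-learn pipeline on a built-in dataset.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
model.fit(X_train, y_train)

print("accuracy:", round(accuracy_score(y_test, model.predict(X_test)), 3))
```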
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
Some careers shine brighter than others. If you’re looking for a career that will help you stand out, join HSBC and fulfil your potential. Whether you want a career that could take you to the top, or simply take you in an exciting new direction, HSBC offers opportunities, support and rewards that will take you further.
HSBC is one of the largest banking and financial services organisations in the world, with operations in 64 countries and territories. We aim to be where the growth is, enabling businesses to thrive and economies to prosper, and, ultimately, helping people to fulfil their hopes and realise their ambitions.
We are currently seeking an experienced professional to join our team in the role of Software Engineer.
In this role, you will:
The ETL Developer is responsible for performing system development work around ETL, which can include both the development of new functional requirements and supporting the live systems. There are a number of job functions within the role, and the job holder may specialize in a single function or, alternatively, a combination of functions. The role involves taking instructions, usually in written or diagrammatic form, and translating them into code.
Prepare all ETL processes according to business requirements.
Be responsible for creating functional and/or technical design solutions and ensuring those requirements are documented.
Diagnose ETL and database-related issues, perform root cause analysis, and recommend corrective actions to management.
Perform tests and validate the data flow.
Manage current and future needs of the data design and content.
Work with a project team to support the design, development, implementation, monitoring, and maintenance of new ETL programs.
Have good knowledge of cloud technologies and supporting batch processes.
Effectively contribute to projects that require programming skills.
Requirements
To be successful in this role, you should meet the following requirements:
Bachelor’s degree or international equivalent
Excellent written and verbal communication skills; presentation skills preferred
Focus on detail and consistency
Strong prioritization and time management skills
Experience in the financial domain (banking)
Good knowledge of and experience with ETL, DataStage, DB2, Teradata, Oracle, and Unix
Self-motivated, focused, detail-oriented, and able to work efficiently to deadlines
Ability to work with a degree of autonomy while also being able to work in a collaborative team environment
High degree of personal integrity
Experienced in the development of UNIX shell scripts for enhancing ETL job performance
Hands-on experience applying modeling techniques to actual business solutions
Experience in designing, developing, and implementing Business Intelligence tools
Understanding of and experience with Unix/Linux systems, file systems, and shell scripting
Strong knowledge of scheduling tools such as Control-M
You’ll achieve more when you join HSBC. www.hsbc.com/careers
HSBC is committed to building a culture where all employees are valued and respected and opinions count. We take pride in providing a workplace that fosters continuous professional development, flexible working and opportunities to grow within an inclusive and diverse environment. Personal data held by the Bank relating to employment applications will be used in accordance with our Privacy Statement, which is available on our website.
Issued by – HSBC Software Development India
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Immediate opportunity for the role of ServiceNow CMDB Developer
Key Responsibilities:
Design, implement, and maintain the end-to-end ServiceNow CMDB to ensure data accuracy, completeness, and compliance.
Define and enforce CI class structures, relationships, and data models.
Configure and optimize ServiceNow Discovery and Service Mapping to ensure accurate CI population.
Troubleshoot discovery issues and improve coverage and accuracy.
Integrate the CMDB with external data sources using APIs, MID Servers, and ETL tools; see the sketch below for a minimal API example.
Monitor and improve CMDB health using dashboards and reports.
Collaborate with stakeholders to define CMDB governance policies and procedures.
Ensure alignment with ITIL and Configuration Management best practices.
Work closely with Incident, Problem, Change, and Asset Management teams.
Provide technical leadership and mentoring to junior team members.
Develop and maintain CMDB reports, dashboards, and documentation.
Present CMDB health and compliance metrics to leadership.
Required Skills & Qualifications:
5-10 years of hands-on experience with ServiceNow, specifically CMDB, Discovery, and Service Mapping.
Strong understanding of IT infrastructure (servers, networks, applications, cloud).
Proficiency in scripting (JavaScript, Glide), data modeling, and integrations.
Experience with MID Servers, APIs, and integration tools.
Familiarity with the ITIL v3/v4 framework and Configuration Management processes.
Excellent problem-solving, communication, and stakeholder management skills.
ServiceNow Certified System Administrator (CSA) and CMDB/Discovery certifications preferred.
Interested candidates can drop their CV at prathvi@scarletwireless.com
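The API-integration sketch referenced above: pulling server CIs from a CMDB through ServiceNow's standard Table API. The instance name, credentials, and field selection are placeholders.

```python
# Sketch: list a few server CIs via the ServiceNow Table API.
# Instance URL and credentials are placeholders.
import requests

INSTANCE = "https://yourinstance.service-now.com"  # placeholder instance
AUTH = ("api_user", "api_password")                # placeholder credentials

resp = requests.get(
    f"{INSTANCE}/api/now/table/cmdb_ci_server",
    params={"sysparm_limit": "5", "sysparm_fields": "name,ip_address,os"},
    auth=AUTH,
    headers={"Accept": "application/json"},
    timeout=30,
)
resp.raise_for_status()
for ci in resp.json()["result"]:
    print(ci["name"], ci.get("ip_address"), ci.get("os"))
```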
Posted 1 week ago
5.0 - 12.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
Data Software Engineer
Chennai & Coimbatore | Walk-in on 2 Aug 25 | Hybrid Role
5-12 years of experience in Big Data and related data technologies
Expert-level understanding of distributed computing principles
Expert-level knowledge of and experience in Apache Spark
Hands-on programming with Python
Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
Good understanding of Big Data querying tools such as Hive and Impala
Experience with integration of data from multiple data sources such as RDBMS (SQL Server, Oracle), ERP, and files
Good understanding of SQL queries, joins, stored procedures, and relational schemas
Experience with NoSQL databases such as HBase, Cassandra, and MongoDB
Knowledge of ETL techniques and frameworks
Performance tuning of Spark jobs
Experience with Azure Databricks
Ability to lead a team efficiently
Experience designing and implementing Big Data solutions
Practitioner of Agile methodology
A minimal PySpark sketch of the kind of batch work this role involves follows.
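```python
# A minimal PySpark batch job sketching the Spark work this role describes:
# read a source file, aggregate, and write results. The input path and
# column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-agg").getOrCreate()

# Read raw orders from a (hypothetical) CSV landing zone.
orders = spark.read.csv("/data/landing/orders.csv", header=True, inferSchema=True)

# Aggregate revenue per customer; repartitioning before the write is one
# simple, common performance-tuning step for skewed output files.
by_customer = (
    orders.groupBy("customer_id")
          .agg(F.sum("amount").alias("total_amount"),
               F.count("*").alias("order_count"))
)

by_customer.repartition(8).write.mode("overwrite").parquet(
    "/data/curated/orders_by_customer"
)
spark.stop()
```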
Posted 1 week ago
8.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
Remote
Role: Data Engineer
Location: Coimbatore | Hyderabad | Remote
Databricks environment using PySpark
1. 5-8 years as a hands-on Data Engineer.
2. Good data analysis skills are a must.
3. Hands-on experience designing, developing, and maintaining scalable data pipelines.
4. Implementing ETL processes and ensuring data quality and performance within the Databricks environment using PySpark (see the sketch after this listing).
5. Experience with data warehousing concepts, data modelling, and metadata management is a plus.
6. Good communication skills, especially customer-interfacing skills.
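For point 4, here is a minimal sketch of a PySpark data-quality gate of the kind such pipelines typically include: reject a batch containing null keys or duplicates before it lands in the curated layer. The mount paths and column names are hypothetical.

```python
# A minimal PySpark data-quality gate: block bad batches before they are
# written to the curated zone. Paths and columns are placeholders.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dq-gate").getOrCreate()

df = spark.read.parquet("/mnt/raw/customers")  # hypothetical Databricks mount

null_keys = df.filter(df["customer_id"].isNull()).count()
duplicates = df.count() - df.dropDuplicates(["customer_id"]).count()

if null_keys or duplicates:
    raise ValueError(
        f"DQ gate failed: {null_keys} null keys, {duplicates} duplicates"
    )

# Only a clean batch reaches the curated layer.
df.write.mode("overwrite").parquet("/mnt/curated/customers")
spark.stop()
```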
Posted 1 week ago
10.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Job Title: Data Lead
Experience: 8-10 years
Location: Noida
Skill set: Looking for a Data Lead with strong experience in DBT, Snowflake, Azure Cloud, and DevOps. Should have hands-on expertise in data pipeline design, cloud deployments, and managing end-to-end data workflows.
Key Responsibilities:
Lead the design, development, and maintenance of scalable data pipelines and architectures.
Manage end-to-end data workflows, ensuring accuracy, consistency, and security of data.
Implement and optimize data models using DBT and Snowflake (a sketch follows this listing).
Deploy and manage cloud-based data infrastructure on Azure.
Collaborate with DevOps teams to ensure smooth integration and deployment processes.
Monitor data quality, performance, and operational issues, and implement improvements.
Provide technical leadership and mentorship to junior data engineers.
Work closely with stakeholders to understand business requirements and translate them into technical solutions.
Required Skills & Qualifications:
8-10 years of experience in data engineering or related roles.
Strong hands-on experience with DBT, Snowflake, and Azure Cloud.
Solid understanding of DevOps practices and CI/CD pipelines in a cloud environment.
Proficiency in data warehousing, ETL/ELT processes, and data modeling.
Experience with scripting languages such as Python or SQL.
Excellent problem-solving, communication, and stakeholder management skills.
Proven ability to lead and deliver high-impact data projects end-to-end.
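As a sketch of the Snowflake modelling work above, here is a minimal Python example that rebuilds a simple rollup table using the snowflake-connector-python package. The account locator, credentials, and table names are placeholders; in a dbt project the same SELECT would live in a model file and dbt would manage the materialisation.

```python
# A minimal sketch of running a Snowflake transformation step from Python
# (pip install snowflake-connector-python). All identifiers below are
# hypothetical placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    account="xy12345.eu-west-1",   # hypothetical account locator
    user="etl_user",
    password="etl_password",
    warehouse="TRANSFORM_WH",
    database="ANALYTICS",
    schema="STAGING",
)

try:
    cur = conn.cursor()
    # Materialise a simple daily rollup, the kind of model dbt would manage.
    cur.execute("""
        CREATE OR REPLACE TABLE ANALYTICS.MARTS.ORDERS_BY_DAY AS
        SELECT order_date,
               COUNT(*)    AS order_count,
               SUM(amount) AS revenue
        FROM ANALYTICS.STAGING.ORDERS
        GROUP BY order_date
    """)
    print("Rollup rebuilt:", cur.fetchone())
finally:
    conn.close()
```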
Posted 1 week ago
10.0 years
0 Lacs
Ahmedabad, Gujarat, India
Remote
VSD Technologies Pvt. Ltd. - SAP Gold Partner brings an opportunity for all jobseekers.
SAP Datasphere Consultant
Experience: 10+ years
Location: Remote (Qatar time zone, Sunday to Thursday)
Duration: 6-month contract
Notice period: Immediate to 30 days
Kindly send your updated profile in Word format to ram.tripathi@vsdtechno.com, M: +91 9978482390
o Design and implement SAP Datasphere solutions based on clients’ data management and analytics needs.
o Configure and integrate SAP Datasphere with existing data systems and business applications.
o Oversee the deployment of SAP Datasphere, including data integration and storage.
o Develop and recommend data strategies and architectures using SAP Datasphere.
o Analyze and assess clients’ current data landscapes to identify opportunities for optimization.
o Design data models, data pipelines, and data workflows to meet clients’ business requirements.
o Implement data integration processes and ensure data quality and consistency within SAP Datasphere.
o Develop and configure data pipelines for ETL (Extract, Transform, Load) processes.
o Manage data governance, security, and compliance within SAP Datasphere.
o Create and manage dashboards, reports, and visualizations using SAP Datasphere and SAP Analytics Cloud capabilities.
Experience: Minimum of 10 years of experience with SAP data solutions, including SAP Business Warehouse, SAP Analytics Cloud, and SAP Datasphere. Prior consulting experience is a plus.
Technical Skills: Proficiency in SAP Datasphere, SAP HANA, SAP Data Intelligence, and related SAP technologies. Experience with data integration, data warehousing, and analytics tools.
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Oracle Retail Techno-Functional Consultant
Location: Noida/Hyderabad only
Role Summary: The Oracle Retail Techno-Functional Consultant will be responsible for delivering end-to-end solutions across Oracle Retail modules such as RMS, ReIM, ReSA, RPM, and POS. The role requires a blend of functional expertise and technical proficiency to support implementations, enhancements, and support activities.
Key Responsibilities:
Functional Responsibilities:
Analyze business requirements and translate them into Oracle Retail configurations.
Lead workshops and gather requirements for merchandising, pricing, invoice matching, and sales audit.
Configure Oracle Retail modules (RMS, RPM, ReIM, ReSA, POS) to meet business needs.
Prepare functional design documents, test cases, and training materials.
Support UAT, go-live, and post-implementation activities.
Technical Responsibilities:
Develop and customize Oracle Retail applications using PL/SQL, Java, and batch frameworks (a sketch of a simple RMS query follows this listing).
Troubleshoot and resolve technical issues across modules.
Integrate Oracle Retail with external systems using APIs, RIB, or middleware.
Optimize performance of batch jobs and database queries.
Support data migration, ETL, and reporting requirements.
Required Skills & Qualifications:
5+ years of experience with the Oracle Retail Suite (RMS, RPM, ReIM, ReSA, POS).
Strong understanding of retail business processes and data flows.
Proficiency in PL/SQL, Java, and Oracle DB.
Experience with the Oracle Retail Integration Bus (RIB) and batch processing.
Ability to write functional and technical specifications.
Excellent communication and stakeholder management skills.
Preferred Skills:
Experience with Oracle Retail Cloud.
Exposure to Oracle SOA Suite or MuleSoft.
Knowledge of Agile/Scrum methodologies.
Oracle Retail certifications.
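To illustrate the database side of the technical responsibilities, here is a minimal sketch of querying the RMS item master from Python with the python-oracledb driver. The connection details are placeholders; ITEM_MASTER is a standard RMS table, but columns and schema names vary by Oracle Retail version, so treat this as illustrative only.

```python
# A minimal sketch of reading approved items from an RMS schema using
# python-oracledb (pip install oracledb). All connection details are
# hypothetical placeholders.
import oracledb

conn = oracledb.connect(
    user="rms_reader",
    password="rms_password",
    dsn="retaildb.example.com:1521/RMSPDB",  # hypothetical DSN
)

with conn.cursor() as cur:
    cur.execute(
        "SELECT item, item_desc FROM item_master "
        "WHERE status = :status FETCH FIRST 10 ROWS ONLY",
        status="A",  # 'A' = approved items in RMS (version-dependent)
    )
    for item, desc in cur:
        print(item, desc)

conn.close()
```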
Posted 1 week ago