
532 Data Manipulation Jobs - Page 3

Set up a job alert
JobPe aggregates listings for easy access, but you apply directly on the original job portal.

8.0 - 13.0 years

3 - 7 Lacs

Chennai

Work from Office

***SAP SAC Developer - Chennai, India - Hybrid***

For our global client, RED is currently looking for an SAP SAC Developer within the operations domain. The consultant is expected to start ASAP on an initial 12+ month contract with an extension afterwards. The project is hybrid, so 3 days per week onsite in Chennai, India are required.

Must-have skills:
- 8+ years of overall experience in SAP SAC, with at least 2-3 years working on SAP SAC / Datasphere solutions as a Consultant, Analyst, Developer, or in a similar role.
- Proven experience designing and implementing solutions using SAC for planning, forecasting, and KPI reporting, as well as integrating with SAP Datasphere.
- Advanced abilities in SQL, Python, and JavaScript to enable data manipulation and automation within both SAC and Datasphere environments.
- A self-starter who can identify challenges and overcome them.
- English communication is mandatory.

Apply for immediate consideration.

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Gurugram

Work from Office

Data Engineer, Quantitative Finance (Gurgaon, India) - Amethyst Partners

A leading quantitative-driven financial firm is looking for a Data Engineer to join their growing team. This is a highly impactful role where you will be responsible for building scalable, efficient, and reliable data pipelines that power core trading and research operations.

Key Responsibilities

Data Pipeline Development: Design, develop, and maintain robust Python-based data ingestion pipelines for market data and internal sources. Build and manage a unified RPC-based data access library, compatible across research and trading systems. Own the lifecycle of new and existing datasets: acquisition, ingestion, validation, and integration.

Data Quality & Monitoring: Implement automated validation checks for data consistency, completeness, and accuracy. Collaborate with researchers to troubleshoot data anomalies and establish data quality benchmarks.

Vendor & Data Source Management: Evaluate and onboard new data vendors (e.g., sentiment, factor models, fundamentals). Monitor usage and relevance of existing subscriptions to optimize costs and eliminate inefficiencies. Maintain a pipeline of exploratory and potential new data sources.

Collaboration & Documentation: Partner with quant researchers and software teams to integrate data into models and tools. Write comprehensive documentation for datasets, processes, and libraries to enable efficient onboarding and collaboration.

Strategic Impact: Contribute to the long-term evolution of the firm's data stack. Stay up to date with trends in financial data, Python tooling, and infrastructure to bring best practices into the team. Assist in developing proprietary data signals and custom indicators (e.g., sentiment scores).

Key Requirements
- Bachelor's or Master's degree in Computer Science, Engineering, Data Science, or a related technical discipline from a Tier-1 institution.
- 1-3 years of professional experience in data engineering, software development, or a Python-centric technical role.
- Strong coding skills in Python, with experience writing reusable functions, modules, and lightweight APIs.
- Familiarity with libraries like pandas and NumPy, and with tools for time-series data manipulation.
- Understanding of version control (Git) and collaborative development practices.
- Exposure to financial market data (e.g., equities, futures, derivatives) or previous experience in fintech or trading environments.

EA License Number: 20C0180 | Amethyst Partners | info@amethystasiapartners.com
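As a rough sketch of the automated validation checks this posting describes, consider a minimal pandas routine (the `timestamp` and `price` column names are hypothetical, not from the listing):

```python
import pandas as pd

def validate_prices(df: pd.DataFrame) -> list:
    """Run basic consistency/completeness checks on a market-data frame."""
    issues = []
    if df["price"].isna().any():
        issues.append("missing prices")          # completeness check
    if (df["price"] <= 0).any():
        issues.append("non-positive prices")     # consistency check
    if not df["timestamp"].is_monotonic_increasing:
        issues.append("timestamps out of order")  # ordering check
    return issues

df = pd.DataFrame({
    "timestamp": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-03"]),
    "price": [101.5, None, 103.0],
})
print(validate_prices(df))  # ['missing prices']
```

In practice such checks would run automatically after each ingestion step and alert researchers when a benchmark is violated.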

Posted 1 week ago

Apply

6.0 - 10.0 years

8 - 12 Lacs

Hyderabad

Work from Office

Job Description: Reltio (MDM, Java, and Python)

Experience: 6 to 10 years of experience working with Reltio MDM in a professional setting.

Technical Skills:
- Strong understanding of Master Data Management principles and concepts.
- Design, configure, and manage the Reltio data model, including match & merge rules, survivorship rules, and validation rules.
- Manage Reference Data Management (RDM), user management, UI configuration, lifecycle actions, and workflow.
- Develop and optimize data loading/export processes into/from Reltio.
- Work with Reltio Integration Hub to ensure seamless data integration.
- Strong proficiency in SQL for data manipulation and querying.
- Knowledge of Java, Python, or another programming/scripting language for data processing and automation.
- Familiarity with data modelling concepts and an understanding of MDM workflow configurations and role-based data governance.

Soft Skills:
- Excellent analytical and problem-solving skills with keen attention to detail.
- Strong ability to communicate effectively with both technical and non-technical stakeholders.
- Proven ability to work independently and collaborate in a fast-paced environment.

Work Location: PAN India
Shift Timing: UK Shift
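To make the match & merge / survivorship terminology concrete, here is a toy sketch of the concept in plain Python. This is not Reltio's API; a "latest non-null value wins" survivorship rule and invented record fields are assumed:

```python
from datetime import date

# Two source records for the same customer (invented fields and values)
records = [
    {"email": "a@x.com", "name": "A. Kumar", "phone": None,
     "updated": date(2024, 1, 5)},
    {"email": "a@x.com", "name": "Anil Kumar", "phone": "+91-9800000000",
     "updated": date(2024, 3, 2)},
]

def merge(records, match_key="email"):
    """Merge records sharing a match key; the latest non-null value survives."""
    golden = {}
    for rec in sorted(records, key=lambda r: r["updated"]):
        merged = golden.setdefault(rec[match_key], {})
        for field, value in rec.items():
            if value is not None:  # survivorship rule: most recent non-null wins
                merged[field] = value
    return golden

golden = merge(records)
print(golden["a@x.com"]["name"])  # Anil Kumar
```

Real MDM tools add fuzzy matching, per-attribute survivorship strategies, and lineage tracking on top of this basic idea.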

Posted 1 week ago

Apply

6.0 - 12.0 years

8 - 14 Lacs

Hyderabad

Work from Office

JD: MDM Reltio implementation

6 to 12 years of experience in MDM and Reltio environments.

Technical Skills:
- Strong understanding of Master Data Management principles and concepts.
- Design, configure, and manage the Reltio data model, including match & merge rules, survivorship rules, and validation rules.
- Manage Reference Data Management (RDM), user management, UI configuration, lifecycle actions, and workflow.
- Develop and optimize data loading/export processes into/from Reltio.
- Work with Reltio Integration Hub to ensure seamless data integration.
- Strong proficiency in SQL for data manipulation and querying.
- Knowledge of Java, Python, or another programming/scripting language for data processing and automation.
- Familiarity with data modelling concepts and an understanding of MDM workflow configurations and role-based data governance.

Soft Skills:
- Excellent analytical and problem-solving skills with keen attention to detail.
- Strong ability to communicate effectively with both technical and non-technical stakeholders.
- Proven ability to work independently and collaborate in a fast-paced environment.

Shift Timing: UK Shift
Work Location: PAN India

Posted 1 week ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Bengaluru

Work from Office

The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

Responsibilities:
- Design, construct, install, test, and maintain highly scalable data management systems and data pipelines.
- Ensure systems meet business requirements and industry practices.
- Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
- Research opportunities for data acquisition and new uses for existing data.
- Develop dataset processes for data modeling, mining, and production.
- Integrate new data management technologies and software engineering tools into existing structures.
- Create custom software components and analytics applications.
- Install and update disaster recovery procedures.
- Collaborate with data architects, modelers, and IT team members on project goals.
- Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- Proven 5-8 years of experience as a Senior Data Engineer or in a similar role.
- Experience with big data tools: Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc.
- Expert-level SQL skills for data manipulation (DML) and validation (DB2).
- Experience with data pipeline and workflow management tools.
- Experience with object-oriented/object-function scripting languages: Python, Java, Go, etc.
- Strong problem-solving and analytical skills.
- Excellent verbal communication skills.
- Good interpersonal skills.
- Ability to provide technical leadership for the team.
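The SQL data-manipulation (DML) and validation skills this listing asks for can be sketched with Python's built-in sqlite3 module (the table, columns, and business rule below are invented for illustration):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL, status TEXT)")

# DML: insert rows, then update those violating a business rule
cur.executemany(
    "INSERT INTO orders (amount, status) VALUES (?, ?)",
    [(120.0, "new"), (-5.0, "new"), (80.0, "shipped")],
)
cur.execute("UPDATE orders SET status = 'flagged' WHERE amount <= 0")

# Validation: count rows that break the rule (non-positive amounts)
bad = cur.execute("SELECT COUNT(*) FROM orders WHERE amount <= 0").fetchone()[0]
print(bad)  # 1
```

The same pattern (mutate with DML, then verify with a counting query) scales up to the DB2-based validation work the posting mentions.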

Posted 1 week ago

Apply

5.0 - 8.0 years

9 - 14 Lacs

Pune

Work from Office

Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors. We are recognized for our deep transformation execution and delivery.

WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry: projects that will transform the financial services industry.

MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.

#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.

CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.

DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.

Job Title: Big Data Engineer

The Senior Data Engineer will be responsible for designing, developing, and maintaining scalable data pipelines and building out new API integrations to support continuing increases in data volume and complexity. They will collaborate with analytics and business teams to improve data models that feed business intelligence tools, increasing data accessibility and fostering data-driven decision making across the organization.

Responsibilities:
- Design, construct, install, test, and maintain highly scalable data management systems and data pipelines.
- Ensure systems meet business requirements and industry practices.
- Build high-performance algorithms, prototypes, predictive models, and proofs of concept.
- Research opportunities for data acquisition and new uses for existing data.
- Develop dataset processes for data modeling, mining, and production.
- Integrate new data management technologies and software engineering tools into existing structures.
- Create custom software components and analytics applications.
- Install and update disaster recovery procedures.
- Collaborate with data architects, modelers, and IT team members on project goals.
- Provide senior-level technical consulting to peer data engineers during data application design and development for highly complex and critical data projects.

Qualifications:
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
- Proven 5-8 years of experience as a Senior Data Engineer or in a similar role.
- Experience with big data tools: PySpark, Hadoop, Spark, Kafka, Ansible, Chef, Terraform, Airflow, Protobuf RPC, etc.
- Expert-level SQL skills for data manipulation (DML) and validation (DB2).
- Experience with data pipeline and workflow management tools.
- Experience with object-oriented/object-function scripting languages: Python, Java, Go, etc.
- Strong problem-solving and analytical skills.
- Excellent verbal communication skills.
- Good interpersonal skills.
- Ability to provide technical leadership for the team.

Posted 1 week ago

Apply

3.0 - 6.0 years

3 - 6 Lacs

Gurugram

Work from Office

About the Role: A global leader in technology innovation is seeking a highly skilled C++ Developer with a passion for competitive programming and logical reasoning to join our team. This critical role will focus on creating high-quality reasoning datasets to train and improve Large Language Models (LLMs). You will design structured programming challenges and reasoning tasks to enhance the problem-solving capabilities of advanced AI systems. This is an exceptional opportunity for individuals with strong problem-solving expertise to contribute to cutting-edge AI development and directly impact the future of LLMs.

Job Responsibilities:
- Dataset Creation: Design and create structured reasoning tasks rooted in programming challenges, specifically in C++, to effectively train LLMs. This includes defining problem statements, specifying input/output formats, and generating diverse test cases.
- Problem Decomposition: Develop datasets that test and improve an LLM's ability to solve complex, multi-step problems, requiring clear and logical explanations of the solution process. This will involve breaking down problems into smaller, manageable sub-problems.
- Collaboration: Collaborate closely with research scientists and engineers to ensure task objectives align with model training goals and contribute to the overall improvement of the LLM's performance.
- Iteration and Refinement: Incorporate feedback from model performance analysis to iterate on and refine task designs, ensuring they effectively target areas for improvement in the LLM's reasoning and problem-solving abilities.
- Quality Assurance: Maintain consistency and clarity in task descriptions, ensuring they meet high-quality standards for accuracy, completeness, and readability. This includes rigorous testing and validation of the generated datasets.
- Documentation: Create and maintain clear documentation for the designed tasks, including problem descriptions, solution explanations, and any relevant metadata.

Job Requirements:
- Experience: At least 3 years of experience in software development, with a strong focus on C/C++ programming.
- Programming Proficiency: Demonstrable expertise in C/C++ with a strong understanding of data structures and algorithms.
- Analytical Skills: Excellent analytical and problem-solving skills, with the ability to break down complex problems into structured, logical steps.
- Communication Skills: Proficient in English, with excellent written communication skills for crafting clear, concise, and logical explanations of programming solutions and reasoning processes.
- Attention to Detail: Meticulous attention to detail in designing programming and reasoning tasks, ensuring accuracy and consistency.
- Problem-Solving Prowess: A passion for problem-solving and a strong aptitude for logical reasoning.
- Competitive Programming (Preferred): A profile on platforms like LeetCode, HackerRank, Codeforces, or GitHub that demonstrates advanced problem-solving skills and competitive programming experience is a significant plus.
- AI/ML Interest (Preferred): Familiarity with or a strong interest in Artificial Intelligence and Machine Learning, particularly LLMs, is a plus.

Mandatory Skills:
- C++: 3+ years of experience

Bonus Skills (Considered a Plus):
- Experience with other programming languages (e.g., Python).
- Experience with data manipulation and analysis tools.
- Knowledge of software testing methodologies.
- Contributions to open-source projects.

Posted 1 week ago

Apply

4.0 - 9.0 years

15 - 30 Lacs

Hyderabad, Chennai, Bengaluru

Hybrid

Role: Data Analyst (PySpark/SQL)
Location: Bengaluru
Type: Hybrid
Position: Full Time

We are looking for a Data Analyst with strong expertise in PySpark and SQL.

Roles and Responsibilities
Develop expertise in SQL queries for complex data analysis, and troubleshoot issues related to data extraction, manipulation, transformation, mining, processing, wrangling, reporting, modeling, and classification.

Desired Candidate Profile
4-9 years of experience in data analytics with a strong background in the PySpark programming language.
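The PySpark work described here largely amounts to grouped aggregations and SQL-style transforms. A stand-in sketch using pandas (so it runs without a Spark cluster); the rough PySpark analogue of the groupby would be `sales.groupBy("region").agg(F.sum("amount"))`. All data is invented:

```python
import pandas as pd

# Invented sales data; in PySpark this would be a Spark DataFrame
sales = pd.DataFrame({
    "region": ["N", "N", "S", "S"],
    "amount": [10.0, 30.0, 5.0, 15.0],
})

# Grouped aggregation: total amount per region
summary = sales.groupby("region")["amount"].sum()
print(summary["N"], summary["S"])  # 40.0 20.0
```

The same analysis in SQL would be `SELECT region, SUM(amount) FROM sales GROUP BY region`.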

Posted 1 week ago

Apply

5.0 - 7.0 years

17 - 25 Lacs

Haryana

Work from Office

About Company: Founded in 2011, ReNew is one of the largest renewable energy companies globally, with a leadership position in India. Listed on Nasdaq under the ticker RNW, ReNew develops, builds, owns, and operates utility-scale wind energy projects, utility-scale solar energy projects, utility-scale firm power projects, and distributed solar energy projects. In addition to being a major independent power producer in India, ReNew is evolving to become an end-to-end decarbonization partner providing solutions in a just and inclusive manner in the areas of clean energy, green hydrogen, value-added energy offerings through digitalisation, storage, and carbon markets that increasingly are integral to addressing climate change. With a total capacity of more than 13.4 GW (including projects in the pipeline), ReNew's solar and wind energy projects are spread across 150+ sites, with a presence spanning 18 states in India, contributing to 1.9% of India's power capacity. Consequently, this has helped to avoid 0.5% of India's total carbon emissions and 1.1% of India's total power sector emissions. In its over 10 years of operation, ReNew has generated almost 1.3 lakh jobs, directly and indirectly. ReNew has achieved market leadership in the Indian renewable energy industry against the backdrop of the Government of India's policies to promote the growth of this sector. ReNew's current group of stockholders contains several marquee investors including CPP Investments, Abu Dhabi Investment Authority, Goldman Sachs, GEF SACEF, and JERA. Its mission is to play a pivotal role in meeting India's growing energy needs in an efficient, sustainable, and socially responsible manner. ReNew stands committed to providing clean, safe, affordable, and sustainable energy for all and has been at the forefront of leading climate action in India.
Job Description

As a Senior Data Scientist, you will be responsible for providing simple and intuitive solutions to the most complex problems in the renewable industry. This will involve building machine learning and deep learning models coupled with contextual business knowledge. You will also use both management experience and data science expertise to lead analytics projects and mentor junior data scientists. You will be an integral part of conceptualizing, scoping, and delivering complex analytics projects. You will carry out and oversee the development, testing, and maintenance of analytical models that mimic business decisions. You will work with a world-class team of site managers, subject experts, data engineers, and data scientists to build the next-generation data-driven renewable company. Additionally, you will be responsible for continuously identifying opportunities to improve the ways of working and structuring within the analytics center of excellence.

Roles and Responsibilities
- Implement data science solutions, which may involve machine learning or data science techniques, to increase and optimize outputs.
- Assess the effectiveness and accuracy of data sources and data-gathering techniques.
- Manage analytics strategy across multiple use cases.
- Provide analytical expertise in model development, refinement, and implementation across a variety of analytics problems spanning a variety of domains.
- Oversee and mentor a large team of associate and junior-level data scientists, de-bottlenecking issues related to project execution.
- Work closely with translators and business teams to develop and implement analytics solutions.
- Collaborate with the engineering team to integrate data science solutions into production environments and deploy scalable solutions.
- Communicate complex technical concepts and results to non-technical stakeholders in a clear and concise manner.
- Acquire extensive knowledge of renewables industry practices and implement tailor-made techniques to solve problems relevant to the same.
- Promote a culture of continuous learning, sharing knowledge, and staying up to date with emerging technologies and best practices in data science.

Profile & Eligibility Criteria
- Bachelor's degree in Computer Science, Data Science, Statistics, Mathematics, or a related field; Master's degree preferred.
- Proven experience of 4-6 years as a Data Scientist or in a similar role, with a strong track record of delivering successful data science projects.
- Sound theoretical and practical knowledge of statistical analysis, machine learning techniques, and deep learning frameworks, with a solid understanding of regression, classification, NLP, computer vision, and generative AI.
- Proficiency in Python is a must; proficiency in SQL for data manipulation is a plus.
- Experience developing analytics solutions in 3-4 domains.
- Excellent communication skills to effectively manage a team and collaborate with stakeholders in cross-functional teams; ability to communicate with cross-functional roles is a plus.
- Teamwork: displays the ability to work both as an individual and as a self-motivated team player; has worked in large teams with an agile setup in the past.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

About Klook

Klook is Asia's leading platform for experiences and travel services, dedicated to bringing the world closer together through unique and unforgettable experiences. Founded in 2014 by three passionate travelers (Ethan Lin, Eric Gnock Fah, and Bernie Xiong), Klook offers over half a million curated quality experiences in 2,700 destinations worldwide. From iconic attractions to adventurous activities like paragliding, from cultural tours to convenient local travel services, Klook aims to inspire and enable more moments of joy for travelers around the globe.

Do you share our belief in the transformative power of travel? Join our international community of over 1,800 employees across 30+ locations who are not only curating memorable experiences but also co-creating a world of joy within Klook. At Klook, we work hard and play hard, maintaining a high-performing culture guided by our six core values: Customer First, Push Boundaries, Critical Thinking, Build for Scale, Less is More, and Win as One. We constantly strive for excellence and believe in reaching greater heights in the ever-evolving travel industry. If you are passionate about travel and innovation, come be a part of our revolution!

As a Data Scientist in the Pricing Strategy team, you will be instrumental in driving data-driven decision-making and optimizing pricing strategies. Your role will involve developing and implementing dynamic pricing models, predictive and prescriptive analysis, and contributing to revenue growth and market competitiveness.

What You'll Do
- Dynamic Pricing Model Development: Create and implement advanced dynamic pricing models to optimize product pricing across different channels and markets.
- Predictive Analytics: Utilize predictive modeling techniques to forecast demand, market trends, and customer behavior for proactive pricing adjustments.
- Prescriptive Analysis: Use prescriptive analytics to identify optimal pricing strategies aligned with specific business objectives and constraints.
- Data Exploration and Analysis: Conduct thorough data exploration and analysis to extract valuable insights for informing pricing decisions.
- Model Evaluation and Refinement: Continuously assess and refine pricing models to ensure accuracy and effectiveness.
- Collaboration: Work closely with cross-functional teams such as marketing, sales, and finance to align pricing strategies with overall business goals.
- Stay Updated: Keep abreast of the latest advancements in data science and pricing optimization techniques.

What You'll Need
- Master's degree or PhD in Data Science, Statistics, Computer Science, Economics, or a related field.
- 3-4 years of relevant experience in data science.
- Proficiency in Python or R for programming, data manipulation, analysis, and visualization using libraries like pandas, NumPy, Matplotlib, and Seaborn.
- Knowledge of machine learning algorithms (e.g., regression, classification, clustering, time-series analysis) and statistical modeling.
- Familiarity with data warehousing and cloud computing platforms (e.g., AWS, GCP, Azure) is a plus.
- Strong problem-solving and analytical skills.
- Ability to effectively communicate complex technical concepts to both technical and non-technical audiences.
- Passion for data-driven decision-making and a continuous learning mindset.

Klook is committed to being an equal opportunity employer, welcoming talented and passionate individuals from diverse backgrounds. We believe in fostering an inclusive workplace where everyone has an equal opportunity to thrive. Join us in creating a supportive and inclusive culture where every individual belongs.
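As a toy illustration of the dynamic-pricing idea (not Klook's actual models; all numbers invented): fit a linear demand curve to historical price/quantity points by ordinary least squares, then take the revenue-maximizing price analytically:

```python
# Fit demand = a + b * price by ordinary least squares (one feature, closed form)
prices = [8.0, 10.0, 12.0, 14.0]      # historical prices (invented)
units = [120.0, 100.0, 80.0, 60.0]    # units sold at each price (invented)

n = len(prices)
mean_p = sum(prices) / n
mean_u = sum(units) / n
b = sum((p - mean_p) * (u - mean_u) for p, u in zip(prices, units)) / sum(
    (p - mean_p) ** 2 for p in prices
)
a = mean_u - b * mean_p

# Revenue R(p) = p * (a + b * p) is a downward parabola for b < 0,
# maximized at p* = -a / (2 * b)
best_price = -a / (2 * b)
print(a, b, best_price)  # 200.0 -10.0 10.0
```

A production system would replace the linear fit with demand forecasts per channel and market, and add business constraints on top of the optimization.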

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Karnataka

On-site

At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY's Financial Services Office (FSO) is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional capability and product knowledge. The FSO practice offers integrated advisory services to financial institutions and other capital markets participants, including commercial banks, investment banks, broker-dealers, asset managers, insurance and energy trading companies, and Corporate Treasury functions of leading Fortune 500 Companies. The service offerings include market, credit, and operational risk management, regulatory advisory, quantitative advisory, technology enablement, and more. Within EY's FSO Advisory Practice, the Financial Services Risk Management (FSRM) group provides solutions to help clients identify, measure, manage, and monitor market, credit, operational, and regulatory risks associated with trading, asset-liability management, and capital markets activities. The Credit Risk (CR) team within FSRM assists clients in designing and implementing strategic and functional changes across risk management within banking book portfolios of large domestic and global financial institutions. Key Responsibilities: - Demonstrate deep technical capabilities and industry knowledge of financial products, particularly lending products. - Stay informed about market trends and demands in the financial services sector and issues faced by clients. - Monitor progress, manage risk, and communicate effectively with key stakeholders. - Mentor junior consultants and review tasks completed by them. 
- Work on projects involving model audits, validation, and development activities.

Qualifications, Certifications, and Education:

Must-have:
- Postgraduate degree in accounting, finance, economics, statistics, or a related field with at least 3 years of related work experience.
- Understanding of climate risk models, ECL, stress testing, and regulatory requirements related to credit risk.
- Knowledge of credit risk and risk analytics techniques.
- Hands-on experience in data preparation, manipulation, and consolidation.
- Strong documentation skills and ability to summarize key details effectively.
- Proficiency in statistics and econometrics, and technical skills in advanced Python, SAS, SQL, R, and Excel.

Good-to-have:
- Certifications such as FRM, CFA, PRM, or SCR.
- Experience in data/business intelligence reporting and knowledge of machine learning models.
- Willingness to travel and previous project management experience.

EY exists to build a better working world, creating long-term value for clients, people, and society while building trust in capital markets. EY teams across 150 countries provide trust through assurance and help clients grow, transform, and operate in various areas such as assurance, consulting, law, strategy, tax, and transactions, addressing complex issues globally.
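For context on the ECL requirement above: a single-period expected credit loss is commonly computed as ECL = PD × LGD × EAD (probability of default × loss given default × exposure at default). A minimal sketch with hypothetical loan parameters:

```python
def expected_credit_loss(pd_: float, lgd: float, ead: float) -> float:
    """Single-period, undiscounted ECL = PD * LGD * EAD."""
    return pd_ * lgd * ead

# Hypothetical loan: 2% probability of default, 45% loss given default,
# 1,000,000 exposure at default
ecl = expected_credit_loss(0.02, 0.45, 1_000_000)
print(round(ecl, 2))  # 9000.0
```

IFRS 9-style models layer staging, lifetime horizons, discounting, and forward-looking scenarios on top of this basic product.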

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Uttar Pradesh

On-site

As a Data Science Instructor on our dynamic team, you will play a crucial role in preparing students for the evolving data science landscape. Your primary focus will be on delivering immersive, hands-on learning experiences that equip future data professionals with practical skills in SQL, Python, machine learning, and real-world project development.

Your responsibilities will include developing and conducting engaging lessons and workshops covering core data science topics, with a notable emphasis on SQL and Python programming. You will guide students through foundational machine learning models such as linear regression, logistic regression, decision trees, and random forests, ensuring they comprehend both the theoretical concepts and practical applications. Moreover, you will design and facilitate hands-on projects to help students apply their knowledge effectively and build a robust portfolio of work. Your ability to explain complex technical concepts clearly and concisely will be pivotal in making the material accessible to a diverse group of learners. As a mentor, you will provide constructive feedback and guidance to support students in overcoming challenges and deepening their understanding. You will also collaborate with our team to continuously refine our curriculum so that it stays relevant and cutting-edge.

To excel in this position, you must have proven experience as a Data Scientist, Machine Learning Engineer, or in a similar role, backed by a strong portfolio that demonstrates your expertise in Python and machine learning projects. Proficiency in Python for data analysis, manipulation, and model building, along with practical experience with common machine learning models, is essential, as is proficiency in SQL for data querying and management. Your ability to design and lead hands-on projects that foster practical skill development, coupled with strong communication and presentation skills, will be key to your success.

Preferred qualifications include familiarity with Python libraries like Streamlit, experience with Natural Language Processing (NLP) concepts, exposure to Large Language Models (LLMs) or chatbot development, and prior experience in teaching, mentoring, or training roles. If you are enthusiastic about empowering the next generation of data scientists and possess a proven track record of building and explaining data science projects, we encourage you to reach out to us at hr@ndmit.com. Join us in shaping the future of data science education!
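One example of the kind of from-scratch lesson described above (illustrative toy data, not the school's curriculum): logistic regression, one of the listed foundational models, trained by plain gradient descent:

```python
import math

# Toy data: hours studied -> passed the exam (1) or not (0)
X = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
y = [0, 0, 0, 1, 1, 1]

w, b, lr = 0.0, 0.0, 0.1
for _ in range(2000):                             # gradient descent on log-loss
    for xi, yi in zip(X, y):
        p = 1 / (1 + math.exp(-(w * xi + b)))     # sigmoid prediction
        w -= lr * (p - yi) * xi                   # dL/dw = (p - y) * x
        b -= lr * (p - yi)                        # dL/db = (p - y)

def predict(x):
    return 1 / (1 + math.exp(-(w * x + b))) > 0.5

print(predict(1.0), predict(6.0))  # False True
```

Deriving the two gradient lines from the log-loss by hand is exactly the kind of bridge between theory and practice the role calls for.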

Posted 1 week ago

Apply

1.0 - 5.0 years

0 Lacs

Karnataka

On-site

As a GenAI Data Scientist at PwC US - Acceleration Center, you will be responsible for developing and implementing machine learning models and algorithms for GenAI projects. Your role will involve collaborating with product, engineering, and domain experts to identify high-impact opportunities; designing and building GenAI and agentic AI solutions; processing structured and unstructured data for LLM workflows; validating and evaluating models; containerizing and deploying production workloads; communicating findings through various media; and staying updated with GenAI advancements.

You should possess a Bachelor's or Master's degree in Computer Science, Data Science, Statistics, or a related field, along with 1-2 years of hands-on experience delivering GenAI solutions and 3-5 years of deploying machine learning solutions in production environments. Proficiency in Python, experience with vector stores and search technologies, familiarity with LLM-backed agent frameworks, expertise in data preprocessing and feature engineering, competence with cloud services, a solid grasp of Git workflows and CI/CD pipelines, and proficiency in data visualization are essential requirements.

Additionally, relevant certifications in GenAI tools, hands-on experience with leading agent orchestration platforms, expertise in chatbot design, practical knowledge of ML/DL frameworks, and proficiency in object-oriented programming with languages like Java, C++, or C# are considered nice-to-have skills.

The ideal candidate has strong problem-solving skills, a collaborative mindset, and the ability to thrive in a fast-paced environment. If you are passionate about leveraging data to drive insights and informed business decisions, this role offers an exciting opportunity to contribute to cutting-edge GenAI projects and drive innovation in the field of data science.

Posted 1 week ago

Apply

0.0 - 4.0 years

0 Lacs

Hyderabad, Telangana

On-site

The Senior Associate Consultant at Ryan plays a crucial role in providing client engagement support and coordination by assisting team members with various tasks. You will ensure that all support needs are met, both for engagement and non-engagement tasks. Your responsibilities include offering basic administrative support, being available for overtime work, and traveling as necessary to support projects at client sites. The duties outlined below are fundamental and may vary based on the specific practice area you are assigned to. Your key responsibilities align with Ryan's Key Results:

People:
- Foster a positive team environment by organizing and prioritizing tasks from multiple team members, meeting deadlines, and seeking assistance when necessary.
- Maintain a professional and positive attitude, exhibit teamwork, multitasking abilities, and adaptability to changing priorities.

Client:
- Provide proactive updates to the US/India liaison on work status.
- Address client inquiries and requests from tax authorities.
- Conduct research on clients and industries for team members.
- Manage calendar appointments and deadlines to ensure accountability and observe client deadlines.
- Assist the engagement team in preparing and distributing client deliverables.
- Create files for clients and projects using Microsoft Excel and Access.
- Handle workpapers by downloading, printing, organizing, scanning, formatting, coding, and mapping client data into databases.

Value:
- Prepare e-mails, memos, letters, and confirmation requests.
- Collect required signatures on forms and letters.
- Take accurate messages, make travel arrangements, and communicate effectively while the team is traveling.
- Track and report time and expenses in detail for yourself and the Manager when necessary.
- Work efficiently in a deadline-driven environment, maintaining accuracy and confidentiality.
- Learn new procedures quickly; bring analytical skills, research capabilities, and problem-solving aptitude.
- Follow instructions meticulously, with strong grammar, spelling, and proofreading skills.
- Be willing to work overtime, travel independently, and assist with additional projects as assigned.

Education and Experience:
- A four-year college degree from an accredited institution is required.
- Direct hires into this position must hold a degree in Accounting, Finance, Economics, or a relevant field with the minimum number of Accounting hours required.
- Overall GPA should be at least 2.80.
- Promotions from Associate Consultant require the necessary Accounting hours for entry-level Consultants of the assigned practice area.

Computer Skills:
- Proficiency in Microsoft Word, Access, Excel, Outlook, and Internet navigation and research is essential.

Certificates and Licenses:
- A valid driver's license is mandatory.

Supervisory Responsibilities:
- This position does not involve supervisory responsibilities.

Work Environment:
- You will mostly work in a standard indoor office setting.
- Occasional extended periods of sitting and standing while working.
- Regular interaction with employees at all levels and external vendors.
- Travel independently up to 50%.
- Expectation of a standard 40+ hour workweek.

Ryan is an Equal Opportunity Employer, committed to diversity, inclusion, and providing equal opportunities for individuals with disabilities and veterans.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

The global startup Analytics and Performance Management team is seeking a motivated Analytics Specialist - Procurement and IT to fulfill the reporting and analytics requirements of the Procurement and IT business functions. As an Analytics Specialist, your primary responsibility will be creating Power BI reports and data models to meet the analytics needs of those functions. You will design intuitive dashboards with compelling data-storytelling elements and generate reports that enable users to easily comprehend and retain information.

You will also play a crucial role in suggesting effective UI/UX designs and supporting the team in developing the advanced Power BI reports essential for performance management, using DAX and Power Query where necessary to deliver standard reporting solutions. Your proficiency in acquiring data from various sources such as flat files and XLSX, coupled with data transformation skills in Python, will help the team build efficient reporting solutions.

The ideal candidate should hold a Bachelor's or Master's degree in computer science or a related field, along with a minimum of 6 years of professional experience, including at least 5 years of expertise in Power BI dashboard creation and data visualization and at least 2 years of experience in Python.

Key Knowledge/Skills:
- Proficiency in DAX, Power Query, and data modeling within Power BI
- Familiarity with data visualization best practices and principles of user experience design
- Working knowledge of UI/UX design to implement interactive dashboards
- Ability to craft engaging narratives through visualization capabilities
- Strong understanding of Procurement and IT processes
- Intermediate skills in Python for data manipulation and analysis

Preferred Qualifications:
- Excellent communication skills, capable of presenting complex data insights to non-technical stakeholders effectively
- Ability to manage, influence, negotiate, and communicate with internal business partners to address organizational capacity requirements
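The Python-based data transformation this role describes often amounts to reshaping flat-file extracts into the long format Power BI data models prefer. A minimal pandas sketch, with hypothetical supplier-spend columns invented for illustration:

```python
import pandas as pd

# Hypothetical flat-file extract with one column per month (wide format),
# as Procurement/IT source systems often deliver it.
wide = pd.DataFrame({
    "supplier": ["Acme", "Globex"],
    "Jan": [100, 250],
    "Feb": [120, 240],
})

# Unpivot to long format: one row per supplier/month pair,
# analogous to Power Query's "Unpivot Columns" step.
long = wide.melt(id_vars="supplier", var_name="month", value_name="spend")
print(long)
```

The resulting three-column table loads cleanly into a Power BI model with a proper date dimension, rather than hard-coding one measure per month.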

Posted 1 week ago

Apply

5.0 - 9.0 years

9 - 13 Lacs

Gurugram, Bengaluru

Work from Office

Data Analyst for creating financial reports. Must know Alteryx, SQL, and HANA. You will create reports in HANA or Hyperion, use Alteryx for workflow automation, and apply strong SQL knowledge along with Python libraries such as pandas and NumPy.
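The pandas/NumPy work in a financial-reporting role like this one typically means aggregating ledger extracts into report tables. A minimal sketch; the cost-center and period columns are invented for illustration, and real data would arrive from HANA via SQL:

```python
import numpy as np
import pandas as pd

# Hypothetical ledger extract (real data would come from a HANA query).
ledger = pd.DataFrame({
    "cost_center": ["CC10", "CC10", "CC20", "CC20"],
    "period": ["2024-01", "2024-02", "2024-01", "2024-02"],
    "amount": [1200.0, 1350.0, 800.0, np.nan],
})

# Treat missing postings as zero, then pivot into a cost-center x period report.
ledger["amount"] = ledger["amount"].fillna(0.0)
report = ledger.pivot_table(index="cost_center", columns="period",
                            values="amount", aggfunc="sum")
report["total"] = report.sum(axis=1)
print(report)
```

The same pivot could be built as an Alteryx Cross Tab tool; doing it in pandas keeps the step scriptable and version-controlled.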

Posted 1 week ago

Apply

7.0 - 12.0 years

20 - 35 Lacs

Hyderabad, Bengaluru

Hybrid

Job Role: Backend and Data Pipeline Engineer
Location: Hyderabad/Bangalore (Hybrid)
Job Type: Full-time
**Immediate joiners only (0-15 days)**

Job Summary:

The Team: We're investing in technology to develop new products that help our customers drive their growth and transformation agenda. These include new data integration, advanced analytics, and modern applications that address new customer needs and are highly visible and strategic within the organization. Do you love building products on platforms at scale while leveraging cutting-edge technology? Do you want to deliver innovative solutions to complex problems? If so, be part of our mighty team of engineers and play a key role in driving our business strategies.

The Impact: We stand at the crossroads of innovation through data products, bringing a competitive advantage to our business through the delivery of automotive forecasting solutions. Your work will contribute to the growth and success of our organization and provide valuable insights to our clients.

What's in it for you: We are looking for an innovative and mission-driven software/data engineer to make a significant impact by designing and developing AWS cloud-native solutions that enable analysts to forecast long- and short-term trends in the automotive industry. This role requires cutting-edge data and cloud-native technical expertise as well as the ability to work independently in a fast-paced, collaborative, and dynamic work environment.

Responsibilities:
- Design, develop, and maintain scalable data pipelines, including complex algorithms
- Build and maintain UI backend services using Python, C#, or similar, ensuring responsiveness and high performance
- Ensure data quality and integrity through robust validation processes
- Apply a strong understanding of data integration and data modeling concepts
- Lead data integration projects and mentor junior engineers
- Collaborate with cross-functional teams to gather data requirements
- Collaborate with data scientists and analysts to optimize data flow and storage for advanced analytics
- Take ownership of the modules you work on, deliver on time and with quality, and follow software development best practices
- Utilize Redis for caching and data storage solutions to enhance application performance

What We're Looking For:
- Bachelor's degree in computer science or a related field
- Strong analytical and problem-solving skills
- 7+ years of experience in data engineering/advanced analytics
- Proficiency in Python and experience with Flask for backend development
- Strong knowledge of object-oriented programming
- AWS proficiency is a big plus: ECR, containers
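The "robust validation processes" a pipeline role like this refers to often start with a simple record-level gate before data is loaded. A minimal Python sketch; the field names and rules are illustrative, not this team's actual checks:

```python
def validate_records(records, required=("id", "price")):
    """Split records into (valid, rejected) using simple integrity rules:
    required fields must be present and non-null, and price must be
    non-negative. Rejected rows are kept for error reporting, not dropped."""
    valid, rejected = [], []
    for rec in records:
        if any(rec.get(f) is None for f in required) or rec.get("price", 0) < 0:
            rejected.append(rec)
        else:
            valid.append(rec)
    return valid, rejected

rows = [
    {"id": 1, "price": 9.5},    # passes both rules
    {"id": 2, "price": -1.0},   # negative price
    {"id": None, "price": 3.0}, # missing required field
]
valid, rejected = validate_records(rows)
print(len(valid), len(rejected))  # prints: 1 2
```

Keeping the rejected rows (rather than silently dropping them) is what makes downstream error handling and quality dashboards possible.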

Posted 1 week ago

Apply

5.0 - 10.0 years

16 - 20 Lacs

Bengaluru

Work from Office

Project description:
We are seeking a highly skilled Data Modelling Expert with deep experience in the Avaloq Core Banking platform to join our technology team. The ideal candidate will be responsible for designing, maintaining, and optimizing complex data models that support our banking products, client data, and regulatory reporting needs. This role requires a blend of domain expertise in banking and wealth management, strong data architecture capabilities, and hands-on experience working with the Avaloq platform.

Responsibilities:
- Design, implement, and maintain conceptual, logical, and physical data models within the Avaloq Core Banking system.
- Collaborate with business analysts, product owners, and Avaloq parameterisation teams to translate business requirements into robust data models.
- Ensure alignment of data models with Avaloq's object model and industry best practices.
- Perform data profiling, quality checks, and lineage tracing to support regulatory and compliance requirements (e.g., Basel III, MiFID II, ESG).
- Support integration of Avaloq data with downstream systems (e.g., CRM, data warehouses, reporting platforms).
- Provide expert input on data governance, metadata management, and model documentation.
- Contribute to change requests, upgrades, and data migration projects involving Avaloq.
- Collaborate with cross-functional teams to drive data consistency, reusability, and scalability.
- Review and validate existing data models, identifying gaps or optimisation opportunities.
- Ensure data models meet performance, security, and privacy requirements.

Must-have skills:
- Proven experience (5+ years) in data modelling or data architecture, preferably within financial services.
- 3+ years of hands-on experience with the Avaloq Core Banking Platform, especially its data structures and object model.
- Strong understanding of relational databases and data modelling tools (e.g., ER/Studio, ERwin, or similar).
- Proficiency in SQL and data manipulation in Avaloq environments.
- Knowledge of banking products, client lifecycle data, and regulatory data requirements.
- Familiarity with data governance, data quality, and master data management concepts.
- Experience working in Agile or hybrid project delivery environments.

Nice-to-have skills:
- Exposure to Avaloq Scripting or parameterisation is a strong plus.
- Experience integrating Avaloq with data lakes, BI/reporting tools, or regulatory platforms.
- Understanding of data privacy regulations (GDPR, FINMA, etc.).
- Certification in Avaloq or relevant financial data management domains is advantageous.
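The data profiling and quality checks this role lists usually come down to plain SQL over client and product tables. A minimal sketch using SQLite as a stand-in; the table and column names are hypothetical and do not reflect Avaloq's actual object model:

```python
import sqlite3

# In-memory SQLite stands in for the real database here.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE client (id INTEGER PRIMARY KEY, domicile TEXT);
    INSERT INTO client VALUES (1, 'CH'), (2, NULL), (3, 'DE');
""")

# Profile completeness of a regulatory-relevant attribute:
# count total rows and rows where the attribute is missing.
total, missing = conn.execute(
    "SELECT COUNT(*), SUM(CASE WHEN domicile IS NULL THEN 1 ELSE 0 END) "
    "FROM client"
).fetchone()
print(f"domicile completeness: {(total - missing) / total:.0%}")
```

Completeness ratios like this, computed per attribute and tracked over time, are a common starting point for the regulatory data-quality reporting (Basel III, MiFID II) the posting mentions.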

Posted 1 week ago

Apply

15.0 - 20.0 years

4 - 8 Lacs

Bengaluru

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Snowflake Data Warehouse
Good-to-have skills: Amazon Web Services (AWS), Data Warehouse ETL Testing, Oracle Procedural Language Extensions to SQL (PL/SQL)
Minimum experience required: 7.5 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to effectively migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and provide innovative solutions to enhance data accessibility and usability.

Roles & Responsibilities:
- Expected to be an SME.
- Collaborate with and manage the team to perform.
- Responsible for team decisions.
- Engage with multiple teams and contribute to key decisions.
- Provide solutions to problems for the immediate team and across multiple teams.
- Mentor junior team members to enhance their skills and knowledge in data engineering.
- Continuously evaluate and improve data processes to optimize performance and efficiency.

Professional & Technical Skills:
- Must have: proficiency in Snowflake Data Warehouse.
- Good to have: experience with Oracle PL/SQL, Amazon Web Services (AWS), and Data Warehouse ETL Testing.
- Strong understanding of data modeling and database design principles.
- Experience with data integration tools and techniques.
- Proficiency in writing complex SQL queries for data manipulation and analysis.

Additional Information:
- The candidate should have a minimum of 7.5 years of experience in Snowflake Data Warehouse.
- This position is based at our Bengaluru office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

3.0 - 8.0 years

4 - 8 Lacs

Hyderabad

Work from Office

Project Role: Data Engineer
Project Role Description: Design, develop, and maintain data solutions for data generation, collection, and processing. Create data pipelines, ensure data quality, and implement ETL (extract, transform, and load) processes to migrate and deploy data across systems.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: NA
Minimum experience required: 3 years
Educational Qualification: 15 years of full-time education

Summary: As a Data Engineer, you will design, develop, and maintain data solutions that facilitate data generation, collection, and processing. Your typical day will involve creating data pipelines, ensuring data quality, and implementing ETL processes to migrate and deploy data across various systems. You will collaborate with cross-functional teams to understand data requirements and contribute to the overall data strategy of the organization, ensuring that data is accessible, reliable, and actionable for stakeholders.

Roles & Responsibilities:
- Expected to perform independently and become an SME.
- Active participation and contribution in team discussions is required.
- Contribute to providing solutions for work-related problems.
- Assist in the design and implementation of data architecture and data models.
- Monitor and optimize data pipelines for performance and reliability.

Professional & Technical Skills:
- Must have: proficiency in the Databricks Unified Data Analytics Platform.
- Strong understanding of data integration techniques and ETL processes.
- Experience with data quality frameworks and data governance practices.
- Familiarity with cloud platforms and services related to data storage and processing.
- Knowledge of programming languages such as Python or Scala for data manipulation.

Additional Information:
- The candidate should have a minimum of 3 years of experience in the Databricks Unified Data Analytics Platform.
- This position is based at our Hyderabad office.
- 15 years of full-time education is required.

Posted 1 week ago

Apply

3.0 - 5.0 years

15 - 25 Lacs

Noida

Work from Office

We are looking for an experienced Data Engineer with strong expertise in Databricks and Azure Data Factory (ADF) to design, build, and manage scalable data pipelines and integration solutions. The ideal candidate will have a solid background in big data technologies, cloud platforms, and data processing frameworks to support enterprise-level data transformation and analytics initiatives.

Roles and Responsibilities:
- Design, develop, and maintain robust data pipelines using Azure Data Factory and Databricks.
- Build and optimize data flows and transformations for structured and unstructured data.
- Develop scalable ETL/ELT processes to extract data from various sources, including SQL, APIs, and flat files.
- Implement data quality checks, error handling, and performance tuning of data pipelines.
- Collaborate with data scientists, analysts, and business stakeholders to understand data requirements.
- Work with Azure services such as Azure Data Lake Storage (ADLS), Azure Synapse Analytics, and Azure SQL.
- Participate in code reviews, version control, and CI/CD processes.
- Ensure data security, privacy, and compliance with governance standards.

Required Skills:
- Strong hands-on experience with Azure Data Factory and Azure Databricks (Spark-based development).
- Proficiency in Python, SQL, and PySpark for data manipulation.
- Experience with Delta Lake, data versioning, and streaming/batch data processing.
- Working knowledge of Azure services such as ADLS, Azure Blob Storage, and Azure Key Vault.
- Familiarity with DevOps, Git, and CI/CD pipelines in data engineering workflows.
- Strong understanding of data modeling, data warehousing, and performance tuning.
- Excellent analytical, communication, and problem-solving skills.
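The Delta Lake experience this posting asks for centers on merge/upsert semantics: incoming batch rows update matching keys and insert new ones. A plain-Python sketch of the idea, with dictionaries standing in for tables (in Databricks this would be a `MERGE INTO` statement or `DeltaTable.merge`):

```python
def upsert(target, updates, key="id"):
    """Merge incoming rows into the target keyed by `key`: rows with a
    matching key replace the existing row, new keys are inserted.
    This mirrors a Delta Lake upsert at the level of a toy example."""
    merged = dict(target)  # copy so the original "table version" is kept
    for row in updates:
        merged[row[key]] = row
    return merged

target = {1: {"id": 1, "qty": 5}, 2: {"id": 2, "qty": 7}}
batch = [{"id": 2, "qty": 9}, {"id": 3, "qty": 1}]  # one update, one insert
result = upsert(target, batch)
print(sorted(result))  # [1, 2, 3]
```

Keeping the pre-merge `target` intact is the dictionary-level analogue of the data versioning ("time travel") the posting also lists.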

Posted 1 week ago

Apply

1.0 - 3.0 years

3 - 5 Lacs

Pune

Work from Office

Master Data Management role at Pune. Any Graduate/Postgraduate with 2 years of experience in Data Validation, Data Manipulation, Data Cleaning, and Data Analysis. Excellent communication required. ONLY IMMEDIATE JOINERS. Salary up to 5 LPA. Call Rukhsar: 9899875055, Roshan: 9899078782

Posted 1 week ago

Apply

3.0 - 6.0 years

20 - 27 Lacs

Bengaluru

Remote

UiPath Automation Development:
- Collaborate with cross-functional teams to identify automation possibilities within identity and access management (IDAM) processes.
- Design, develop, test, and deploy UiPath automation solutions to streamline and automate manual IDAM tasks.
- Create optimized workflows using UiPath Studio and Orchestrator.

SAP GRC Integration:
- Utilize expertise in SAP GRC to seamlessly integrate UiPath automation solutions into existing SAP GRC processes.
- Ensure alignment with SAP GRC standards, policies, and procedures while developing UiPath-based automation.

Requirements Gathering:
- Work closely with business stakeholders, IDAM experts, and IT teams to understand and document requirements for automation projects.
- Translate business requirements into technical specifications and design documents for UiPath automation solutions integrated with SAP GRC.

Process Enhancement:
- Analyze current IDAM processes and identify opportunities for optimization and automation, focusing on SAP GRC-related aspects.
- Provide recommendations for process improvements and automation strategies, enhancing IDAM lifecycle efficiency.

Testing and Quality Assurance:
- Develop comprehensive testing plans and conduct rigorous testing of UiPath automation workflows, considering SAP GRC interactions.
- Address any issues during the development and testing phases to ensure reliable and accurate automation.

Documentation and Communication:
- Maintain detailed documentation of automation solutions, encompassing design blueprints, architecture diagrams, processes, and troubleshooting guidelines.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

As a Data Engineer, you will be responsible for designing, building, and maintaining scalable ETL pipelines using Java and SQL-based frameworks. Your role involves extracting data from various structured and unstructured sources, transforming it into formats suitable for analytics and reporting, and collaborating with data scientists, analysts, and business stakeholders to gather data requirements and optimize data delivery. Additionally, you will develop and maintain data models, databases, and data integration solutions, while monitoring data pipelines and troubleshooting data issues to ensure data quality and integrity.

Your expertise in Java for backend/ETL development and proficiency in SQL for data manipulation, querying, and performance tuning will be crucial in this role. You should have hands-on experience with ETL tools such as Apache NiFi, Talend, Informatica, or custom-built ETL pipelines, along with familiarity with relational databases like PostgreSQL, MySQL, and Oracle, and data warehousing concepts. Experience with version control systems like Git is also required. Furthermore, you will be responsible for optimizing data flow and pipeline architecture for performance and scalability, documenting data flow diagrams, ETL processes, and technical specifications, and ensuring adherence to security, governance, and compliance standards related to data.

To qualify for this position, you should hold a Bachelor's degree in Computer Science, Information Systems, Engineering, or a related field, along with at least 5 years of professional experience as a Data Engineer or in a similar role. Your strong technical skills and practical experience in data engineering will be essential in successfully fulfilling the responsibilities of this role.

Posted 1 week ago

Apply

6.0 - 10.0 years

15 - 19 Lacs

Gurugram

Hybrid

Job Details:
Role: Tableau Developer
Experience: 5-10 years
Location: Gurugram
Notice Period: Immediate to 30 days

Education and Experience:
- Expertise in visual analytics and design principles
- 5+ years of experience in: Tableau development and designing dashboards/decision enablement tools; Tableau Desktop and Server platforms; SQL programming; ETL
- A rich portfolio of design use cases demonstrating excellent user experience
- Experience working in an agile development environment, including rapid prototyping during sprints

Competencies:
- Proficiency in Tableau Desktop and Server platforms
- End-to-end testing of Tableau dashboards for data accuracy and feature functionality
- Visual design expertise and understanding of dashboard best practices
- Advanced Tableau skills, including complex calculations, LOD expressions, and action filters
- SQL query writing experience
- Experience in optimizing Tableau Server dashboard performance
- Ability to design custom landing pages for an enhanced user experience on Tableau Server

If interested, please send your updated CV to divya@beanhr.com

Posted 1 week ago

Apply