8.0 - 10.0 years
4 - 5 Lacs
Chennai
On-site
Mandatory Skills:
- 8-10 years of experience.
- Strong proficiency in Python and SQL, with experience in data processing libraries (e.g., Pandas, PySpark).
- Familiarity with Generative AI frameworks such as LangChain, LangGraph, or similar tools.
- Experience integrating APIs from pre-trained AI models (e.g., OpenAI, Cohere, Hugging Face).
- Solid understanding of data structures, algorithms, and distributed systems.
- Experience with vector databases (e.g., Pinecone, Postgres).
- Familiarity with prompt engineering and chaining AI workflows.
- Understanding of MLOps practices for deploying and monitoring AI applications.
- Strong problem-solving skills and the ability to work in a collaborative environment.

Good to have: Experience with Streamlit for building application front-ends.

Job Description
We are looking for an experienced Python Developer with expertise in Spark, SQL, data processing, and building Generative AI applications. The ideal candidate will focus on leveraging existing AI models and frameworks (e.g., LangChain, LangGraph) to create innovative, data-driven solutions. This role does not involve designing new AI models, but rather integrating and utilizing pre-trained models to solve real-world problems.

Key Responsibilities:
- Develop and deploy Generative AI applications using Python and frameworks like LangChain or LangGraph.
- Work with large-scale data processing frameworks such as Apache Spark and SQL to prepare and manage data pipelines.
- Integrate pre-trained AI models (e.g., OpenAI, Hugging Face, Llama) into scalable applications.
- Understand ML and NLP concepts and algorithms, with exposure to Scikit-learn and PyTorch.
- Collaborate with data engineers and product teams to design AI-driven solutions.
- Optimize application performance and ensure scalability in production environments.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody.
When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence. Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by applicable law. All employment is decided on the basis of qualifications, merit, and business need.
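The "prompt engineering and chaining AI workflows" skill called out above can be sketched in plain Python. This is a minimal illustration only: the stub function below stands in for a real provider API call (OpenAI, Cohere, etc.), and all prompt templates and names are hypothetical.

```python
# Minimal sketch of chaining two prompt steps around a pre-trained model.
# fake_llm is a stub standing in for a real hosted-model API call.

def fake_llm(prompt: str) -> str:
    # Stub: a real implementation would call a provider API here.
    if "Summarize" in prompt:
        return "sales rose 12% in Q3"
    return "POSITIVE"

def run_chain(text: str) -> dict:
    # Step 1: summarize the raw input text.
    summary = fake_llm(f"Summarize in one line: {text}")
    # Step 2: feed the first step's output into a second prompt.
    sentiment = fake_llm(f"Classify the sentiment of: {summary}")
    return {"summary": summary, "sentiment": sentiment}

result = run_chain("Quarterly report: sales increased 12% in Q3")
print(result["sentiment"])  # POSITIVE
```

Frameworks like LangChain wrap exactly this pattern — templated prompts whose outputs feed subsequent steps — with retries, tracing, and provider integrations on top.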
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
Andhra Pradesh
On-site
ABOUT EVERNORTH:
Evernorth℠ exists to elevate health for all, because we believe health is the starting point for human potential and progress. As champions for affordable, predictable and simple health care, we solve the problems others don't, won't or can't. Our innovation hub in India will allow us to work with the right talent, expand our global footprint, improve our competitive stance, and better deliver on our promises to stakeholders. We are passionate about making healthcare better by delivering world-class solutions that make a real difference. We are always looking upward. And that starts with finding the right talent to help us get there.

Position Overview
Excited to grow your career? This position's primary responsibility will be to translate software requirements into functions using Mainframe, ETL, and Data Engineering technologies, with expertise in Databricks and database technologies. This position offers the opportunity to work on modernizing legacy systems, contribute to cloud infrastructure automation, and support production systems in a fast-paced, agile environment. You will work across multiple teams and technologies to ensure reliable, high-performance data solutions that align with business goals.

As a Mainframe & ETL Engineer, you will be responsible for the end-to-end development and support of data processing solutions using tools such as Talend, Ab Initio, AWS Glue, and PySpark, with significant work on Databricks and modern cloud data platforms. You will support infrastructure provisioning using Terraform, assist in modernizing legacy systems including mainframe migration, and contribute to performance tuning of complex SQL queries across multiple database platforms including Teradata, Oracle, Postgres, and DB2. You will also be involved in CI/CD practices.

Responsibilities
- Support, maintain and participate in the development of software utilizing technologies such as COBOL, DB2, CICS and JCL.
- Support, maintain and participate in the ETL development of software utilizing technologies such as Talend, Ab Initio, Python, and PySpark on Databricks.
- Work with Databricks to design and manage scalable data processing solutions.
- Implement and support data integration workflows across cloud (AWS) and on-premises environments.
- Support cloud infrastructure deployment and management using Terraform.
- Participate in the modernization of legacy systems, including mainframe migration.
- Perform complex SQL queries and performance tuning on large datasets.
- Contribute to CI/CD pipelines, version control, and infrastructure automation.
- Provide expertise, tools, and assistance to operations, development, and support teams for critical production issues and maintenance.
- Troubleshoot production issues, diagnose the problem, and implement a solution; be the first line of defense in finding the root cause.
- Work cross-functionally with the support team, development team and business team to efficiently address customer issues.
- Be an active member of a high-performance software development and support team in an agile environment.
- Engage in fostering and improving organizational culture.

Qualifications
Required Skills:
- Strong analytical and technical skills.
- Proficiency in Databricks, including notebook development, Delta Lake, and Spark-based processing.
- Experience with mainframe modernization or migrating legacy systems to modern data platforms.
- Strong programming skills, particularly in PySpark for data processing.
- Familiarity with data warehousing concepts and cloud-native architecture.
- Solid understanding of Terraform for managing infrastructure as code on AWS.
- Familiarity with CI/CD practices and tools (e.g., Git, Jenkins).
- Strong SQL knowledge on OLAP DB platforms (Teradata, Snowflake) and OLTP DB platforms (Oracle, DB2, Postgres, SingleStore).
- Strong experience with Teradata SQL and utilities.
- Strong experience with Oracle, Postgres and DB2 SQL and utilities.
- Ability to develop high-quality database solutions.
- Ability to perform extensive analysis of complex SQL processes, with strong design skills.
- Ability to analyze existing SQL queries for performance improvements.
- Experience in software development phases including design, configuration, testing, debugging, implementation, and support of large-scale, business-centric and process-based applications.
- Proven experience working with diverse teams of technical architects, business users and IT areas on all phases of the software development life cycle.
- Exceptional analytical and problem-solving skills.
- Structured, methodical approach to systems development and troubleshooting.
- Ability to ramp up fast on a system architecture.
- Experience in designing and developing process-based solutions or BPM (business process management).
- Strong written and verbal communication skills with the ability to interact with all levels of the organization.
- Strong interpersonal/relationship management skills.
- Strong time and project management skills.
- Familiarity with agile methodology, including SCRUM team leadership.
- Familiarity with modern delivery practices such as continuous integration, behavior/test-driven development, and specification by example.
- Desire to work in the application support space.
- Passion for learning and desire to explore all areas of IT.

Required Experience & Education:
- Minimum of 8-12 years of experience in an application development role.
- Bachelor's degree equivalent in Information Technology, Business Information Systems, Technology Management, or a related field of study.
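The ability to analyze existing SQL queries for performance improvements, listed above, is platform-specific on Teradata or Oracle, but the general workflow — inspect the plan, add an index, confirm the plan changes — can be sketched with stdlib SQLite. Table and column names here are illustrative, not from any actual system.

```python
import sqlite3

# Sketch: verify that adding an index turns a full scan into an index search.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE claims (id INTEGER, member_id INTEGER, amount REAL)")
con.executemany("INSERT INTO claims VALUES (?, ?, ?)",
                [(i, i % 100, i * 1.5) for i in range(1000)])

query = "SELECT * FROM claims WHERE member_id = 42"

def plan(sql: str) -> str:
    # EXPLAIN QUERY PLAN rows end with a human-readable step description.
    return " ".join(row[-1] for row in con.execute("EXPLAIN QUERY PLAN " + sql))

before = plan(query)  # full table scan, e.g. "SCAN claims"
con.execute("CREATE INDEX idx_claims_member ON claims (member_id)")
after = plan(query)   # e.g. "SEARCH claims USING INDEX idx_claims_member ..."

print(before)
print(after)
```

On Teradata or Oracle the same loop runs through EXPLAIN / EXPLAIN PLAN output instead, but the discipline — measure the plan before and after each change — is identical.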
Location & Hours of Work: Hyderabad, hybrid (1:00 PM IST to 10:00 PM IST)

Equal Opportunity Statement
Evernorth is an Equal Opportunity Employer actively encouraging and supporting organization-wide involvement of staff in diversity, equity, and inclusion efforts to educate, inform and advance both internal practices and external work with diverse client populations.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh
On-site
- Experience in building PySpark processes.
- Proficient understanding of distributed computing principles.
- Experience in managing a Hadoop cluster with all services.
- Experience with NoSQL databases and messaging systems like Kafka.
- Designing, building, installing, configuring and supporting Hadoop.
- Perform analysis of vast data stores.
- Good understanding of cloud technology.
- Must have strong technical experience in design, mapping specifications, HLD and LLD.
- Must have the ability to relate to both business and technical members of the team and possess excellent communication skills.
- Leverage internal tools and SDKs, utilize AWS services such as S3, Athena, and Glue, and integrate with our internal Archival Service Platform for efficient data purging.
- Lead the integration efforts with the internal Archival Service Platform for seamless data purging and lifecycle management.
- Collaborate with the data engineering team to continuously improve data integration pipelines, ensuring adaptability to evolving business needs.
- Develop and maintain data platforms using PySpark.
- Work with AWS and Big Data; design and implement data pipelines, and ensure data quality and integrity.
- Collaborate with cross-functional teams to understand data requirements and design solutions that meet business needs.
- Implement and manage agents for monitoring, logging, and automation within AWS environments.
- Handle migration from PySpark to AWS.

About Virtusa
Teamwork, quality of life, professional and personal development: values that Virtusa is proud to embody. When you join us, you join a team of 27,000 people globally that cares about your growth — one that seeks to provide you with exciting projects, opportunities and work with state-of-the-art technologies throughout your career with us. Great minds, great potential: it all comes together at Virtusa. We value collaboration and the team environment of our company, and seek to provide great minds with a dynamic place to nurture new ideas and foster excellence.
Virtusa was founded on principles of equal opportunity for all, and so does not discriminate on the basis of race, religion, color, sex, gender identity, sexual orientation, age, non-disqualifying physical or mental disability, national origin, veteran status or any other basis covered by appropriate law. All employment is decided on the basis of qualifications, merit, and business need.
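The data-purging responsibility described above — routing records past a retention window to an archival store — reduces to a partition step that can be sketched in plain Python. The cutoff date and record layout are illustrative; a real implementation would write the archive bucket to S3 and hand it to the internal Archival Service Platform's API.

```python
from datetime import date

# Sketch: split records into "archive" (past retention) and "keep" buckets.
RETENTION_CUTOFF = date(2024, 1, 1)  # illustrative retention boundary

records = [
    {"id": 1, "event_date": date(2023, 6, 1)},
    {"id": 2, "event_date": date(2024, 3, 15)},
    {"id": 3, "event_date": date(2022, 11, 30)},
]

def partition_for_purge(rows, cutoff):
    # Records strictly older than the cutoff go to the archive.
    archive = [r for r in rows if r["event_date"] < cutoff]
    keep = [r for r in rows if r["event_date"] >= cutoff]
    return archive, keep

archive, keep = partition_for_purge(records, RETENTION_CUTOFF)
print([r["id"] for r in archive])  # [1, 3]
print([r["id"] for r in keep])     # [2]
```

At scale the same predicate would be a PySpark filter over date-partitioned data, so the purge touches only the partitions older than the cutoff.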
Posted 1 week ago
6.0 - 11.0 years
15 - 22 Lacs
Bengaluru
Work from Office
Dear Candidate,

Hope you are doing well. Greetings from NAM Info Inc.

NAM Info Inc. is a technology-forward talent management organization dedicated to bridging the gap between industry leaders and exceptional human resources. They pride themselves on delivering quality candidates, deep industry coverage, and knowledge-based training for consultants. Their commitment to long-term partnerships, rooted in ethical practices and trust, positions them as a preferred partner for many industries. Learn more about their vision, achievements, and services on their website at www.nam-it.com.

We have an open position for a Data Engineer role with our company for the Bangalore, Pune and Mumbai locations.

Job Description
Position: Sr / Lead Data Engineer
Location: Bangalore, Pune and Mumbai
Experience: 5+ years
Required Skills: Azure, Data Warehouse, Python, Spark, PySpark, Snowflake / Databricks, any RDBMS, any ETL tool, SQL, Unix scripting, GitHub
Strong experience in Azure / AWS / GCP
Permanent with NAM Info Pvt Ltd
Work Location: Bangalore, Pune and Mumbai
Working time: 12 PM to 9 PM or 2 PM to 11 PM
5 days work from office, Monday to Friday
L1 interview virtual, L2 face to face at the Banashankari office (for Bangalore candidates)
Notice period: immediate to 15 days

If the above job details suit you, please share your resume at ananya.das@nam-it.com.

Regards,
Recruitment Team
NAM Info Inc.
Posted 1 week ago
3.0 - 7.0 years
0 - 2 Lacs
Chennai, Coimbatore, Bengaluru
Work from Office
Required Skill Set

Talend:
- Hands-on experience with Talend Studio and Talend Management Console (TMC)
- Strong understanding of Joblets, PreJobs, PostJobs, SubJobs, and the overall Talend job design flow
- Proficiency in Talend components such as S3, Redshift, tDBInput, tMap, and Java-based components

PySpark:
- Solid knowledge of PySpark
- Ability to analyze, compare, and validate migrated PySpark code against Talend job definitions to ensure accurate migration

Additional Skills:
- AWS ecosystem: S3, Glue, CloudWatch, SSM, IAM, etc.
- Databases: Redshift, Aurora, Teradata
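Validating migrated PySpark output against the original Talend job, as described above, usually reduces to comparing the two extracts for parity. A minimal, engine-agnostic sketch of such a comparison follows; the column names and a checksum-based approach are illustrative assumptions, not a prescribed method.

```python
import hashlib
import json

# Sketch: compare two extracts by row count and an order-independent checksum.
def dataset_fingerprint(rows):
    # Serialize each row canonically, then sort so row order is irrelevant.
    canon = sorted(json.dumps(r, sort_keys=True) for r in rows)
    digest = hashlib.sha256("\n".join(canon).encode()).hexdigest()
    return len(rows), digest

talend_out = [{"id": 1, "amt": 10.0}, {"id": 2, "amt": 5.5}]
pyspark_out = [{"id": 2, "amt": 5.5}, {"id": 1, "amt": 10.0}]  # reordered

# Same rows in a different order produce the same fingerprint.
print(dataset_fingerprint(talend_out) == dataset_fingerprint(pyspark_out))
```

In practice both sides would be sampled or aggregated from S3/Redshift extracts, and a mismatch would trigger a column-by-column diff.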
Posted 1 week ago
0 years
0 Lacs
Andhra Pradesh, India
On-site
We are seeking a highly skilled and motivated Big Data Engineer to join our data engineering team. The ideal candidate will have hands-on experience with the Hadoop ecosystem and Apache Spark, and programming expertise in Python (PySpark), Scala, and Java. You will be responsible for designing, developing, and optimizing scalable data pipelines and big data solutions to support analytics and business intelligence initiatives.
Posted 1 week ago
6.0 - 10.0 years
15 - 25 Lacs
Pune
Work from Office
We're Lear for You
Lear, a global automotive technology leader in Seating and E-Systems, is Making every drive better by delivering intelligent in-vehicle experiences for customers around the world. With over 100 years of experience, Lear has earned a legacy of operational excellence while building its future on innovation. Our talented team is committed to creating products that ensure the comfort, well-being, convenience, and safety of consumers. Working together, we are Making every drive better. To know more about Lear, please visit our career site: www.lear.com

Job Description
Job Title: Lead Data Engineer
Function: Data Engineer
Location: Bhosari, Pune

Position Focus:
As a Lead Data Engineer at Lear, you will take a leadership role in designing, building, and maintaining robust data pipelines within the Foundry platform. Your expertise will drive the seamless integration of data and analytics, ensuring high-quality datasets and supporting critical decision-making processes. If you're passionate about data engineering and have a track record of excellence, this role is for you!

Education: Bachelor's or master's degree in Computer Science, Engineering, or a related field.

Experience:
- Minimum 5 years of experience in data engineering, ETL, and data integration.
- Proficiency in Python and libraries like PySpark, Pandas and NumPy.
- Familiarity with big data technologies (e.g., Spark, Hadoop, Kafka).
- Excellent problem-solving skills and attention to detail.
- Effective communication and leadership abilities.

Job Specific Comments:
Manage Execution of Data-Focused Projects: As a senior member of the Lear Foundry team, support designing, building and maintaining data-focused projects using Lear's data analytics and application platforms. Participate in projects from conception to root cause analytics and solution deployment. Understand program and product delivery phases, contributing expert analysis across the lifecycle.
Ensure project deliverables are met as per the agreed timeline.

Tools and Technologies: Utilize key tools, including:
- Pipeline Builder: author data pipelines using a visual interface.
- Code Repositories: manage code for data pipeline development.
- Data Lineage: visualize end-to-end data flows.
Leverage programmatic health checks to ensure pipeline durability. Work with both new and legacy technologies to integrate separate data feeds and transform them into new scalable datasets. Mentor junior data engineers on best practices.

Data Pipeline Architecture and Development: Lead the design and implementation of complex data pipelines. Collaborate with cross-functional teams to ensure scalability, reliability, and efficiency, and utilize Git concepts for version control and collaborative development. Optimize data ingestion, transformation, and enrichment processes.

Big Data, Dataset Creation and Maintenance: Utilize pipelines or code repositories to transform big data into manageable datasets and produce high-quality datasets that meet the organization's needs. Implement optimum build times to ensure effective utilization of resources.

High-Quality Dataset Production: Produce and maintain datasets that meet organizational needs. Optimize the size and build schedule of datasets to reflect the latest information. Implement data quality health checks and validation.

Collaboration and Leadership: Work closely with data scientists, analysts, and operational teams. Provide technical guidance and foster a collaborative environment. Champion transparency and effective decision-making.

Continuous Improvement: Stay abreast of industry trends and emerging technologies. Enhance pipeline performance, reliability, and maintainability. Contribute to the evolution of Foundry's data engineering capabilities.

Compliance and Data Security: Ensure documentation and procedures align with internal practices (ITPM) and Sarbanes-Oxley requirements, continuously improving them.
Quality Assurance & Optimization: Optimize data pipelines and their impact on resource utilization of downstream processes. Continuously test and improve data pipeline performance and reliability. Optimize system performance for all deployed resources.
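The programmatic health checks mentioned in this posting typically assert basic dataset invariants after each build. A minimal sketch, with thresholds and column names as illustrative assumptions (this is not Foundry's actual API):

```python
# Sketch: simple post-build health checks on a dataset of row dicts.
def health_check(rows, min_rows=1, required_cols=("part_no", "plant"),
                 max_null_rate=0.1):
    issues = []
    if len(rows) < min_rows:
        issues.append(f"row count {len(rows)} below minimum {min_rows}")
    for col in required_cols:
        nulls = sum(1 for r in rows if r.get(col) is None)
        if rows and nulls / len(rows) > max_null_rate:
            issues.append(f"column {col!r} null rate {nulls / len(rows):.0%} too high")
    return issues

good = [{"part_no": "A1", "plant": "Pune"}, {"part_no": "B2", "plant": "Pune"}]
bad = [{"part_no": None, "plant": "Pune"}]

print(health_check(good))  # []
print(health_check(bad))   # one null-rate issue
```

A build that returns a non-empty issue list would be failed or flagged, which is what keeps downstream datasets durable.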
Posted 1 week ago
7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Department: Engineering
Employment Type: Full Time
Location: India

Description
Shape the Future of Work with Eptura
At Eptura, we're not just another tech company—we're a global leader transforming the way people, workplaces, and assets connect. Our innovative worktech solutions empower 25 million users across 115 countries to thrive in a digitally connected world. Trusted by 45% of Fortune 500 companies, we're redefining workplace innovation and driving success for organizations around the globe.

Job Description
We are seeking a Data Lead – Data Engineering to spearhead the design, development, and optimization of complex data pipelines and ETL processes. This role requires deep expertise in data modeling, cloud platforms, and automation to ensure high-quality, scalable solutions. You will collaborate closely with stakeholders, engineers, and business teams to drive data-driven decision-making across our organization.

Responsibilities
- Work with stakeholders to understand data requirements and architect end-to-end ETL solutions.
- Design and maintain data models, including schema design and optimization.
- Develop and automate data pipelines to ensure quality, consistency, and efficiency.
- Lead the architecture and delivery of key modules within data platforms.
- Build and refine complex data models in Power BI, simplifying data structures with dimensions and hierarchies.
- Write clean, scalable code using Python, Scala, and PySpark (must-have skills).
- Test, deploy, and continuously optimize applications and systems.
- Lead, mentor, and develop a high-performing data engineering team, fostering a culture of collaboration, innovation, and continuous improvement while ensuring alignment with business objectives.
- Mentor team members and participate in engineering hackathons to drive innovation.

About You
- 7+ years of experience in Data Engineering, with at least 2 years in a leadership role.
- Strong expertise in Python, PySpark, and SQL for data processing and transformation.
- Hands-on experience with Azure cloud computing, including Azure Data Factory and Databricks.
- Proficiency in analytics/visualization tools: Power BI, Looker, Tableau, IBM Cognos.
- Strong understanding of data modeling, including dimension and hierarchy structures.
- Experience working with Agile methodologies and DevOps practices (GitLab, GitHub).
- Excellent communication and problem-solving skills in cross-functional environments.
- Ability to reduce added cost, complexity, and security risks with scalable analytics solutions.

Nice To Have
- Experience working with NoSQL databases (Cosmos DB, MongoDB).
- Familiarity with AutoCAD and building systems for advanced data visualization.
- Knowledge of identity and security protocols, such as SAML, SCIM, and FedRAMP compliance.

Benefits
- Health insurance fully paid for spouse, children, and parents
- Accident insurance fully paid
- Flexible working allowance
- 25 days holidays
- 7 paid sick days
- 10 public holidays
- Employee Assistance Program

Eptura Information
Follow us on Twitter | LinkedIn | Facebook | YouTube
Eptura is an Equal Opportunity Employer. At Eptura we promote our flexible workspace environment, free from discrimination. We believe that diversity of experience, perspective, and background leads to a better environment for all our people and a better product for our customers. Everyone is welcome at Eptura, no matter where you are from, and the more diverse we are, the more unified we will be in ensuring respectful connections all around the world.
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Job Title: Senior Data Engineer
GCL: D1

Introduction to role
At AstraZeneca, we believe in more than just making life-changing medicines; we believe in a future where discovery is defined by bold, dynamic, and visionary individuals. As a Data Engineer with ML Ops, Data Engineering, and Cloud DevOps experience, you'll be at the forefront of a revolution in drug discovery, harnessing the power of remarkable technology and advanced AI capabilities. Work collaboratively in our dynamic, globally distributed team to design and implement ML models on cloud-native platforms, accelerating scientific breakthroughs like never before.

Accountabilities
- Architect of Solutions: Lead the design, development, and enhancement of scalable ML Ops/data pipelines and data products on cloud-native platforms.
- Technical Expertise: Demonstrate your expertise in Python, PySpark, Docker, Kubernetes, and AWS ecosystems to deliver exceptional solutions.
- Collaborative Spirit: Work hand-in-hand with global and diverse Agile teams, from data to design, to overcome technical data challenges.
- Innovate & Inspire: Stay ahead of the curve by integrating the latest industry trends and innovations, such as GenAI, into your work.

Essential Skills/Experience
- A proactive mindset and enthusiasm for Agile environments.
- Strong hands-on experience with cloud providers and services.
- Experience in performance tuning SQL and ML Ops data pipelines.
- Extensive experience in troubleshooting data issues, analyzing end-to-end data pipelines, and working with users to resolve issues.
- Masterful debugging and testing skills to ensure excellence in execution.
- Inspiring communication abilities that elevate team collaboration.
- Experience with structured, semi-structured (XML, JSON), and unstructured data handling, including extraction and ingestion via web scraping and FTP/SFTP.
- Production experience delivering CI/CD pipelines (GitHub, Jenkins, DataOps.Live).
- Cloud DevOps experience: develop, test, and maintain CI/CD pipelines using Terraform and CloudFormation.
- Remain up to date with the latest technologies, like GenAI/AI platforms and FAIR scoring, to improve outcomes.

When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.

AstraZeneca is a place where your work has a direct impact on patients, transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining cutting-edge science with leading digital technology platforms and data. Join us at a crucial stage of our journey in becoming a digital and data-led enterprise. Here you can innovate, take ownership, and explore new solutions in a dynamic environment that values diversity and inclusivity. Ready to make a difference? Apply now!

Date Posted: 16-Jun-2025
Closing Date: 30-Jul-2025

AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
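The structured and semi-structured data handling listed in the essential skills often means normalizing XML and JSON payloads into one common record shape before ingestion. A stdlib-only sketch; the element and field names below are hypothetical, not from any AstraZeneca system.

```python
import json
import xml.etree.ElementTree as ET

# Sketch: normalize a JSON payload and an XML payload into the same shape.
json_payload = '{"compound": "AZD-0001", "phase": 2}'
xml_payload = "<trial><compound>AZD-0002</compound><phase>3</phase></trial>"

def from_json(text):
    doc = json.loads(text)
    return {"compound": doc["compound"], "phase": int(doc["phase"])}

def from_xml(text):
    root = ET.fromstring(text)
    return {"compound": root.findtext("compound"),
            "phase": int(root.findtext("phase"))}

records = [from_json(json_payload), from_xml(xml_payload)]
print(records)
```

Once both sources emit the same record shape, downstream pipeline stages (validation, loading, tuning) no longer need to care which format the data arrived in.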
Posted 1 week ago
8.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com

Job Description
We are seeking a seasoned Data Engineering Manager with 8+ years of experience to lead and grow our data engineering capabilities. This role demands strong hands-on expertise in Python, SQL, and Spark, and advanced proficiency in AWS and Databricks. As a technical leader, you will be responsible for architecting and optimizing scalable data solutions that enable analytics, data science, and business intelligence across the organization.

Key Responsibilities
- Lead the design, development, and optimization of scalable and secure data pipelines using AWS services such as Glue, S3, Lambda, and EMR, and Databricks Notebooks, Jobs, and Workflows.
- Oversee the development and maintenance of data lakes on AWS Databricks, ensuring performance and scalability.
- Build and manage robust ETL/ELT workflows using Python and SQL, handling both structured and semi-structured data.
- Implement distributed data processing solutions using Apache Spark/PySpark for large-scale data transformation.
- Collaborate with cross-functional teams including data scientists, analysts, and product managers to ensure data is accurate, accessible, and well-structured.
- Enforce best practices for data quality, governance, security, and compliance across the entire data ecosystem.
- Monitor system performance, troubleshoot issues, and drive continuous improvements in data infrastructure.
- Conduct code reviews, define coding standards, and promote engineering excellence across the team.
- Mentor and guide junior data engineers, fostering a culture of technical growth and innovation.

Qualifications
- 8+ years of experience in data engineering with proven leadership in managing data projects and teams.
- Expertise in Python, SQL, and Spark (PySpark), and experience with AWS and Databricks in production environments.
- Strong understanding of modern data architecture, distributed systems, and cloud-native solutions.
- Excellent problem-solving, communication, and collaboration skills.
- Prior experience mentoring team members and contributing to strategic technical decisions is highly desirable.
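The "ETL/ELT workflows using Python and SQL" responsibility above can be reduced to a toy end-to-end example, using stdlib SQLite in place of a warehouse. Table and field names are illustrative; a production pipeline would run the same shape of logic on Glue or Databricks.

```python
import sqlite3

# Extract: raw records as they might arrive from a source system.
raw = [{"customer": "a", "spend": "10.5"},
       {"customer": "b", "spend": "3.0"},
       {"customer": "a", "spend": "4.5"}]

# Transform: cast types and filter out rows missing the measure.
clean = [(r["customer"], float(r["spend"])) for r in raw if r.get("spend")]

# Load: write into a SQL table.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE spend (customer TEXT, amount REAL)")
con.executemany("INSERT INTO spend VALUES (?, ?)", clean)

# ELT-style aggregation pushed down to the SQL engine.
total = dict(con.execute(
    "SELECT customer, SUM(amount) FROM spend GROUP BY customer ORDER BY customer"))
print(total)  # {'a': 15.0, 'b': 3.0}
```

The split matters: the Python layer owns type coercion and row-level cleansing, while set-oriented work (grouping, joining) is pushed down to SQL where the engine can optimize it.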
Posted 1 week ago
6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About the Role:
We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics.

Key Responsibilities
- Analyze large volumes of data to identify trends, patterns, and performance drivers.
- Collaborate with different teams to support and influence decision-making processes.
- Perform root cause analysis and recommend improvements to optimize processes.
- Design and track key KPIs.
- Ensure data integrity and accuracy across reporting tools and business metrics.
- Translate complex analytical findings into business-friendly insights and decision-making.
- Develop and automate dashboards and reports using Power BI/Tableau to provide clear, actionable insights to operations and management teams.

Required Skills & Qualifications:
- Education: Bachelor's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related field. Master's degree is a plus.
- Experience: 4–6 years of hands-on experience in a data or business analytics role, preferably within the BFSI domain.
- Technical Skills: Strong proficiency in SQL & Python. Solid understanding of analytical techniques and problem-solving skills.
- Business Acumen: Understanding of the banking & financial sector (preferred).
- Bonus Skills: PySpark, Machine Learning, Tableau/Power BI.
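Designing and tracking KPIs, as described above, starts with simple aggregations over operational data. A toy sketch of one such risk-operations KPI; the field names and the approval-rate metric are illustrative assumptions, not the actual metrics of this role.

```python
from collections import defaultdict

# Sketch: compute a monthly approval-rate KPI from case records.
cases = [
    {"month": "2024-01", "status": "approved"},
    {"month": "2024-01", "status": "rejected"},
    {"month": "2024-02", "status": "approved"},
    {"month": "2024-02", "status": "approved"},
]

def approval_rate_by_month(rows):
    totals, approved = defaultdict(int), defaultdict(int)
    for r in rows:
        totals[r["month"]] += 1
        approved[r["month"]] += r["status"] == "approved"
    return {m: approved[m] / totals[m] for m in totals}

print(approval_rate_by_month(cases))  # {'2024-01': 0.5, '2024-02': 1.0}
```

In practice the same aggregation would be written in SQL against the reporting warehouse, with the results surfaced on a Power BI or Tableau dashboard.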
Posted 1 week ago
4.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About The Role
We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics.

Key Responsibilities
- Analyze large volumes of data to identify trends, patterns, and performance drivers.
- Collaborate with different teams to support and influence decision-making processes.
- Perform root cause analysis and recommend improvements to optimize processes.
- Design and track key KPIs.
- Ensure data integrity and accuracy across reporting tools and business metrics.
- Translate complex analytical findings into business-friendly insights and decision-making.
- Develop and automate dashboards and reports using Power BI/Tableau to provide clear, actionable insights to operations and management teams.

Required Skills & Qualifications
- Education: Bachelor's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related field. Master's degree is a plus.
- Experience: 2–4 years of hands-on experience in a data or business analytics role, preferably within the BFSI domain.
- Technical Skills: Strong proficiency in SQL & Python. Solid understanding of analytical techniques and problem-solving skills.
- Business Acumen: Understanding of the banking & financial sector (preferred).
- Bonus Skills: PySpark, Machine Learning, Tableau/Power BI.
Posted 1 week ago
3.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
About the Role:
We are seeking a results-driven and detail-oriented data analyst to support data-driven decision-making within banking risk operations. This role involves working closely with stakeholders to provide actionable insights, enhance strategies, and drive operational efficiencies using tools such as SQL, Python, and advanced analytics.

Key Responsibilities:
- Analyze large volumes of data to identify trends, patterns, and performance drivers.
- Collaborate with different teams to support and influence decision-making processes.
- Perform root cause analysis and recommend improvements to optimize processes.
- Design and track key KPIs.
- Ensure data integrity and accuracy across reporting tools and business metrics.
- Translate complex analytical findings into business-friendly insights for decision making.
- Develop and automate dashboards and reports using Power BI/Tableau to provide clear, actionable insights to operations and management teams.

Required Skills & Qualifications:
- Education: Bachelor's degree in Engineering, Mathematics, Statistics, Finance, Economics, or a related field. Master's degree is a plus.
- Experience: 1–3 years of hands-on experience in a data or business analytics role, preferably within the BFSI domain.
- Technical Skills: Strong proficiency in SQL and Python; solid understanding of analytical techniques and problem-solving skills.
- Business Acumen: Understanding of the banking and financial sector (preferred).
- Bonus Skills: PySpark, Machine Learning, Tableau/Power BI.
Posted 1 week ago
4.0 - 8.0 years
0 Lacs
karnataka
On-site
You are invited to join our team as a Mid-Level Data Engineer Technical Consultant with 4+ years of experience. As part of our diverse and inclusive organization, you will be based in Bangalore, KA, working full-time in a permanent position during the general shift, Monday to Friday.

In this role, you will be expected to possess strong written and oral communication skills, particularly in email correspondence. Your experience in working with Application Development teams will be invaluable, along with your ability to analyze and solve problems effectively. Proficiency in Microsoft tools such as Outlook, Excel, and Word is essential for this position.

As a Data Engineer Technical Consultant, you must have at least 4 years of hands-on development experience. Your expertise should include working with Snowflake and PySpark, writing SQL queries, using Airflow, and developing in Python. Experience with DBT and integration programs will be advantageous, as will familiarity with Excel for data analysis and Unix scripting.

Your responsibilities will require a good understanding of data warehousing and practical work experience in this field. You will be accountable for tasks including understanding requirements, coding, unit testing, integration testing, performance testing, UAT, and Hypercare support. Collaboration with cross-functional teams across different geographies will be a key aspect of this role.

If you are action-oriented, independent, and possess the required technical skills, we encourage you to submit your resume to pallavi@she-jobs.com and explore this exciting opportunity further.
Posted 1 week ago
10.0 - 14.0 years
0 Lacs
kolkata, west bengal
On-site
We are currently seeking a highly skilled Data Science Senior Manager with a strong background in data analytics and management. The ideal candidate will have a proven track record of leveraging data-driven insights to drive business improvement and optimize overall business performance, with a minimum of 10-14 years of relevant experience in Data Science.

Key responsibilities:
- Manage and lead a team of data scientists to achieve organizational goals.
- Drive the collection, cleaning, and preprocessing of complex datasets.
- Build predictive models and machine-learning algorithms.
- Propose solutions and strategies to business challenges.
- Collaborate with engineering and product development teams.
- Oversee the development and implementation of ML algorithms and models.
- Use predictive modeling to optimize customer experiences, revenue generation, campaign optimization, and other business outcomes.
- Identify valuable data sources and automate collection processes.
- Undertake data collection, preprocessing, and analysis.
- Communicate complex data or algorithms as insights that everyone can understand.

Required skills:
- Proficiency in SQL, Python, and PySpark; experience in data pre-processing.
- Deep knowledge of supervised ML, EDA, and unsupervised ML.
- Strong understanding of forecasting algorithms, ensemble models, and causal approaches.
- Excellent understanding of machine learning algorithms, processes, tools, and platforms.
- Proven experience as a Data Science Manager or in a similar role.
- Strong problem-solving and analytical skills.
- Good communication skills, with the ability to explain complex concepts to non-technical stakeholders.
- Strong leadership and project management skills.

Qualifications include a Bachelor's degree in Computer Science, Statistics, Applied Math, or a related field (Master's degree preferred) and a minimum of 10-14 years of experience in data science or a related field.

This is an excellent opportunity for individuals looking to apply their leadership and technical skills in a dynamic and fast-paced environment. We are an equal opportunity employer and value diversity and inclusion at our company. We encourage applications from all backgrounds. If you are passionate about data science and are eager to make a significant impact within a leading organization, apply today. Join us and be a part of our exciting journey.
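The posting lists forecasting algorithms among the required skills. As an illustration only, a naive moving-average baseline in plain Python (the function name and sample series are hypothetical, and real work in this role would pit such a baseline against seasonal or ensemble models):

```python
def moving_average_forecast(series, window=3):
    """Forecast the next point as the mean of the last `window` observations.

    A deliberately simple baseline; any candidate forecasting model
    should at least beat this before going to production.
    """
    if len(series) < window:
        raise ValueError("series shorter than window")
    return sum(series[-window:]) / window

monthly_sales = [100, 102, 105, 103, 108, 110]
print(moving_average_forecast(monthly_sales))  # mean of 103, 108, 110 -> 107.0
```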
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
About The Role
We are seeking a skilled and passionate Data Engineer to join our team and drive the development of scalable data pipelines for Generative AI (GenAI) and Large Language Model (LLM)-powered applications. This role demands hands-on expertise in Spark, GCP, and data integration with modern AI APIs.

What You'll Do
- Design and develop high-throughput, scalable data pipelines for GenAI and LLM-based solutions.
- Build robust ETL/ELT processes using Spark (PySpark/Scala) on Google Cloud Platform (GCP).
- Integrate enterprise and unstructured data with LLM APIs such as OpenAI, Gemini, and Hugging Face.
- Process and enrich large volumes of unstructured data, including text and document embeddings.
- Manage real-time and batch workflows using Airflow, Dataflow, and BigQuery.
- Implement and maintain best practices for data quality, observability, lineage, and API-first designs.

What Sets You Apart
- 3+ years of experience building scalable Spark-based pipelines (PySpark or Scala).
- Strong hands-on experience with GCP services: BigQuery, Dataproc, Pub/Sub, Cloud Functions.
- Familiarity with LLM APIs, vector databases (e.g., Pinecone, FAISS), and GenAI use cases.
- Expertise in text processing, unstructured data handling, and performance optimization.
- Agile mindset and the ability to thrive in a fast-paced startup or dynamic environment.

Nice To Have
- Experience working with embeddings and semantic search.
- Exposure to MLOps or data observability tools.
- Background in deploying production-grade AI/ML workflows.
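The posting mentions processing unstructured text and document embeddings for LLM pipelines. As a rough sketch only (the `chunk_text` helper and its parameter names are hypothetical, not part of the role's actual stack), overlapping word-based chunking ahead of an embedding API call might look like:

```python
def chunk_text(text, max_tokens=50, overlap=10):
    """Split text into overlapping word chunks sized for an embedding API.

    Uses whitespace words as a crude token proxy; a real pipeline would
    use the model's own tokenizer. Overlap preserves context across
    chunk boundaries for retrieval quality.
    """
    if max_tokens <= overlap:
        raise ValueError("max_tokens must exceed overlap")
    words = text.split()
    chunks, step = [], max_tokens - overlap
    for start in range(0, len(words), step):
        chunks.append(" ".join(words[start:start + max_tokens]))
        if start + max_tokens >= len(words):
            break
    return chunks

doc = " ".join(str(i) for i in range(100))
pieces = chunk_text(doc, max_tokens=30, overlap=5)
print(len(pieces))  # 4 overlapping chunks
```

In a Spark job this function would typically run per-row inside a UDF, with the resulting chunks batched to the embedding API and the vectors written to a vector store.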
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
chennai, tamil nadu
On-site
We are looking for a Lead Data Engineer with over 8 years of experience in data engineering and software development. The ideal candidate should possess strong expertise in Python, PySpark, Airflow (batch jobs), HPCC, and ECL. You will be responsible for driving complex data solutions across multi-functional teams.

The role requires hands-on experience in data modeling, test-driven development, and familiarity with Agile/Waterfall methodologies. As a Lead Data Engineer, you will lead initiatives, collaborate with various teams, and convert business requirements into scalable data solutions using industry best practices in managed services or staff augmentation environments.

If you meet the above qualifications and are passionate about working with data to solve complex problems, we encourage you to apply for this exciting opportunity.
Posted 1 week ago
16.0 - 20.0 years
0 Lacs
karnataka
On-site
A career within Data and Analytics services will provide you with the opportunity to help organisations uncover enterprise insights and drive business results using smarter data analytics. We focus on a collection of organisational technology capabilities, including business intelligence, data management, and data assurance, that help our clients drive innovation, growth, and change within their organizations in order to keep up with the changing nature of customers and technology. We make impactful decisions by mixing mind and machine to leverage data, understand and navigate risk, and help our clients gain a competitive edge.

As part of our Analytics and Insights Consumption team, you'll analyze data to drive useful insights for clients to address core business issues or to drive strategic outcomes. You'll use visualization, statistical and analytics models, AI/ML techniques, ModelOps, and other techniques to develop these insights. Candidates with 16+ years of hands-on experience are required for this role.

**Required Skills**
- 15 years of relevant experience in pharma & life sciences analytics, with knowledge of industry trends, regulations, and challenges.
- Proven track record of working within the pharma and life sciences domain, addressing industry-specific issues and leveraging domain knowledge to drive results.
- Knowledge of drug development processes, clinical trials, regulatory compliance, market access strategies, and commercial operations.
- Strong knowledge of healthcare industry trends, regulations, and challenges.
- Proficiency in data analysis and statistical modeling techniques.
- Good knowledge of statistics, data analysis, hypothesis testing, and preparation for machine learning use cases.
- Expertise in GenAI, AI/ML, and data engineering.
- Experience with machine learning frameworks and tools (e.g., scikit-learn, mlr, caret, H2O, TensorFlow, PyTorch, MLlib).
- Familiarity with programming in SQL and Python/PySpark to guide teams.
- Familiarity with visualization tools, e.g., Tableau, Power BI, AWS QuickSight.
- Excellent problem-solving and critical-thinking abilities.
- Strong communication and presentation skills, with the ability to effectively convey complex concepts to both technical and non-technical stakeholders.
- Leadership skills, with the ability to manage and mentor a team.
- Project management skills, with the ability to prioritize tasks and meet deadlines.

**Responsibilities**
- Lead and manage the pharma life sciences analytics team, providing guidance, mentorship, and support to team members.
- Collaborate with cross-functional teams to identify business challenges and develop data-driven solutions tailored to the pharma and life sciences sector.
- Leverage in-depth domain knowledge across the pharma life sciences value chain, including R&D, drug manufacturing, commercial, pricing, product planning, product launch, market access, and revenue management.
- Utilize data science, GenAI, AI/ML, and data engineering tools to extract, transform, and analyze data, generating insights and actionable recommendations.
- Develop and implement statistical models and predictive analytics to support decision-making and improve healthcare outcomes.
- Stay up-to-date with industry trends, regulations, and best practices, ensuring compliance and driving innovation.
- Present findings and recommendations to clients and internal stakeholders, effectively communicating complex concepts in a clear and concise manner.
- Collaborate with clients to understand their business objectives and develop customized analytics solutions to meet their needs.
- Manage multiple projects simultaneously, ensuring timely delivery and high-quality results.
- Continuously evaluate and improve analytics processes and methodologies, driving efficiency and effectiveness.
- Stay informed about emerging technologies and advancements in the pharma life sciences space, identifying opportunities for innovation and growth to provide thought leadership and subject matter expertise.

**Professional And Educational Background**
BE / B.Tech / MCA / M.Sc / M.E / M.Tech / MBA
Posted 1 week ago
2.0 - 6.0 years
15 - 30 Lacs
Bengaluru
Work from Office
Be a part of a team that harnesses advanced AI, ML, and big data technologies to develop a cutting-edge healthcare technology platform, delivering innovative business solutions.

Job Title: Data Engineer II / Senior Data Engineer
Job Location: Bengaluru, Pune - India

Job Summary:
We are a leading Software as a Service (SaaS) company that specializes in the transformation of data in the US healthcare industry through cutting-edge Artificial Intelligence (AI) solutions. We are looking for software developers who continually strive to advance engineering excellence and technology innovation. The mission is to power the next generation of digital products and services through innovation, collaboration, and transparency. You will be a technology leader and doer who enjoys working in a dynamic, fast-paced environment.

Responsibilities:
- Design, develop, and maintain robust and scalable ETL/ELT pipelines to ingest and transform large datasets from various sources.
- Optimize and manage databases (SQL/NoSQL) to ensure efficient data storage, retrieval, and manipulation for both structured and unstructured data.
- Collaborate with data scientists, analysts, and engineers to integrate data from disparate sources and ensure smooth data flow between systems.
- Implement and maintain data validation and monitoring processes to ensure data accuracy, consistency, and availability.
- Automate repetitive data engineering tasks and optimize data workflows for performance and scalability.
- Work closely with cross-functional teams to understand their data needs and provide solutions that help scale operations.
- Ensure proper documentation of data engineering processes, workflows, and infrastructure for easy maintenance and scalability.
- Design, develop, test, deploy, and maintain large-scale data pipelines using AWS services such as S3, Glue, Lambda, and Step Functions.
- Collaborate with cross-functional teams to gather requirements and design solutions for complex data engineering projects.
- Develop ETL/ELT pipelines using Python scripts and SQL queries to extract insights from structured and unstructured data sources.
- Implement web scraping techniques to collect relevant data from various websites and APIs.
- Ensure high availability of the system by implementing monitoring tools like CloudWatch.

Desired Profile:
- Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
- 3-5 years of hands-on experience as a Data Engineer or in a related data-driven role.
- Strong experience with ETL tools like Apache Airflow, Talend, or Informatica.
- Expertise in SQL and NoSQL databases (e.g., MySQL, PostgreSQL, MongoDB, Cassandra).
- Strong proficiency in Python, Scala, or Java for data manipulation and pipeline development.
- Experience with cloud-based platforms (AWS, Google Cloud, Azure) and their data services (e.g., S3, Redshift, BigQuery).
- Familiarity with big data processing frameworks such as Hadoop, Spark, or Flink.
- Experience with data warehousing concepts and building data models (e.g., Snowflake, Redshift).
- Understanding of data governance, data security best practices, and data privacy regulations (e.g., GDPR, HIPAA).
- Familiarity with version control systems like Git.

HiLabs is an equal opportunity employer (EOE). No job applicant or employee shall receive less favorable treatment or be disadvantaged because of their gender, marital or family status, color, race, ethnic origin, religion, disability, or age, nor be subject to less favorable treatment or be disadvantaged on any other basis prohibited by applicable law. HiLabs is proud to be an equal opportunity workplace dedicated to pursuing and hiring a diverse and inclusive workforce to support individual growth and superior business results.

Thank you for reviewing this opportunity with HiLabs! If this position appears to be a good fit for your skillset, we welcome your application.
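The posting above emphasizes data validation and monitoring within ETL/ELT pipelines. As a minimal sketch under assumed rules (the `validate_rows` function, its field names, and the quality rules are hypothetical, not HiLabs requirements), a row-level validation step might look like:

```python
def validate_rows(rows, required, non_negative=()):
    """Split rows into (valid, rejected) based on simple quality rules.

    required:     fields that must be present and non-empty.
    non_negative: numeric fields that must be >= 0.
    Rejected rows would typically be routed to a quarantine table
    and surfaced through monitoring, rather than silently dropped.
    """
    valid, rejected = [], []
    for row in rows:
        ok = all(row.get(f) not in (None, "") for f in required)
        ok = ok and all(isinstance(row.get(f), (int, float)) and row[f] >= 0
                        for f in non_negative)
        (valid if ok else rejected).append(row)
    return valid, rejected

claims = [
    {"id": 1, "amount": 250.0},
    {"id": None, "amount": 90.0},   # missing required field
    {"id": 2, "amount": -15.0},     # negative amount
]
good, bad = validate_rows(claims, required=["id"], non_negative=["amount"])
print(len(good), len(bad))  # 1 2
```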
Posted 1 week ago
5.0 - 9.0 years
0 Lacs
karnataka
On-site
Wipro Limited is a leading technology services and consulting company dedicated to developing innovative solutions that cater to clients' most intricate digital transformation requirements. With a comprehensive range of capabilities in consulting, design, engineering, and operations, we assist clients in achieving their most ambitious goals and establishing sustainable businesses that are future-ready. Our workforce of over 230,000 employees and business partners spread across 65 countries ensures that we fulfill our commitment to helping customers, colleagues, and communities thrive amidst a constantly changing world.

As a Databricks Developer at Wipro, you will be expected to possess the following essential skills:
- Cloud certification in Azure Data Engineer or a related category.
- Proficiency in Azure Data Factory, Azure Databricks Spark (PySpark or Scala), SQL, data ingestion, and curation.
- Experience in semantic modelling and optimizing data models to function within Rahona.
- Familiarity with Azure data ingestion from on-prem sources such as mainframe, SQL Server, and Oracle.
- Proficiency in Sqoop and Hadoop.
- Ability to use Microsoft Excel for metadata files containing ingestion requirements.
- Any additional certification in Azure/AWS/GCP and hands-on experience in cloud data engineering.
- Strong programming skills in Python, Scala, or Java.

This position is available in multiple locations, including Pune, Bangalore, Coimbatore, and Chennai. The mandatory skill set required for this role is Databricks - Data Engineering. The ideal candidate should have 5-8 years of experience in the field.

At Wipro, we are building a modern organization committed to digital transformation. We seek individuals who are driven by the concept of reinvention - of themselves, their careers, and their skills. We encourage a culture of continuous evolution within our business and industry, adapting to the changing world around us. Join us in a purpose-driven environment that empowers you to craft your own reinvention. Realize your ambitions at Wipro. Applications from individuals with disabilities are highly encouraged.
Posted 1 week ago
3.0 - 8.0 years
0 Lacs
karnataka
On-site
As an Operations Research Scientist at Tredence, your main responsibilities will involve data analysis and interpretation. This includes analyzing large datasets to extract meaningful insights and using Python to process, visualize, and interpret data in a clear and actionable manner. You will also develop and implement mathematical models to address complex business problems and improve operational efficiency. You should be able to use commercial solvers such as CPLEX, Gurobi, and FICO Xpress, as well as free solvers such as PuLP and Pyomo. Applying optimization techniques and heuristic methods to devise effective solutions will be a key part of your role, along with designing and implementing algorithms to solve optimization problems, then testing and validating those algorithms to ensure accuracy and efficiency.

Collaboration and communication are essential skills in this role, as you will work closely with cross-functional teams, including data scientists, engineers, and business stakeholders, to understand requirements and deliver solutions. Presenting findings and recommendations to both technical and non-technical audiences will be part of your regular interactions.

To qualify for this position, you should have a Bachelor's or Master's degree in operations research, applied mathematics, computer science, engineering, or a related field, along with 3-8 years of professional experience in operations research, data science, or a related field. Proficiency in Python, including libraries such as NumPy, pandas, SciPy, scikit-learn, PuLP, and PySpark, is necessary. A strong background in mathematical modeling and optimization techniques, experience with heuristic methods and algorithm development, and the ability to analyze complex datasets and derive actionable insights are also important technical skills. Excellent written and verbal communication skills and the ability to work effectively both independently and as part of a team are essential. Preferred qualifications include experience with additional modeling languages and tools such as AMPL, LINGO, and AIMMS, as well as familiarity with machine learning techniques and their applications in operations research.

About Tredence:
Tredence, founded in 2013, is dedicated to transforming data into actionable insights for over 50 Fortune 500 clients across industries. With headquarters in San Jose and a presence in 5 countries, Tredence's mission is to be the world's most indispensable analytics partner. By blending deep domain expertise with advanced AI and data science, Tredence aims to drive unparalleled business value. Join Tredence on this innovative journey and contribute to the impactful work being done in the analytics field.
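To illustrate the heuristic methods the role calls for, here is a minimal stdlib-only sketch: a greedy value-density heuristic for a toy knapsack-style allocation. This is an assumption-laden example (the function and data are hypothetical); in practice an exact solver such as CPLEX, Gurobi, or PuLP would be used when optimality guarantees matter, with heuristics reserved for scale or warm starts.

```python
def greedy_knapsack(items, capacity):
    """Heuristic: pick items by value/weight ratio until capacity runs out.

    items: list of (name, value, weight) with weight > 0.
    Returns (chosen_names, total_value). Fast but not guaranteed optimal,
    which is the usual trade-off against an exact MIP formulation.
    """
    chosen, total_value, remaining = [], 0, capacity
    ranked = sorted(items, key=lambda it: it[1] / it[2], reverse=True)
    for name, value, weight in ranked:
        if weight <= remaining:
            chosen.append(name)
            total_value += value
            remaining -= weight
    return chosen, total_value

projects = [("a", 60, 10), ("b", 100, 20), ("c", 120, 30)]
print(greedy_knapsack(projects, capacity=50))  # (['a', 'b'], 160)
```

Note that the exact optimum here is items b and c (value 220), so the example also shows why heuristic output needs the validation step the posting mentions.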
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
pune, maharashtra
On-site
The IT Quality Intermediate Analyst role is a developmental position within the Technology Quality job family at Citigroup. As an Intermediate Analyst, you will be responsible for independently addressing various challenges and will have the freedom to solve complex problems. Your role will involve integrating specialized knowledge with industry standards, understanding how your team contributes to the overall objectives, and applying analytical thinking and data analysis tools effectively. Attention to detail is crucial in making judgments and recommendations based on factual information, as your decisions may have a broader business impact.

Your responsibilities will include supporting initiatives related to User Acceptance Testing (UAT) processes and product rollouts. You will collaborate with technology project managers, UAT professionals, and users to design and implement appropriate scripts/plans for application testing. Additionally, you will support automation initiatives by using existing automation tools for testing and ensuring the automation of assigned tasks through analysis.

In this role, you will conduct various process monitoring, product evaluation, and audit assignments of moderate complexity. You will report issues, make recommendations for solutions, and ensure project standards and procedures are documented and followed throughout the software development life cycle. Monitoring products and processes for conformance to standards and procedures, documenting findings, and conducting root cause analyses to provide recommended improvements will also be part of your responsibilities. You will need to gather, maintain, and create reports on quality metrics, exhibit a good understanding of procedures and concepts within your technical area, and have a basic knowledge of these elements in other areas. By making evaluative judgments based on factual information and resolving problems with acquired technical experience, you will directly impact the business and ensure the quality of work provided by yourself and others.

Qualifications:
- 3-6 years of Quality Assurance (QA) experience, preferably in the Financial Services industry
- Experience in Big Data, ETL testing, and requirement reviews
- Understanding of QA within the Software Development Lifecycle (SDLC) and QA methodologies
- Knowledge of quality processes
- Logical analysis skills, attention to detail, and problem-solving abilities
- Ability to work to deadlines
- Clear and concise written and verbal communication skills
- Experience in defining, designing, and executing test cases
- Automation experience using the Python tech stack
- Experience in Python and PySpark

Education:
- Bachelor's/University degree or equivalent experience

You will also provide informal guidance to new team members, perform other assigned duties and functions, and assess risk when making business decisions to safeguard Citigroup and its assets. Your role will involve compliance with laws, rules, and regulations, as well as adherence to policies and ethical standards. If you require a reasonable accommodation due to a disability for using search tools or applying for a career opportunity, please review the Accessibility at Citi information. For more details on Citigroup's EEO Policy Statement and the Know Your Rights poster, please refer to the relevant documents.
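The posting asks for experience defining, designing, and executing test cases with Python-based automation. A common pattern is a table-driven test: cases live in a data structure and one loop executes them all. The rule under test, the thresholds, and the case table below are all hypothetical, shown only to illustrate the pattern.

```python
def classify_transaction(amount, threshold=10000):
    """Toy rule under test: flag transactions at or above the threshold."""
    return "flagged" if amount >= threshold else "clear"

# Test-case table: (description, input, expected). Adding a case is one line,
# and the description pinpoints any failure without rereading the assertions.
cases = [
    ("below threshold", 9999, "clear"),
    ("at threshold boundary", 10000, "flagged"),
    ("well above threshold", 25000, "flagged"),
]

for desc, amount, expected in cases:
    actual = classify_transaction(amount)
    assert actual == expected, f"{desc}: expected {expected}, got {actual}"
print("all cases passed")
```

In a real QA suite the same table would typically feed `pytest.mark.parametrize`, and for Big Data/ETL testing each "case" would compare source and target datasets rather than a single value.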
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
karnataka
On-site
Are you passionate about developing mission-critical, high-quality software solutions using cutting-edge technology in a dynamic environment? Join Compliance Engineering, a global team of over 300 engineers and scientists working on the most complex, mission-critical problems. You will build and operate platforms and applications to prevent, detect, and mitigate regulatory and reputational risks, leveraging the latest technology and vast amounts of data.

As part of a significant uplift and rebuild of the Compliance application portfolio, Compliance Engineering is seeking Systems Engineers. As a member of the team, you will partner with users, development teams, and colleagues globally to onboard new business initiatives, test Compliance Surveillance coverage, learn from experts, and mentor team members. You will work with technologies like Java, Python, PySpark, and Big Data tools to innovate, design, implement, test, and maintain software across products.

The ideal candidate will have a Bachelor's or Master's degree in Computer Science or a related field, expertise in Java development, debugging, and problem-solving, and experience in project management. Strong communication skills are essential. Desired experience includes relational databases, Hadoop, big data technologies, and knowledge of the financial industry (especially the Capital Markets domain), compliance, or risk functions.

Goldman Sachs, a leading global investment banking, securities, and investment management firm, is committed to diversity, inclusion, and individual growth. The firm provides various opportunities for professional and personal development, fostering a culture of diversity and inclusion. Goldman Sachs is an equal employment/affirmative action employer. Accommodations for candidates with special needs or disabilities are available during the recruiting process. Learn more at GS.com/careers.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As an Azure Data Engineer within our team, you will play a crucial role in enhancing and supporting existing Data & Analytics solutions using Azure data engineering technologies. Your primary focus will be on developing, maintaining, and deploying IT products and solutions that cater to various business users, with a strong emphasis on performance, scalability, and reliability.

Your responsibilities will include incident classification and prioritization, log analysis, coordination with SMEs, escalation of complex issues, root cause analysis, stakeholder communication, code reviews, bug fixing, enhancements, and performance tuning. You will design, develop, and support data pipelines using Azure services, implement ETL techniques, cleanse and transform datasets, orchestrate workflows, and collaborate with both business and technical teams.

To excel in this role, you should possess 3 to 6 years of experience in IT and Azure data engineering technologies, with a strong command of Azure Databricks, Azure Synapse, ADLS Gen2, Python, PySpark, SQL, JSON, Parquet, Teradata, Snowflake, Azure DevOps, and CI/CD pipeline deployments. Knowledge of data warehousing concepts and data modeling best practices, along with familiarity with SNOW (ServiceNow), will be advantageous.

In addition to technical skills, you should demonstrate the ability to work independently and in virtual teams, strong analytical and problem-solving abilities, experience with Agile practices, effective task and time management, and clear communication and documentation skills. Experience with Business Intelligence tools, particularly Power BI, and the DP-203 certification (Azure Data Engineer Associate) will be considered a plus.

Join us in Chennai, Tamil Nadu, India, and be part of our dynamic team working in the FMCG/Foods/Beverage domain.
Posted 1 week ago
8.0 - 12.0 years
0 Lacs
maharashtra
On-site
You will leverage your technical skills to source and prepare data from a variety of data sources, such as traditional databases, NoSQL, Hadoop, and cloud platforms. Your role will involve working closely with data analytics staff within our team to understand requirements and collaborate on optimizing solutions while developing new ideas. Additionally, you will collaborate with the Data Domain Architect lead on all aspects of Data Domain Architecture, including resource management, and engage with Tech, Product, and CIB data partners to research and implement use cases effectively.

To be successful in this role, you should have a minimum of 8+ years of relevant work experience in roles such as software developer, data/ML engineer, data scientist, or business intelligence engineer. A Bachelor's degree in Computer Science, Financial Engineering, MIS, Mathematics, Statistics, or another quantitative subject is required. You should possess strong analytical thinking and problem-solving skills, the ability to grasp business requirements and communicate complex information to diverse audiences, the ability to collaborate across teams and at varying levels using a consultative approach, and a general understanding of Agile methodology.

Moreover, you should have knowledge of cloud platforms and hands-on experience with tools like Databricks or Snowflake; proficiency in traditional databases such as Oracle and SQL Server; strong overall SQL skills and experience with Python/PySpark; and familiarity with ETL frameworks and tools, including Alteryx. A fundamental understanding of data architecture, the ability to profile, clean, and extract data from various sources, and experience developing analytics and insights using tools like Tableau and Alteryx are important. Exposure to data science, AI/ML, and model development will be beneficial in this role.
Posted 1 week ago