5.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with the Visualization team on data quality and troubleshooting needs.

The Senior Data Engineer will:
- Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
- Support the development, testing, and maintenance of data pipelines and platforms, so that quality data can be relied upon within business dashboards and tools.
- Create, maintain, and support the data platform and infrastructure that enables the analytics front end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing systems and databases, with proper verification and validation processes.

Your Key Responsibilities

Data Engineering
- Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS-native technologies to support continuing increases in data source, volume, and complexity.
- Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data environment.
- Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity.
- Implement processes and systems to provide accurate, available data to key stakeholders, downstream systems, and business processes.
- Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth.
- Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
- Support standardization, customization, and ad hoc data analysis, and develop the mechanisms to ingest, analyze, validate, normalize, and clean data.
- Write unit, integration, and performance test scripts, perform the data analysis required to troubleshoot data-related issues, and assist in their resolution.
- Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.
- Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
- Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.
- Solve complex data problems to deliver insights that help achieve business objectives.
- Implement statistical data quality procedures on new data sources by applying rigorous, iterative data analytics.

Relationship Building and Collaboration
- Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
- Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
- Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
- Collaborate with AI/ML engineers to create data products for analytics and data science team members to improve productivity.
- Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices, promoting the values of learning and growth.

Skills And Attributes For Success

Technical/Functional Expertise
- Advanced experience with and understanding of data/Big Data, data integration, data modelling, AWS, and cloud technologies.
- Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred but not required.
- Ability to build processes that support data transformation, workload management, data structures, dependencies, and metadata.
- Ability to build and optimize queries (SQL), data sets, Big Data pipelines, and architectures for structured and unstructured data.
- Experience with or knowledge of Agile software development methodologies.

Leadership
- Strategic mindset: thinks above the minor, tactical details and focuses on the long-term, strategic goals of the organization.
- Advocate of a culture of collaboration and psychological safety.

Decision-making and Autonomy
- Champions the shift from manual decision-making to data-driven, strategic decision-making.
- Proven track record of applying critical thinking to resolve issues and overcome obstacles.

Interaction
- Proven track record of collaboration and of developing strong working relationships with key stakeholders by building trust and being a true business partner.
- Demonstrated success collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures.

Innovation
- Passion for re-imagining solutions, processes, and end-user experiences by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
- Advocate of a culture of growth mindset, agility, and continuous improvement.

Complexity
- Demonstrates high multicultural sensitivity to lead teams effectively.
- Ability to coordinate and problem-solve among larger teams.

To qualify for the role, you must have the following:

Essential Skillsets
- Bachelor’s degree in Engineering, Computer Science, Data Science, or a related field.
- 5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development.
- Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines.
- Proven track record of designing and implementing complex data solutions.
- Demonstrated understanding of and experience using:
  - Data engineering programming languages (e.g., Python)
  - Distributed data technologies (e.g., PySpark)
  - Cloud platform deployment and tools (e.g., Kubernetes)
  - Relational SQL databases
  - DevOps and continuous integration
  - AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
  - Databricks/ETL, IICS/DMS, GitHub, EventBridge, Tidal
- Understanding of database architecture and administration.
- Applies the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, improving code quality, test coverage, and the automation of resilient test cases.
- Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark) and AWS services to design, maintain, and optimize data architectures and pipelines that fit business goals.
- Strong organizational skills, with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions.
- Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners.
- Strong problem-solving and troubleshooting skills.
- Ability to work in a fast-paced environment and adapt to changing business priorities.

Desired Skillsets
- Master’s degree in Engineering, Computer Science, Data Science, or a related field.
- Experience in a global working environment.

Travel Requirements
- Access to transportation to attend meetings.
- Ability to fly to meetings regionally and globally.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
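For illustration only (not part of the posting): a minimal PySpark sketch of the kind of reconciliation and data-quality check this role describes. The bucket paths, column names, and rules are hypothetical assumptions, not EY specifics.

```python
# Hypothetical sketch of a data-quality/reconciliation step in a PySpark
# pipeline; paths, columns, and rules below are illustrative assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-reconciliation").getOrCreate()

source = spark.read.parquet("s3://raw-bucket/sales/")        # hypothetical path
target = spark.read.parquet("s3://warehouse-bucket/sales/")  # hypothetical path

# Row-count reconciliation between the source feed and the warehouse copy.
src_count, tgt_count = source.count(), target.count()
if src_count != tgt_count:
    raise ValueError(f"Row-count mismatch: source={src_count}, target={tgt_count}")

# Simple column-level quality rules: no null business keys, no negative amounts.
violations = target.filter(
    F.col("order_id").isNull() | (F.col("amount") < 0)
).count()
if violations > 0:
    raise ValueError(f"{violations} rows violate data-quality rules")

print("Reconciliation and quality checks passed")
```

In practice a check like this would run as a pipeline stage and feed a monitoring dashboard rather than print to stdout.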
Posted 6 days ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
- Create solution outlines and macro designs describing end-to-end product implementations in data platforms, covering system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
- Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation.
- Contribute to the development of reusable components, assets, and accelerators to support capability development.
- Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud, and related technologies.
- Participate in customer PoCs to deliver the agreed outcomes.
- Participate in delivery and product reviews and quality assurance, and act as design authority.

Preferred Education
Non-Degree Program

Required Technical And Professional Expertise
- Experience designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
- Experience in data engineering and in architecting and implementing data platforms.
- Azure Cloud Platform experience is mandatory: ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake, Azure Purview, Microsoft Fabric, Kubernetes, Terraform, Airflow.
- Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.

Preferred Technical And Professional Experience
- Experience architecting complex data platforms on the Azure Cloud Platform and on-premises.
- Experience with and exposure to implementing Data Fabric and Data Mesh concepts and solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
- Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake data glossary, etc.
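For illustration only: a minimal sketch of the ingestion-to-serving-layer layering this posting names, reading raw events from ADLS Gen2 and appending to a Delta table. The storage account, container, paths, and the event_date column are placeholder assumptions, and the job assumes the Delta Lake package and ADLS credentials are already configured.

```python
# Illustrative only: minimal Spark ingestion from ADLS Gen2 into a Delta
# serving table; account, container, paths, and columns are placeholders.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("adls-ingest")
    # Standard Delta Lake session configs (requires the delta-spark package).
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog",
            "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .getOrCreate()
)

# abfss://<container>@<account>.dfs.core.windows.net/<path> is the standard
# ADLS Gen2 URI scheme; auth configuration is omitted here for brevity.
raw_path = "abfss://landing@examplelake.dfs.core.windows.net/events/"
events = spark.read.json(raw_path)

# Serving layer: append curated events to a Delta table partitioned by date.
(events.write.format("delta")
       .mode("append")
       .partitionBy("event_date")  # assumes the payload carries event_date
       .save("abfss://curated@examplelake.dfs.core.windows.net/events_delta/"))
```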
Posted 6 days ago
5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Objectives and Purpose The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs. The Senior Data Engineer will: Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses. Support development testing and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools. Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes. Your Key Responsibilities Data Engineering Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity. Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data Environment. Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity. Implement processes, systems to provide accurate and available data to key stakeholders, downstream systems, & business processes. Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions. Support standardization, customization and ad hoc data analysis and develop the mechanisms to ingest, analyze, validate, normalize, and clean data. Write unit/integration/performance test scripts and perform data analysis required to troubleshoot data related issues and assist in the resolution of data issues. Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes. Lead the evaluation, implementation and deployment of emerging tools and processes for analytic data engineering to improve productivity. Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes. Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics. Solve complex data problems to deliver insights that help achieve business objectives. Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics. Relationship Building and Collaboration Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives. Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling. 
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value. Collaborate with AI/ML engineers to create data products for analytics and data scientist team members to improve productivity. Advise, consult, mentor and coach other data and analytic professionals on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions. Skills And Attributes For Success Technical/Functional Expertise Advanced experience and understanding of data/Big Data, data integration, data modelling, AWS, and cloud technologies. Strong business acumen with knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred, but not required. Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata. Ability to build and optimize queries (SQL), data sets, 'Big Data' pipelines, and architectures for structured and unstructured data. Experience with or knowledge of Agile Software Development methodologies. Leadership Strategic mindset of thinking above the minor, tactical details and focusing on the long-term, strategic goals of the organization. Advocate of a culture of collaboration and psychological safety. Decision-making and Autonomy Shift from manual decision-making to data-driven, strategic decision-making. Proven track record of applying critical thinking to resolve issues and overcome obstacles. Interaction Proven track record of collaboration and developing strong working relationships with key stakeholders by building trust and being a true business partner. Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures. Innovation Passion for re-imagining new solutions, processes, and end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions. Advocate of a culture of growth mindset, agility, and continuous improvement. Complexity Demonstrates high multicultural sensitivity to lead teams effectively. Ability to coordinate and problem-solve amongst larger teams. 
To qualify for the role, you must have the following: Essential Skillsets Bachelor’s degree in Engineering, Computer Science, Data Science, or related field 5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and development and optimization of ETL pipelines Proven track record of designing and implementing complex data solutions Demonstrated understanding and experience using: Data Engineering Programming Languages (i.e., Python) Distributed Data Technologies (e.g., Pyspark) Cloud platform deployment and tools (e.g., Kubernetes) Relational SQL databases DevOps and continuous integration AWS cloud services and technologies (i.e., Lambda, S3, DMS, Step Functions, Event Bridge, Cloud Watch, RDS) Databricks/ETL IICS/DMS GitHub Event Bridge, Tidal Understanding of database architecture and administration Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevate environments, fostering enhanced code quality, test coverage, and automation of resilient test cases Processes high proficiency in code programming languages (e.g., SQL, Python, Pyspark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners Strong problem solving and troubleshooting skills Ability to work in a fast-paced environment and adapt to changing business priorities Desired Skillsets Masters Degree in Engineering, Computer Science, Data Science, or related field Experience in a global working environment Travel Requirements Access to transportation to attend meetings Ability to fly to meetings regionally and globally EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
Posted 6 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Objectives and Purpose The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs. The Senior Data Engineer will: Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses. Support development testing and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools. Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes. Your Key Responsibilities Data Engineering Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity. Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data Environment. Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity. Implement processes, systems to provide accurate and available data to key stakeholders, downstream systems, & business processes. Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions. Support standardization, customization and ad hoc data analysis and develop the mechanisms to ingest, analyze, validate, normalize, and clean data. Write unit/integration/performance test scripts and perform data analysis required to troubleshoot data related issues and assist in the resolution of data issues. Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes. Lead the evaluation, implementation and deployment of emerging tools and processes for analytic data engineering to improve productivity. Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes. Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics. Solve complex data problems to deliver insights that help achieve business objectives. Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics. Relationship Building and Collaboration Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives. Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling. 
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value. Collaborate with AI/ML engineers to create data products for analytics and data scientist team members to improve productivity. Advise, consult, mentor and coach other data and analytic professionals on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions. Skills And Attributes For Success Technical/Functional Expertise Advanced experience and understanding of data/Big Data, data integration, data modelling, AWS, and cloud technologies. Strong business acumen with knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred, but not required. Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata. Ability to build and optimize queries (SQL), data sets, 'Big Data' pipelines, and architectures for structured and unstructured data. Experience with or knowledge of Agile Software Development methodologies. Leadership Strategic mindset of thinking above the minor, tactical details and focusing on the long-term, strategic goals of the organization. Advocate of a culture of collaboration and psychological safety. Decision-making and Autonomy Shift from manual decision-making to data-driven, strategic decision-making. Proven track record of applying critical thinking to resolve issues and overcome obstacles. Interaction Proven track record of collaboration and developing strong working relationships with key stakeholders by building trust and being a true business partner. Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures. Innovation Passion for re-imagining new solutions, processes, and end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions. Advocate of a culture of growth mindset, agility, and continuous improvement. Complexity Demonstrates high multicultural sensitivity to lead teams effectively. Ability to coordinate and problem-solve amongst larger teams. 
To qualify for the role, you must have the following: Essential Skillsets Bachelor’s degree in Engineering, Computer Science, Data Science, or related field 5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and development and optimization of ETL pipelines Proven track record of designing and implementing complex data solutions Demonstrated understanding and experience using: Data Engineering Programming Languages (i.e., Python) Distributed Data Technologies (e.g., Pyspark) Cloud platform deployment and tools (e.g., Kubernetes) Relational SQL databases DevOps and continuous integration AWS cloud services and technologies (i.e., Lambda, S3, DMS, Step Functions, Event Bridge, Cloud Watch, RDS) Databricks/ETL IICS/DMS GitHub Event Bridge, Tidal Understanding of database architecture and administration Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevate environments, fostering enhanced code quality, test coverage, and automation of resilient test cases Processes high proficiency in code programming languages (e.g., SQL, Python, Pyspark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners Strong problem solving and troubleshooting skills Ability to work in a fast-paced environment and adapt to changing business priorities Desired Skillsets Masters Degree in Engineering, Computer Science, Data Science, or related field Experience in a global working environment Travel Requirements Access to transportation to attend meetings Ability to fly to meetings regionally and globally EY | Building a better working world EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today. Show more Show less
Posted 6 days ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None

What You Will Do
- Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
- Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
- Build and optimize data architectures for operational and analytical purposes.
- Collaborate with cross-functional teams to gather and define data requirements.
- Implement data quality, data governance, and data security practices.
- Manage and optimize cloud-based data platforms (Azure/AWS).
- Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources.
- Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
- Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
- Develop frameworks for data ingestion, transformation, and validation.
- Mentor junior data engineers and guide best practices in data engineering.
- Evaluate and integrate new technologies and tools to improve data infrastructure.
- Ensure compliance with data privacy regulations (HIPAA, etc.).
- Monitor performance and troubleshoot issues across the data ecosystem.
- Automate deployment of data pipelines using GitHub Actions / Azure DevOps.

What You Will Need
- Bachelor's or master's degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
- Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
- Extensive working experience with advanced SQL and a deep understanding of SQL.
- Good experience in Azure Data Factory (ADF), Databricks, Python, and PySpark.
- Good experience with modern data storage concepts (data lake, lakehouse).
- Experience in other cloud services (AWS) and data processing technologies is an added advantage.
- Ability to enhance and develop ETL processes, and to resolve defects in them, using cloud services.
- Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, and EDI X12 files, as well as Access databases.
- Experience with software development methodologies (Agile, Waterfall) and version control tools.
- Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
- Good communication skills.

What Would Be Nice To Have
- AWS ETL platform: Glue, S3.
- One or more programming languages such as Java or .NET.
- Experience in the US healthcare domain and insurance claim processing.

What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.

About Guidehouse
Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance, including the Fair Chance Ordinance of Los Angeles and San Francisco.
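For illustration only: a hedged sketch of the large-volume file ingestion this posting describes, reading client CSV feeds with an explicit schema and quarantining malformed rows. The claims schema, paths, and column names are assumptions, not Guidehouse specifics.

```python
# Hypothetical sketch of ingesting large client CSV feeds with an explicit
# schema and a quarantine path for malformed rows; all names are assumptions.
from pyspark.sql import SparkSession
from pyspark.sql.types import (DateType, DecimalType, StringType, StructField,
                               StructType)

spark = SparkSession.builder.appName("claims-ingest").getOrCreate()

schema = StructType([
    StructField("claim_id", StringType(), nullable=False),
    StructField("member_id", StringType(), nullable=True),
    StructField("service_date", DateType(), nullable=True),
    StructField("billed_amount", DecimalType(12, 2), nullable=True),
    StructField("_corrupt_record", StringType(), nullable=True),  # catches bad rows
])

claims = (
    spark.read.option("header", "true")
    .option("mode", "PERMISSIVE")  # keep bad rows instead of failing the job
    .option("columnNameOfCorruptRecord", "_corrupt_record")
    .schema(schema)
    .csv("abfss://landing@examplelake.dfs.core.windows.net/claims/*.csv")
)
claims.cache()  # Spark requires caching before filtering on _corrupt_record

good = claims.filter(claims["_corrupt_record"].isNull()).drop("_corrupt_record")
bad = claims.filter(claims["_corrupt_record"].isNotNull())

bad.write.mode("append").parquet("/quarantine/claims/")  # review later
good.write.mode("append").parquet("/curated/claims/")
```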
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee. Show more Show less
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary

Job Title: Senior Data Scientist / Team Lead

Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.

Key Responsibilities:
- Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
- Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
- Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
- Assist and participate in pre-sales, client pursuits, and proposals.
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members.

Qualifications:
- 6-10 years of relevant hands-on experience in data science, machine learning, and statistical modeling.
- Bachelor’s or master’s degree in a quantitative field.
- Has led a 3-5 member team on multiple end-to-end DS/ML projects.
- Excellent communication and client/stakeholder management skills.
- Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
- Model deployment on cloud or on-prem is an added advantage.
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
- Follows research papers and can comprehend, innovate on, and present the best DS/ML approaches and solutions.
- AI/Cloud certification from a premier institute is preferred.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300022
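For illustration only: a minimal scikit-learn sketch of the classification workflow the qualifications list names (train/test split, model fit, evaluation). A synthetic dataset stands in for real client data; the model choice and parameters are illustrative, not part of the role.

```python
# Illustrative classification workflow with scikit-learn; the synthetic
# dataset and RandomForest parameters are stand-in assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data in place of a real client dataset.
X, y = make_classification(n_samples=5000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Precision/recall/F1 on the holdout set.
print(classification_report(y_test, model.predict(X_test)))
```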
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Position Summary

Job Title: Senior Data Scientist / Team Lead

Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.

Key Responsibilities:
- Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
- Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
- Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
- Assist and participate in pre-sales, client pursuits, and proposals.
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members.

Qualifications:
- 6-10 years of relevant hands-on experience in data science, machine learning, and statistical modeling.
- Bachelor’s or master’s degree in a quantitative field.
- Has led a 3-5 member team on multiple end-to-end DS/ML projects.
- Excellent communication and client/stakeholder management skills.
- Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
- Model deployment on cloud or on-prem is an added advantage.
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
- Follows research papers and can comprehend, innovate on, and present the best DS/ML approaches and solutions.
- AI/Cloud certification from a premier institute is preferred.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300022
Posted 6 days ago
5.0 - 10.0 years
18 - 22 Lacs
Gurugram
Hybrid
Senior Data Engineer - Python, PySpark, AWS - 5+ years, Gurgaon

Summary: An excellent opportunity for someone with a minimum of five years of experience and expertise in building data pipelines. Candidates must have experience in Python, PySpark, and AWS.

Location: Gurgaon (Hybrid)

Your Future Employer: One of the largest insurance providers.

Responsibilities:
- Design, develop, and maintain large-scale data pipelines that can handle large datasets from multiple sources.
- Perform real-time data replication and batch processing of data using distributed computing platforms like Spark, Kafka, etc.
- Optimize the performance of data processing jobs and ensure system scalability and reliability.
- Collaborate with DevOps teams to manage infrastructure, including cloud environments like AWS.
- Collaborate with data scientists, analysts, and business stakeholders to develop tools and platforms that enable advanced analytics and reporting.

Requirements:
- Hands-on experience with AWS services such as S3, DMS, Lambda, EMR, Glue, Redshift, RDS (Postgres), Athena, Kinesis, etc.
- Expertise in data modeling and knowledge of modern file and table formats.
- Proficiency in programming languages such as Python, PySpark, and SQL/PLSQL for implementing data pipelines and ETL processes.
- Experience architecting or deploying cloud/virtualization solutions (such as data lake, EDW, and marts) in the enterprise.
- Experience with cloud or hybrid-cloud (preferably AWS) solutions supporting a data strategy for data lake, BI, and analytics.

What is in it for you:
- A stimulating working environment with equal employment opportunities.
- Growing your skills while working with industry leaders and top brands.
- A meritocratic culture with great career progression.

Reach us: If you feel that you are the right fit for the role, please share your updated CV at randhawa.harmeen@crescendogroup.in

Disclaimer: Crescendo Global specializes in senior to C-level niche recruitment. We are passionate about empowering job seekers and employers with an engaging, memorable job search and leadership hiring experience. Crescendo Global does not discriminate on the basis of race, religion, color, origin, gender, sexual orientation, age, marital status, veteran status, or disability status.
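For illustration only: a hedged sketch of the real-time replication pattern the responsibilities above describe, streaming from Kafka into S3 with Spark Structured Streaming. The broker, topic, and sink paths are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Hypothetical Kafka -> Spark Structured Streaming -> S3 replication job;
# broker, topic, and paths are placeholder assumptions.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-replication").getOrCreate()

stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # placeholder broker
    .option("subscribe", "orders")                     # placeholder topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers key/value as binary; cast to strings and keep the event time.
orders = stream.select(
    F.col("key").cast("string"),
    F.col("value").cast("string").alias("payload"),
    F.col("timestamp"),
)

# Micro-batch sink to S3 with a checkpoint for exactly-once file output.
query = (
    orders.writeStream.format("parquet")
    .option("path", "s3://lake/orders/")               # placeholder sink
    .option("checkpointLocation", "s3://lake/_chk/orders/")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```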
Posted 6 days ago
7.0 - 9.0 years
15 - 18 Lacs
Pune
Work from Office
We are looking for a highly skilled Senior Databricks Developer to join our data engineering team. You will be responsible for building scalable and efficient data pipelines using Databricks, Apache Spark, Delta Lake, and cloud-native services (Azure/AWS/GCP). You will work closely with data architects, data scientists, and business stakeholders to deliver high-performance, production-grade solutions.

Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines on Databricks using PySpark, Spark SQL, and optionally Scala.
- Work with Databricks components including Workspace, Jobs, DLT (Delta Live Tables), Repos, and Unity Catalog.
- Implement and optimize Delta Lake solutions aligned with Lakehouse and medallion architecture best practices.
- Collaborate with data architects, engineers, and business teams to understand requirements and deliver production-grade solutions.
- Integrate CI/CD pipelines using tools such as Azure DevOps, GitHub Actions, or similar for Databricks deployments.
- Ensure data quality, consistency, governance, and security by using tools like Unity Catalog or Azure Purview.
- Use orchestration tools such as Apache Airflow, Azure Data Factory, or Databricks Workflows to schedule and monitor pipelines.
- Apply strong SQL skills and data warehousing concepts in data modeling and transformation logic.
- Communicate effectively with technical and non-technical stakeholders to translate business requirements into technical solutions.

Required Skills and Qualifications:
- Hands-on experience in data engineering, specifically with Databricks.
- Deep expertise in Databricks Workspace, Jobs, DLT, Repos, and Unity Catalog.
- Strong programming skills in PySpark and Spark SQL; Scala experience is a plus.
- Proficiency with one or more cloud platforms: Azure, AWS, or GCP.
- Experience with Delta Lake, Lakehouse architecture, and medallion architecture patterns.
- Proficiency in building CI/CD pipelines for Databricks using DevOps tools.
- Familiarity with orchestration and ETL/ELT tools such as Airflow, ADF, or Databricks Workflows.
- Strong understanding of data governance, metadata management, and lineage tracking.
- Excellent analytical, communication, and stakeholder management skills.
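For illustration only: a minimal bronze-to-silver step in the medallion pattern this posting describes, using a Delta Lake MERGE for idempotent upserts. The table paths and the customer_id key are assumptions; on Databricks the Delta libraries below are available out of the box.

```python
# Illustrative bronze -> silver medallion step using Delta Lake MERGE;
# paths, columns, and the business key are placeholder assumptions.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

bronze = spark.read.format("delta").load("/mnt/lake/bronze/customers")

# Basic cleansing: trim names, drop records without a business key,
# collapse duplicates so each key appears once.
cleaned = (
    bronze.filter(F.col("customer_id").isNotNull())
    .withColumn("name", F.trim(F.col("name")))
    .dropDuplicates(["customer_id"])
)

# MERGE into the silver table: update existing keys, insert new ones.
# Assumes the silver Delta table already exists at this path.
silver = DeltaTable.forPath(spark, "/mnt/lake/silver/customers")
(
    silver.alias("t")
    .merge(cleaned.alias("s"), "t.customer_id = s.customer_id")
    .whenMatchedUpdateAll()
    .whenNotMatchedInsertAll()
    .execute()
)
```

Because MERGE is keyed on customer_id, re-running the job after a partial failure does not create duplicates, which is what makes the step idempotent.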
Posted 6 days ago
2.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary

AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms.
- Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions.

Job Title: Data Scientist / Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
- Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
- Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
- Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members.

Qualifications:
- 2-7 years of relevant hands-on experience in data science, machine learning, and statistical modeling.
- Bachelor’s or master’s degree in a quantitative field.
- Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
- Model deployment on cloud or on-prem is an added advantage.
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
- Follows research papers and can comprehend, innovate on, and present the best DS/ML approaches and solutions.
- AI/Cloud certification from a premier institute is preferred.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300100
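For illustration only: a small time-series sketch of the kind the qualifications above mention, evaluating a seasonal-naive baseline on a holdout window. The weekly series here is entirely synthetic; a real engagement would compare stronger models against this baseline.

```python
# Illustrative time-series baseline: seasonal-naive forecast on a synthetic
# weekly series, evaluated on a 12-week holdout. All data is synthetic.
import numpy as np
import pandas as pd

# Two years of weekly data with an annual (52-week) seasonal cycle plus noise.
idx = pd.date_range("2023-01-01", periods=104, freq="W")
noise = np.random.default_rng(0).normal(0, 2, 104)
y = pd.Series(100 + 10 * np.sin(2 * np.pi * np.arange(104) / 52) + noise,
              index=idx)

test = y.iloc[-12:]

# Seasonal-naive baseline: predict the value observed 52 weeks earlier.
forecast = y.shift(52).iloc[-12:]

mae = (test - forecast).abs().mean()
print(f"Seasonal-naive MAE over the 12-week holdout: {mae:.2f}")
```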
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Summary

Job Title: Senior Data Scientist / Team Lead

Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.

Key Responsibilities:
- Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
- Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
- Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
- Assist and participate in pre-sales, client pursuits, and proposals.
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members.

Qualifications:
- 6-10 years of relevant hands-on experience in data science, machine learning, and statistical modeling.
- Bachelor’s or master’s degree in a quantitative field.
- Has led a 3-5 member team on multiple end-to-end DS/ML projects.
- Excellent communication and client/stakeholder management skills.
- Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
- Model deployment on cloud or on-prem is an added advantage.
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
- Follows research papers and can comprehend, innovate on, and present the best DS/ML approaches and solutions.
- AI/Cloud certification from a premier institute is preferred.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300022
Posted 6 days ago
2.0 - 7.0 years
0 Lacs
Greater Kolkata Area
On-site
Position Summary

AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.

AI & Data will work with our clients to:
- Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms.
- Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions.

Job Title: Data Scientist / Machine Learning Engineer

Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.

Key Responsibilities:
- Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
- Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
- Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
- Utilize maths/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
- Drive a human-led culture of inclusion and diversity by caring deeply for all team members.

Qualifications:
- 2-7 years of relevant hands-on experience in data science, machine learning, and statistical modeling.
- Bachelor’s or master’s degree in a quantitative field.
- Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
- Expertise in classification, regression, time series, decision trees, optimization, etc.
- Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
- Model deployment on cloud or on-prem is an added advantage.
- Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
- Follows research papers and can comprehend, innovate on, and present the best DS/ML approaches and solutions.
- AI/Cloud certification from a premier institute is preferred.

Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.

Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.

Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.

Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.

Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.

Requisition code: 300100
Posted 6 days ago
2.0 - 7.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Summary
Position Summary
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe action
Job Title: Data Scientist/Machine Learning Engineer
Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.
Key Responsibilities:
Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements
Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members
Qualifications:
2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
Bachelor’s or Master’s degree in a quantitative field
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and libraries such as NumPy, Pandas, and Scikit-learn
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
Model deployment on cloud or on-prem is an added advantage
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
Should follow research papers, and comprehend, innovate, and present the best approaches/solutions related to DS/ML
AI/Cloud certification from a premier institute is preferred
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300100
Posted 6 days ago
5.0 - 7.0 years
9 - 13 Lacs
Bengaluru
Work from Office
What you’ll do:
Utilize advanced mathematical, statistical, and analytical expertise to research, collect, analyze, and interpret large datasets from internal and external sources, providing insight and developing data-driven solutions across the company
Build and test predictive models, including but not limited to credit risk, fraud, response, and offer-acceptance propensity
Take responsibility for the development, testing, validation, tracking, and performance enhancement of statistical models and other BI reporting tools, leading to new, innovative origination strategies within marketing, sales, finance, and underwriting
Leverage advanced analytics to develop innovative portfolio surveillance solutions to track and forecast loan losses, influencing key business decisions related to pricing optimization, credit policy, and overall profitability strategy
Use decision science methodologies and advanced data visualization techniques to implement creative automation solutions within the organization
Initiate and lead analysis that brings actionable insights to all areas of the business, including marketing, sales, collections, and credit decisioning
Develop and refine unit economics models to enable marketing and credit decisions
What you’ll need:
5 to 8 years of experience in data science or a related role, with a focus on Python programming and ML models
Strong Python skills, experience with Jupyter notebooks, and packages such as Polars, Pandas, NumPy, scikit-learn, and Matplotlib
Experience with the ML lifecycle: data preparation, training, evaluation, and deployment
Hands-on experience with GCP services for ML and data science
Experience with vector search and hybrid search techniques
Embeddings generation using BERT, Sentence Transformers, or custom models
Embedding indexing and retrieval (Elastic, FAISS, ScaNN, Annoy)
Experience with LLMs and use cases like RAG
Understanding of semantic vs. lexical search paradigms
Experience with Learning to Rank (LTR) and libraries such as XGBoost and LightGBM with LTR support
Proficiency in SQL and BigQuery
Experience with Dataproc clusters for distributed data processing using Apache Spark or PySpark
Model deployment using Vertex AI, Cloud Run, or Cloud Functions
Familiarity with BM25 ranking (Elasticsearch or OpenSearch) and vector blending
Awareness of search relevance evaluation metrics (precision@k, recall, nDCG, MRR)
Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us:
Clear objectives to ensure alignment with our mission, fostering your meaningful contribution
Abundant opportunities for engagement with customers, product managers, and leadership
Guidance along progressive career paths, with insightful input from managers through ongoing feedforward sessions
Cultivate and leverage robust connections within diverse communities of interest
Choose your mentor to navigate your current endeavors and steer your future trajectory
Embrace continuous learning and upskilling opportunities through Nexversity
Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies
Embrace a hybrid work model promoting work-life balance
Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones
Embark on accelerated career paths to actualize your professional aspirations
Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
Posted 6 days ago
3.0 - 5.0 years
5 - 8 Lacs
Bengaluru
Work from Office
What you’ll be doing:
Assist in developing machine learning models based on project requirements
Work with datasets by preprocessing, selecting appropriate data representations, and ensuring data quality
Perform statistical analysis and fine-tuning using test results
Support training and retraining of ML systems as needed
Help build data pipelines for collecting and processing data efficiently
Follow coding and quality standards while developing AI/ML solutions
Contribute to frameworks that help operationalize AI models
What we seek in you:
Strong grasp of programming languages such as Python and Java
Hands-on experience with at least one cloud platform (GCP preferred)
Experience working with Docker
Environment management (e.g., venv, pip, poetry)
Experience with orchestrators such as Vertex AI Pipelines and Airflow
Understanding of the full, end-to-end ML lifecycle
Data engineering and feature engineering techniques
Experience with ML modelling and evaluation metrics
Experience with TensorFlow, PyTorch, or another framework
Experience with model monitoring
Advanced SQL knowledge
Awareness of streaming concepts such as windowing, late arrival, and triggers
Storage: Cloud SQL, Cloud Storage, Cloud Bigtable, BigQuery, Cloud Spanner, Cloud Datastore, vector databases
Ingest: Pub/Sub, Cloud Functions, App Engine, Kubernetes Engine, Kafka, microservices
Schedule: Cloud Composer, Airflow
Processing: Cloud Dataproc, Cloud Dataflow, Apache Spark, Apache Flink
CI/CD: Bitbucket + Jenkins / GitLab; Infrastructure as Code: Terraform
Life at Next:
At our core, we're driven by the mission of tailoring growth for our customers by enabling them to transform their aspirations into tangible outcomes. We're dedicated to empowering them to shape their futures and achieve ambitious goals. To fulfil this commitment, we foster a culture defined by agility, innovation, and an unwavering commitment to progress. Our organizational framework is both streamlined and vibrant, characterized by a hands-on leadership style that prioritizes results and fosters growth.
Perks of working with us:
Clear objectives to ensure alignment with our mission, fostering your meaningful contribution
Abundant opportunities for engagement with customers, product managers, and leadership
Guidance along progressive career paths, with insightful input from managers through ongoing feedforward sessions
Cultivate and leverage robust connections within diverse communities of interest
Choose your mentor to navigate your current endeavors and steer your future trajectory
Embrace continuous learning and upskilling opportunities through Nexversity
Enjoy the flexibility to explore various functions, develop new skills, and adapt to emerging technologies
Embrace a hybrid work model promoting work-life balance
Access comprehensive family health insurance coverage, prioritizing the well-being of your loved ones
Embark on accelerated career paths to actualize your professional aspirations
Who we are:
We enable high-growth enterprises to build hyper-personalized solutions that transform their vision into reality. With a keen eye for detail, we apply creativity, embrace new technology, and harness the power of data and AI to co-create solutions tailor-made to meet our customers' unique needs. Join our passionate team and tailor your growth with us!
Posted 6 days ago
2.0 - 7.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary
Position Summary
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe action
Job Title: Data Scientist/Machine Learning Engineer
Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.
Key Responsibilities:
Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements
Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members
Qualifications:
2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
Bachelor’s or Master’s degree in a quantitative field
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and libraries such as NumPy, Pandas, and Scikit-learn
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
Model deployment on cloud or on-prem is an added advantage
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
Should follow research papers, and comprehend, innovate, and present the best approaches/solutions related to DS/ML
AI/Cloud certification from a premier institute is preferred
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300100
Posted 6 days ago
4.0 - 8.0 years
7 - 11 Lacs
Hyderabad, Bengaluru
Hybrid
Job Summary
We are seeking a skilled Azure Data Engineer with 4 years of overall experience, including at least 2 years of hands-on experience with Azure Databricks (must). The ideal candidate will have strong expertise in building and maintaining scalable data pipelines and working across cloud-based data platforms.
Key Responsibilities
Design, develop, and optimize large-scale data pipelines using Azure Data Factory, Azure Databricks, and Azure Synapse
Implement data lake solutions and work with structured and unstructured datasets in Azure Data Lake Storage (ADLS)
Collaborate with data scientists, analysts, and engineering teams to design and deliver end-to-end data solutions
Develop ETL/ELT processes and integrate data from multiple sources
Monitor, debug, and optimize workflows for performance and cost-efficiency
Ensure data governance, quality, and security best practices are maintained
Must-Have Skills
4+ years of total experience in data engineering
2+ years of experience with Azure Databricks (PySpark, notebooks, Delta Lake)
Strong experience with Azure Data Factory, Azure SQL, and ADLS
Proficiency in writing SQL queries and Python/Scala scripting
Understanding of CI/CD pipelines and version control systems (e.g., Git)
Solid grasp of data modeling and warehousing concepts
Posted 6 days ago
5.0 - 7.0 years
9 - 11 Lacs
Hyderabad
Work from Office
Role: PySpark Developer
Locations: Multiple
Work Mode: Hybrid
Interview Mode: Virtual (2 rounds)
Type: Contract-to-Hire (C2H)
Job Summary
We are looking for a skilled PySpark Developer with hands-on experience in building scalable data pipelines and processing large datasets. The ideal candidate will have deep expertise in Apache Spark, Python, and modern data engineering tools in cloud environments such as AWS.
Key Skills & Responsibilities
Strong expertise in PySpark and Apache Spark for batch and real-time data processing
Experience in designing and implementing ETL pipelines, including data ingestion, transformation, and validation
Proficiency in Python for scripting, automation, and building reusable components
Hands-on experience with scheduling tools such as Airflow or Control-M to orchestrate workflows
Familiarity with the AWS ecosystem, especially S3 and related file system operations
Strong understanding of Unix/Linux environments and shell scripting
Experience with Hadoop, Hive, and platforms such as Cloudera or Hortonworks
Ability to handle CDC (Change Data Capture) operations on large datasets
Experience in performance tuning, optimizing Spark jobs, and troubleshooting
Strong knowledge of data modeling, data validation, and writing unit test cases
Exposure to real-time and batch integration with downstream/upstream systems
Working knowledge of Jupyter Notebook, Zeppelin, or PyCharm for development and debugging
Understanding of Agile methodologies, with experience in CI/CD tools (e.g., Jenkins, Git)
Preferred Skills
Experience in building or integrating APIs for data provisioning
Exposure to ETL or reporting tools such as Informatica, Tableau, Jasper, or QlikView
Familiarity with AI/ML model development using PySpark in cloud environments
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary
Position Summary
Job Title: Senior Data Scientist/Team Lead
Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.
Key Responsibilities:
Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements
Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
Assist with and participate in pre-sales, client pursuits, and proposals
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members
Qualifications:
6-10 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
Bachelor’s or Master’s degree in a quantitative field
Has led a 3-5 member team on multiple end-to-end DS/ML projects
Excellent communication and client/stakeholder management skills
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and libraries such as NumPy, Pandas, and Scikit-learn
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
Model deployment on cloud or on-prem is an added advantage
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
Should follow research papers, and comprehend, innovate, and present the best approaches/solutions related to DS/ML
AI/Cloud certification from a premier institute is preferred
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300022
Posted 6 days ago
4.0 - 8.0 years
15 - 17 Lacs
Noida, Pune, Bengaluru
Work from Office
Job Description
We are hiring a Data Engineer with strong expertise in Java, Apache Spark, and AWS Cloud. You will design and develop high-performance, scalable applications and data pipelines for cloud-based environments.
Key Skills:
3+ years in Java (Java 8+), Spring Boot, and REST APIs
3+ years in Apache Spark (Core, SQL, Streaming)
Strong hands-on experience with AWS services: S3, EC2, Lambda, Glue, EMR
Experience with microservices, CI/CD, and Git
Good understanding of distributed systems and performance tuning
Nice to Have:
Experience with Kafka, Airflow, Docker, or Kubernetes
AWS Certification (Developer/Architect)
Posted 6 days ago
2.0 - 7.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Summary
Position Summary
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships in vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe action
Job Title: Data Scientist/Machine Learning Engineer
Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.
Key Responsibilities:
Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements
Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members
Qualifications:
2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
Bachelor’s or Master’s degree in a quantitative field
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and libraries such as NumPy, Pandas, and Scikit-learn
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
Model deployment on cloud or on-prem is an added advantage
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
Should follow research papers, and comprehend, innovate, and present the best approaches/solutions related to DS/ML
AI/Cloud certification from a premier institute is preferred
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300100
Posted 6 days ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Credit Risk - Data Analyst (Python + PySpark)
About Us
Capco, a Wipro company, is a global technology and management consulting firm. It was awarded Consultancy of the Year at the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services, and energy sectors, and we are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO?
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers, and other key players in the industry, on projects that will transform the financial services industry.
MAKE AN IMPACT
Innovative thinking, delivery excellence, and thought leadership help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
Job Description
Role: Credit Risk Data Analyst / Senior Data Analyst
Location: Bangalore/Pune
Experience: 8+ years
Responsibilities
Define and obtain the source data required to successfully deliver insights and use cases
Determine the data mapping required to join multiple data sets across multiple sources
Create methods to highlight and report data inconsistencies, allowing users to review and provide feedback
Propose suitable data migration sets to the relevant stakeholders
Assist teams with processing the data migration sets as required
Assist with the planning, tracking, and coordination of the data migration team, the migration run-book, and the scope for each customer
Role Description
Good understanding of wholesale business within banking, with a focus on credit and lending
Hands-on experience with advanced programming languages (e.g., Python or PySpark, SQL)
Hands-on experience applying advanced analytics techniques to banking data for insight generation
Excellent data analysis skills to perform root cause assessments and propose remediation for issues experienced by Risk & Wholesale businesses
Applying business and data design principles, risk management, and technology architecture when interacting with delivery partners
Knowledge of credit risk frameworks such as Basel II and III, IFRS 9, and stress testing, and an understanding of their drivers, is advantageous
Data visualization tools (e.g., Tableau or Qlik Sense) are beneficial
Retail credit / traded credit knowledge: applications will be considered
Posted 6 days ago
3.0 - 6.0 years
5 - 10 Lacs
Jaipur
Work from Office
Role & responsibilities:
Design, develop, and maintain robust ETL/ELT pipelines to ingest and process data from multiple sources
Build and maintain scalable and reliable data warehouses, data lakes, and data marts
Collaborate with data scientists, analysts, and business stakeholders to understand data needs and deliver solutions
Ensure data quality, integrity, and security across all data systems
Optimize data pipeline performance and troubleshoot issues in a timely manner
Implement data governance and best practices in data management
Automate data validation, monitoring, and reporting processes
Preferred candidate profile:
Bachelor's or Master's degree in Computer Science, Engineering, Information Systems, or a related field
Proven experience (X+ years) as a Data Engineer or in a similar role
Strong programming skills in Python, Java, or Scala
Proficiency with SQL and working knowledge of relational databases (e.g., PostgreSQL, MySQL)
Hands-on experience with big data technologies (e.g., Spark, Hadoop)
Familiarity with cloud platforms such as AWS, GCP, or Azure (e.g., S3, Redshift, BigQuery, Data Factory)
Experience with orchestration tools like Airflow or Prefect
Knowledge of data modeling, warehousing, and architecture design principles
Strong problem-solving skills and attention to detail
Perks and benefits:
Free meals
PF and gratuity
Medical and term insurance
Posted 6 days ago
3.0 - 5.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
About the Advanced Analytics Team
The central Advanced Analytics (AA) team at the Abbott Established Pharma Division’s (EPD) headquarters in Basel helps define and lead the transformation towards becoming a global, data-driven company with the help of data and advanced technologies (e.g., machine learning, deep learning, generative AI, computer vision). To us, Advanced Analytics is an important lever for reaching our business targets, now and in the future; it helps differentiate us from our competition and ensures sustainable revenue growth at optimal margins. Hence, the central AA team is an integral part of the Strategy Management Office at EPD, with a very close link and regular interactions with the EPD Senior Leadership Team.
Primary Job Function
With the above requirements in mind, EPD is looking to fill the role of a Cloud Engineer reporting to the Head of AA Product Development. The Cloud Engineer will be responsible for developing applications leveraging AWS services. This role involves leading cloud initiatives, ensuring robust cloud infrastructure, and driving innovation in cloud technologies to support the business's advanced analytics needs.
Core Job Responsibilities
Support the development and maintenance of company-wide frameworks and libraries that enable faster, better, and more informed decision-making within the business, creating significant business value from data & analytics
Ensure data availability and accessibility for the prioritized Advanced Analytics scope, and maintain stable, scalable, and modular data science pipelines from data exploration to deployment
Acquire, ingest, and process data from multiple sources and systems into our cloud platform (AWS), ensuring data integrity and security
Collaborate with data scientists to map data fields to hypotheses, and curate, wrangle, and prepare data for advanced analytical models
Implement and manage robust security measures to ensure compliant handling and management of data, including access strategies aligned with Information Security, Cyber Security, and Data Privacy principles
Develop and deploy smart automation tools based on cloud technologies, aligned with business priorities and needs
Oversee the timely delivery of Advanced Analytics solutions in coordination with the rest of the team and per requirements and timelines, ensuring alignment with business goals
Collaborate closely with the Data Science team and AI Engineers to understand platform needs and lead the development of solutions that support their work
Troubleshoot and resolve issues related to the AWS platform, ensuring minimal downtime and optimal performance
Define and document best practices and strategies for application deployment and infrastructure maintenance
Drive continuous improvement of the AWS cloud platform by contributing and implementing new ideas and processes
Supervisory/Management Responsibilities
Direct reports: none. Indirect reports: none.
Position Accountability/Scope
The Cloud Engineer is accountable for delivering targeted business impact per initiative in collaboration with key stakeholders. This role involves significant responsibility for the architecture and management of Abbott's strategic cloud platforms and AI/AA programs, enabling faster, better, and more informed decision-making within the business.
Minimum Education
Master's degree in a relevant field (e.g., computer science, electrical engineering)
Minimum Experience/Training Required
At least 3-5 years of relevant experience, with a strong track record of building solutions/applications using AWS services
Proven ability to work across structured, semi-structured, and unstructured data, extracting information and identifying linkages across disparate data sets
Proficiency in multiple programming languages: JavaScript, Python, Scala, PySpark, or Java
Extensive knowledge of and experience with various database technologies, including distributed processing frameworks, relational databases, MPP databases, and NoSQL data stores
Deep understanding of Information Security principles to ensure compliant handling and management of data
Significant experience with cloud platforms, preferably AWS and its ecosystem
Advanced knowledge of development in CI/CD (Continuous Integration and Continuous Delivery) environments
Strong background in data warehousing / ETL tools
Proficiency in DevOps practices and tools such as Jenkins, Terraform, etc.
Proficiency in serverless architecture and services such as AWS Lambda
Understanding of security best practices and their implementation in cloud environments
Ability to understand business objectives and create cloud-based solutions to meet them
Results-driven, analytical, and creative thinker
Proven ability to work with cross-functional teams and bridge the gap between business and data science
Fluency in English is a must; additional languages are a plus
Additional Technical Skills
Experience with front-end frameworks, preferably React JS
Knowledge of back-end frameworks such as Django, Flask, or Node.js
Familiarity with database technologies such as Redshift, MySQL, or DynamoDB
Understanding of RESTful API design and development
Experience with version control systems such as CodeCommit
Posted 6 days ago
6.0 - 10.0 years
0 Lacs
Pune, Maharashtra, India
On-site
Summary
Position Summary
Job Title: Senior Data Scientist/Team Lead
Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.
Key Responsibilities:
Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements
Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation
Utilize math/stats, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions
Assist with and participate in pre-sales, client pursuits, and proposals
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members
Qualifications:
6-10 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling
Bachelor’s or Master’s degree in a quantitative field
Has led a 3-5 member team on multiple end-to-end DS/ML projects
Excellent communication and client/stakeholder management skills
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and libraries such as NumPy, Pandas, and Scikit-learn
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI
Model deployment on cloud or on-prem is an added advantage
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA)
Should follow research papers, and comprehend, innovate, and present the best approaches/solutions related to DS/ML
AI/Cloud certification from a premier institute is preferred
Recruiting tips
From developing a stand-out resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities, and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career.
Requisition code: 300022
Posted 6 days ago
PySpark, a powerful data processing framework built on top of Apache Spark and Python, is in high demand in the job market in India. With the increasing need for big data processing and analysis, companies are actively seeking professionals with PySpark skills to join their teams. If you are a job seeker looking to excel in the field of big data and analytics, exploring PySpark jobs in India could be a great career move.
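To give a feel for what day-to-day PySpark work looks like, here is a minimal, illustrative sketch of a typical DataFrame workflow. The file name ("jobs.csv") and column names ("city", "salary") are hypothetical stand-ins for whatever dataset a role involves, not part of any specific job requirement.

# Minimal PySpark sketch: read a CSV, filter, group, and aggregate.
# The file path and column names are invented for illustration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("pyspark-demo").getOrCreate()

# Read a CSV file into a DataFrame, inferring the schema from the data.
df = spark.read.csv("jobs.csv", header=True, inferSchema=True)

# Keep rows with a positive salary, then compute the average salary per city.
result = (
    df.filter(F.col("salary") > 0)
      .groupBy("city")
      .agg(F.avg("salary").alias("avg_salary"))
      .orderBy(F.desc("avg_salary"))
)

result.show()
spark.stop()

Because Spark evaluates DataFrame transformations lazily, nothing is computed until an action such as show() runs, which is what lets Spark optimize and distribute the whole pipeline.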
Here are five major cities in India where companies are actively hiring for PySpark roles:
1. Bangalore
2. Pune
3. Hyderabad
4. Mumbai
5. Delhi
The estimated salary range for PySpark professionals in India varies based on experience levels. Entry-level positions can expect to earn around INR 6-8 lakhs per annum, while experienced professionals can earn upwards of INR 15 lakhs per annum.
In the field of PySpark, a typical career progression may look like this:
1. Junior Developer
2. Data Engineer
3. Senior Developer
4. Tech Lead
5. Data Architect
In addition to PySpark, professionals in this field are often expected to have or develop skills in:
- Python programming
- Apache Spark
- Big data technologies (Hadoop, Hive, etc.)
- SQL
- Data visualization tools (Tableau, Power BI)
Since SQL sits alongside PySpark on nearly every job description, the short sketch below shows how the two work together in practice.
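Spark lets you register any DataFrame as a temporary view and query it with plain SQL, so PySpark and SQL skills reinforce each other. The sketch below assumes an invented "openings" dataset purely for illustration.

# Sketch: registering a DataFrame as a temporary view and querying it
# with Spark SQL. The rows and view name below are invented examples.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("spark-sql-demo").getOrCreate()

data = [("Bangalore", 12.0), ("Pune", 9.5), ("Hyderabad", 10.2)]
df = spark.createDataFrame(data, ["city", "salary_lacs"])

# Expose the DataFrame to the SQL engine as a temporary view.
df.createOrReplaceTempView("openings")

# The same aggregation can now be written in plain SQL.
spark.sql("""
    SELECT city, ROUND(AVG(salary_lacs), 1) AS avg_salary_lacs
    FROM openings
    GROUP BY city
    ORDER BY avg_salary_lacs DESC
""").show()

spark.stop()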
As you explore PySpark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this field and advance your career in the world of big data and analytics. Good luck!