8.0 years
0 Lacs
India
On-site
Coursera was launched in 2012 by Andrew Ng and Daphne Koller with a mission to provide universal access to world-class learning. It is now one of the largest online learning platforms in the world, with 175 million registered learners as of March 31, 2025. Coursera partners with over 350 leading universities and industry leaders to offer a broad catalog of content and credentials, including courses, Specializations, Professional Certificates, and degrees. Coursera's platform innovations enable instructors to deliver scalable, personalized, and verified learning experiences to their learners. Institutions worldwide rely on Coursera to upskill and reskill their employees, citizens, and students in high-demand fields such as GenAI, data science, technology, and business. Coursera is a Delaware public benefit corporation and a B Corp.

Join us in our mission to create a world where anyone, anywhere can transform their life through access to education. We're seeking talented individuals who share our passion and drive to revolutionize the way the world learns.

At Coursera, we are committed to building a globally diverse team and are thrilled to extend employment opportunities to individuals in any country where we have a legal entity. We require candidates to possess eligible working rights and a compatible timezone overlap with their team to facilitate seamless collaboration.

Coursera is committed to enabling flexibility and workspace choices for employees. Our interviews and onboarding are entirely virtual, providing a smooth and efficient experience for our candidates. As an employee, you can select your main way of working, whether from home, one of our offices or hubs, or a co-working space near you.

About The Role
Coursera is seeking a highly skilled and motivated Senior AI Specialist to join our team. This individual will play a pivotal role in developing and deploying advanced AI solutions that enhance our platform and transform the online learning experience. The ideal candidate has 5-8 years of experience, combining deep technical expertise with strong leadership and collaboration skills. This is a unique opportunity to work on cutting-edge projects in AI/ML, including recommendation systems, predictive analytics, and content optimization. We're looking for someone who is not only a strong individual contributor but also capable of mentoring others and influencing technical direction across teams.

Key Responsibilities
Deploy and customize AI/ML solutions using platforms such as Google AI, AWS SageMaker, and other cloud-based tools.
Design, implement, and optimize models for predictive analytics, semantic parsing, topic modeling, and information extraction.
Enhance customer journey analytics to identify actionable insights and improve user experience across Coursera's platform.
Build and maintain AI pipelines for data ingestion, curation, training, evaluation, and model monitoring (see the sketch after this listing).
Conduct advanced data preprocessing and cleaning to ensure high-quality model inputs.
Analyze large-scale datasets (e.g., customer reviews, usage logs) to improve recommendation systems and platform features.
Evaluate and improve the quality of video and audio content using AI-based techniques.
Collaborate cross-functionally with product, engineering, and data teams to integrate AI solutions into user-facing applications.
Support and mentor team members in AI/ML best practices and tools.
Document workflows, architectures, and troubleshooting steps to support long-term scalability and knowledge sharing.
Stay current with emerging AI/ML trends and technologies, advocating for their adoption where applicable.

Qualifications
Education
Bachelor's degree in Computer Science, Machine Learning, or a related technical field (required). Master's or PhD preferred.

Experience
5-8 years of experience in AI/ML development with a strong focus on building production-grade models and pipelines.
Proven track record of deploying scalable AI solutions using platforms like Google Vertex AI, AWS SageMaker, Microsoft Azure, or Databricks.
Strong experience with backend integration, API development, and cloud-native services.

Technical Skills
Programming: Advanced proficiency in Python (including libraries such as TensorFlow, PyTorch, and scikit-learn). Familiarity with Java or similar languages is a plus.
Data Engineering: Expertise in handling large datasets using PySpark, AWS Glue, Apache Airflow, and S3.
Databases: Solid experience with both SQL (PostgreSQL, MySQL) and NoSQL (MongoDB, DynamoDB) systems.
Cloud: Hands-on experience with cloud platforms (AWS, GCP) and tools like Vertex AI, SageMaker, BigQuery, and Lambda.

Soft Skills & Leadership Attributes (Senior Engineer Level)
Technical leadership: Ability to drive end-to-end ownership of AI/ML projects, from design through deployment and monitoring.
Collaboration: Skilled at working cross-functionally with product managers, engineers, and stakeholders to align on priorities and deliver impactful solutions.
Mentorship: Experience mentoring junior engineers and fostering a culture of learning and growth within the team.
Communication: Clear communicator who can explain complex technical concepts to non-technical stakeholders.
Problem-solving: Proactive in identifying challenges and proposing scalable, maintainable solutions.
Adaptability: Comfortable working in a fast-paced, evolving environment with changing priorities and goals.

Coursera is an Equal Employment Opportunity Employer and considers all qualified applicants without regard to race, color, religion, sex, sexual orientation, gender identity, age, marital status, national origin, protected veteran status, disability, or any other legally protected class. If you are an individual with a disability and require a reasonable accommodation to complete any part of the application process, please contact us at accommodations@coursera.org. For California candidates, please review our CCPA Applicant Notice. For our global candidates, please review our GDPR Recruitment Notice.
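For context on the kind of pipeline work this posting describes (cleaning review-style data, then training and evaluating a model), here is a minimal, illustrative Python sketch. The file name, column names, and label rule are hypothetical placeholders, not Coursera's actual stack:

```python
# Minimal sketch of a review-classification pipeline: clean, train, evaluate.
# "course_reviews.csv" and its columns are hypothetical example data.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline

df = pd.read_csv("course_reviews.csv")                # hypothetical dataset
df = df.dropna(subset=["review_text", "rating"])      # basic preprocessing
df["label"] = (df["rating"] >= 4).astype(int)         # assumed label rule

X_train, X_test, y_train, y_test = train_test_split(
    df["review_text"], df["label"], test_size=0.2, random_state=42
)

pipeline = Pipeline([
    ("tfidf", TfidfVectorizer(max_features=50_000, ngram_range=(1, 2))),
    ("clf", LogisticRegression(max_iter=1000)),
])
pipeline.fit(X_train, y_train)

# Evaluation step: in production this report would feed model monitoring.
print(classification_report(y_test, pipeline.predict(X_test)))
```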
Posted 1 week ago
4.0 - 6.0 years
6 - 8 Lacs
Gurgaon
On-site
Ahom Technologies Pvt Ltd is looking for Python Developers.

Who we are
AHOM Technologies Private Limited is a specialized web development company based in Gurugram (Gurgaon), India. We provide high-quality, professional software services to clients across the globe, with a proven track record of catering to clients in India and internationally, including the USA, UK, and Australia. Our team of experts brings extensive experience in providing top-notch solutions to a diverse clientele, ensuring excellence in every project.

What you'll be doing
We are seeking an experienced Python Developer with a strong background in Databricks to join our data engineering and analytics team. The ideal candidate will play a key role in building and maintaining scalable data pipelines and analytical platforms using Python and Databricks, with an emphasis on performance and cloud integration. You will be responsible for:
· Designing, developing, and maintaining scalable Python applications for data processing and analytics.
· Building and managing ETL pipelines using Databricks on Azure/AWS cloud platforms (see the sketch after this listing).
· Collaborating with analysts and other developers to understand business requirements and implement data-driven solutions.
· Optimizing and monitoring existing data workflows to improve performance and scalability.
· Writing clean, maintainable, and testable code following industry best practices.
· Participating in code reviews and providing constructive feedback.
· Maintaining documentation and contributing to project planning and reporting.

What skills & experience you'll bring to us
· Bachelor's degree in Computer Science, Engineering, or a related field
· Prior experience as a Python Developer or in a similar role, with a strong portfolio showcasing your past projects
· 4-6 years of Python experience
· Strong proficiency in Python programming
· Hands-on experience with the Databricks platform (notebooks, Delta Lake, Spark jobs, cluster configuration, etc.)
· Good knowledge of Apache Spark and its Python API (PySpark)
· Experience with cloud platforms (preferably Azure or AWS) and working with Databricks on cloud
· Familiarity with data pipeline orchestration tools (e.g., Airflow, Azure Data Factory)
· Strong understanding of database systems (SQL/NoSQL) and data modeling
· Strong communication skills and ability to collaborate effectively with cross-functional teams

Want to apply? Get in touch today. We're always excited to hear from passionate individuals ready to make a difference and join our team, and we'd love to connect. Reach out to us through our email: shubhangi.chandani@ahomtech.com and hr@ahomtech.com, and let's start the conversation.

*Only immediate joiners need apply
*Candidates from Delhi NCR are preferred

Job Type: Full-time
Pay: ₹600,000.00 - ₹800,000.00 per year
Benefits: Provident Fund
Schedule: Day shift
Application Question(s):
We want to fill this position urgently. Are you an immediate joiner?
Do you have hands-on experience with the Databricks platform (notebooks, Delta Lake, Spark jobs, cluster configuration, etc.)?
Do you have experience with cloud platforms (preferably Azure or AWS) and working with Databricks on cloud?
Work Location: In person
Application Deadline: 15/06/2025
Expected Start Date: 18/06/2025
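The ETL bullet above references this sketch: a minimal extract-transform-load job in PySpark that writes a Delta table, of the sort commonly run on Databricks. The mount paths, column names, and schema are assumptions for illustration; Delta support is assumed to be available on the cluster:

```python
# Minimal PySpark ETL sketch: read raw CSV, cleanse, write a Delta table.
from pyspark.sql import SparkSession, functions as F

# On Databricks, a `spark` session is provided; this line makes the sketch
# self-contained elsewhere.
spark = SparkSession.builder.appName("orders_etl").getOrCreate()

# Extract: raw files landed in cloud storage (hypothetical mount path).
raw = (spark.read
       .option("header", True)
       .option("inferSchema", True)
       .csv("/mnt/raw/orders/"))

# Transform: basic cleansing and column derivation.
clean = (raw
         .dropDuplicates(["order_id"])
         .filter(F.col("amount") > 0)
         .withColumn("order_date", F.to_date("order_ts")))

# Load: persist as Delta for downstream analytics.
(clean.write
 .format("delta")
 .mode("overwrite")
 .save("/mnt/curated/orders/"))
```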
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Full-time

Job Description
Responsible for assembling large, complex sets of data that meet non-functional and functional business requirements.
Responsible for identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using Azure, Databricks, and SQL technologies.
Responsible for transforming conceptual algorithms from R&D into efficient, production-ready code. The data developer must have a strong mathematical background in order to document and maintain the code.
Responsible for integrating finished models into larger data processes using UNIX scripting languages such as ksh, along with Python, Spark, and Scala.
Produce and maintain documentation for released data sets, new programs, shared utilities, or static data, within department standards.
Ensure quality deliverables to clients by following existing quality processes, manually calculating comparison data, developing statistical pass/fail testing (see the sketch after this listing), and visually inspecting data for reasonableness; the requirement is on-time with zero defects.

Qualifications
Education/Training
B.E./B.Tech. with a major in Computer Science, BIS, CIS, Electrical Engineering, Operations Research, or another technical field. Coursework or experience in Numerical Analysis, Mathematics, or Statistics is a plus.

Hard Skills
Proven experience working as a data engineer
Highly proficient in the Spark framework (Python and/or Scala)
Extensive knowledge of Data Warehousing concepts, strategies, and methodologies
Programming experience in Python, SQL, and Scala
Direct experience building data pipelines using Apache Spark (preferably in Databricks) and Airflow
Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, and Azure Data Lake
Experience with big data technologies (Hadoop)
Databricks & Azure Big Data Architecture Certification would be a plus
Must be team oriented with strong collaboration, prioritization, and adaptability skills
Ability to write highly efficient code in terms of performance and memory utilization
Basic knowledge of SQL; capable of handling common functions

Experience
Minimum 5-8 years of experience as a data engineer
Experience modeling or manipulating large amounts of data is a plus
Experience with demographic or retail business is a plus

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee Assistance Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates?
Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
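As a rough illustration of the statistical pass/fail testing mentioned in the quality bullet above, here is a small Python sketch that compares a candidate delivery against a baseline release. The file names, columns, and 5% tolerance are assumptions, not NIQ's actual QA process:

```python
# Pass/fail deliverable check: flag metrics that drift beyond a tolerance
# relative to the prior release. Sample file/column names are hypothetical.
import pandas as pd

TOLERANCE = 0.05  # assumed maximum allowed relative drift per metric


def passfail(new: pd.DataFrame, baseline: pd.DataFrame, cols) -> dict:
    results = {}
    for col in cols:
        base_mean = baseline[col].mean()
        new_mean = new[col].mean()
        # Guard against a zero baseline to avoid division by zero.
        drift = abs(new_mean - base_mean) / (abs(base_mean) or 1.0)
        results[col] = "PASS" if drift <= TOLERANCE else "FAIL"
    return results


new = pd.read_csv("release_candidate.csv")   # hypothetical delivery
baseline = pd.read_csv("prior_release.csv")  # hypothetical reference
print(passfail(new, baseline, ["sales_units", "sales_value"]))
```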
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Explore your next opportunity at a Fortune Global 500 organization. Envision innovative possibilities, experience our rewarding culture, and work with talented teams that help you become better every day. We know what it takes to lead UPS into tomorrow—people with a unique combination of skill + passion. If you have the qualities and drive to lead yourself or teams, there are roles ready to cultivate your skills and take you to the next level.

Job Description:
The Information Technology (IT) Associate is responsible for writing and creating programs and for developing, writing, executing, and analyzing unit test plans for software applications and projects (a minimal unit-test sketch follows this listing). He/she works closely with project teams throughout testing phases. The IT Associate provides needs assessments and analyzes business requirements.

Technology: Python and Databricks skills, plus SQL skills. An interest in AI and analytics.

Employee Type: Permanent

UPS is committed to providing a workplace free of discrimination, harassment, and retaliation.
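As a small illustration of the unit-test work this role describes, here is a pytest-style sketch in Python. The helper function and its behavior are hypothetical stand-ins, not UPS code:

```python
# Function under test: a hypothetical helper that normalizes an ID string.
def normalize_tracking_id(raw: str) -> str:
    """Uppercase the ID and strip hyphen/space separators."""
    return raw.replace("-", "").replace(" ", "").upper()


# Unit tests (run with `pytest`): assert expected behavior and idempotence.
def test_normalize_tracking_id():
    assert normalize_tracking_id("1z-999 aa1") == "1Z999AA1"


def test_normalize_tracking_id_is_idempotent():
    once = normalize_tracking_id("ab-12")
    assert normalize_tracking_id(once) == once
```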
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
At Nielsen, we are passionate about our work to power a better media future for all people by providing powerful insights that drive client decisions and deliver extraordinary results. Our talented, global workforce is dedicated to capturing audience engagement with content - wherever and whenever it’s consumed. Together, we are proudly rooted in our deep legacy as we stand at the forefront of the media revolution. When you join Nielsen, you will join a dynamic team committed to excellence, perseverance, and the ambition to make an impact together. We champion you, because when you succeed, we do too. We enable your best to power our future.

Roles and Responsibilities:
Lead the design, development, and implementation of AI/ML-based solutions across various product lines
Collaborate with product managers, data engineers, and architects to translate business requirements into data science problems and solutions
Take ownership of end-to-end AI/ML modules, from data processing to model development, testing, and deployment
Provide technical leadership to a team of data scientists, ensuring high-quality outputs and adherence to best practices
Conduct cutting-edge research and capability building across the latest Machine Learning, Deep Learning, and AI technologies
Prepare technical documentation, including high-level and low-level design, requirement specifications, and white papers
Evaluate and fine-tune models, ensuring they meet performance requirements and deliver insights that drive product improvements
Production exposure to Large Language Models (LLMs) and experience implementing and optimizing LLM-based solutions

Must-have Skills:
5-10 years of experience in Data Science and AI/ML product development, with a proven track record of leading technical teams
Expertise in machine learning algorithms, Deep Learning models, Natural Language Processing, and Anomaly Detection
Strong understanding of model lifecycle management, including model building, evaluation, and optimization
Hands-on experience with Python and proficiency with frameworks like TensorFlow, Keras, and PyTorch
Solid understanding of SQL, NoSQL databases, and data modeling, with ElasticSearch experience
Ability to manage multiple projects simultaneously in a fast-paced, agile environment
Excellent problem-solving skills and communication abilities, particularly in documenting and presenting technical concepts
Familiarity with Big Data frameworks such as Spark, Storm, Databricks, and Kafka
Experience with container technologies like Docker and orchestration tools like Kubernetes, ECS, or EKS

Optional (Good To Have) Skills:
Experience with cloud-based machine learning platforms like AWS, Azure, or Google Cloud
Experience with tools like MLflow, KubeFlow, or similar for model tracking and orchestration (see the sketch after this listing)
Exposure to NoSQL databases such as MongoDB, Cassandra, Redis, and Cosmos DB, and familiarity with indexing mechanisms

Please be aware that job-seekers may be at risk of targeting by scammers seeking personal data or money. Nielsen recruiters will only contact you through official job boards, LinkedIn, or email with a nielsen.com domain. Be cautious of any outreach claiming to be from Nielsen via other messaging platforms or personal email addresses. Always verify that email communications come from an @nielsen.com address. If you're unsure about the authenticity of a job offer or communication, please contact Nielsen directly through our official website or verified social media channels.
Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability, protected veteran status or other characteristics protected by law.
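The good-to-have MLflow skill referenced above might look like this minimal experiment-tracking sketch. The model, parameters, and metric are illustrative only, not Nielsen's actual workflow; by default MLflow logs to a local ./mlruns directory:

```python
# Minimal MLflow tracking sketch: train a model and log params/metrics.
import mlflow
from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="rf_baseline"):
    n_estimators = 200  # illustrative hyperparameter
    model = RandomForestClassifier(n_estimators=n_estimators, random_state=0)
    model.fit(X_tr, y_tr)

    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", n_estimators)
    mlflow.log_metric("accuracy", acc)  # compared across runs in the MLflow UI
```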
Posted 1 week ago
5.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
Data Engineering
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS native technologies to support continuing increases in data source, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data environment.
Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.
Support standardization, customization, and ad hoc data analysis, and develop the mechanisms to ingest, analyze, validate, normalize, and clean data.
Write unit/integration/performance test scripts and perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes (a minimal sketch of such a check follows this listing).
Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.
Solve complex data problems to deliver insights that help achieve business objectives.
Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics.

Relationship Building and Collaboration
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
Collaborate with AI/ML engineers to create data products for analytics and data scientist team members to improve productivity.
Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.

Skills And Attributes For Success
Technical/Functional Expertise
Advanced experience with and understanding of data/Big Data, data integration, data modelling, AWS, and cloud technologies.
Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred, but not required.
Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
Ability to build and optimize queries (SQL), data sets, 'Big Data' pipelines, and architectures for structured and unstructured data.
Experience with or knowledge of Agile Software Development methodologies.

Leadership
Strategic mindset: thinking above the minor, tactical details and focusing on the long-term, strategic goals of the organization.
Advocate of a culture of collaboration and psychological safety.

Decision-making and Autonomy
Shift from manual decision-making to data-driven, strategic decision-making.
Proven track record of applying critical thinking to resolve issues and overcome obstacles.

Interaction
Proven track record of collaboration and developing strong working relationships with key stakeholders by building trust and being a true business partner.
Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures.

Innovation
Passion for re-imagining new solutions, processes, and end-user experiences by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
Advocate of a culture of growth mindset, agility, and continuous improvement.

Complexity
Demonstrates high multicultural sensitivity to lead teams effectively.
Ability to coordinate and problem-solve among larger teams.

To qualify for the role, you must have the following:

Essential Skillsets
Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Demonstrated understanding of and experience using:
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Cloud platform deployment and tools (e.g., Kubernetes)
Relational SQL databases
DevOps and continuous integration
AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
Databricks/ETL
IICS/DMS
GitHub
EventBridge, Tidal
Understanding of database architecture and administration
Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
Master's degree in Engineering, Computer Science, Data Science, or a related field
Experience in a global working environment

Travel Requirements
Access to transportation to attend meetings
Ability to fly to meetings regionally and globally

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
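The data-reconciliation bullet above points to this sketch: a minimal PySpark data-quality gate that fails a pipeline run when basic checks break. The table path, column names, and checks are assumptions for illustration, and Delta support is assumed on the cluster:

```python
# Minimal data-quality gate: run simple checks on a curated table and raise
# if any fail, so orchestration (e.g., Airflow) marks the run as failed.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq_checks").getOrCreate()

# Hypothetical curated table; real paths/credentials come from cluster config.
df = spark.read.format("delta").load("s3://example-bucket/curated/claims/")

checks = {
    "row_count_nonzero": df.count() > 0,
    "no_null_keys": df.filter(F.col("claim_id").isNull()).count() == 0,
    "amounts_nonnegative": df.filter(F.col("amount") < 0).count() == 0,
}

failed = [name for name, ok in checks.items() if not ok]
if failed:
    raise ValueError(f"Data quality checks failed: {failed}")
```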
Posted 1 week ago
5.0 years
8 - 10 Lacs
Chennai
On-site
Job Description
Responsible for assembling large, complex sets of data that meet non-functional and functional business requirements.
Responsible for identifying, designing, and implementing internal process improvements, including re-designing infrastructure for greater scalability, optimizing data delivery, and automating manual processes.
Building the infrastructure required for optimal extraction, transformation, and loading of data from various data sources using Azure, Databricks, and SQL technologies.
Responsible for transforming conceptual algorithms from R&D into efficient, production-ready code. The data developer must have a strong mathematical background in order to document and maintain the code.
Responsible for integrating finished models into larger data processes using UNIX scripting languages such as ksh, along with Python, Spark, and Scala.
Produce and maintain documentation for released data sets, new programs, shared utilities, or static data, within department standards.
Ensure quality deliverables to clients by following existing quality processes, manually calculating comparison data, developing statistical pass/fail testing, and visually inspecting data for reasonableness; the requirement is on-time with zero defects.

Qualifications
Education/Training
B.E./B.Tech. with a major in Computer Science, BIS, CIS, Electrical Engineering, Operations Research, or another technical field. Coursework or experience in Numerical Analysis, Mathematics, or Statistics is a plus.

Hard Skills
Proven experience working as a data engineer
Highly proficient in the Spark framework (Python and/or Scala)
Extensive knowledge of Data Warehousing concepts, strategies, and methodologies
Programming experience in Python, SQL, and Scala
Direct experience building data pipelines using Apache Spark (preferably in Databricks) and Airflow
Hands-on experience designing and delivering solutions using Azure, including Azure Storage, Azure SQL Data Warehouse, and Azure Data Lake
Experience with big data technologies (Hadoop)
Databricks & Azure Big Data Architecture Certification would be a plus
Must be team oriented with strong collaboration, prioritization, and adaptability skills
Ability to write highly efficient code in terms of performance and memory utilization
Basic knowledge of SQL; capable of handling common functions

Experience
Minimum 5-8 years of experience as a data engineer
Experience modeling or manipulating large amounts of data is a plus
Experience with demographic or retail business is a plus

Additional Information
Our Benefits
Flexible working environment
Volunteer time off
LinkedIn Learning
Employee-Assistance-Program (EAP)

About NIQ
NIQ is the world’s leading consumer intelligence company, delivering the most complete understanding of consumer buying behavior and revealing new pathways to growth. In 2023, NIQ combined with GfK, bringing together the two industry leaders with unparalleled global reach. With a holistic retail read and the most comprehensive consumer insights—delivered with advanced analytics through state-of-the-art platforms—NIQ delivers the Full View™. NIQ is an Advent International portfolio company with operations in 100+ markets, covering more than 90% of the world’s population. For more information, visit NIQ.com

Want to keep up with our latest updates?
Follow us on: LinkedIn | Instagram | Twitter | Facebook

Our commitment to Diversity, Equity, and Inclusion
NIQ is committed to reflecting the diversity of the clients, communities, and markets we measure within our own workforce. We exist to count everyone and are on a mission to systematically embed inclusion and diversity into all aspects of our workforce, measurement, and products. We enthusiastically invite candidates who share that mission to join us. We are proud to be an Equal Opportunity/Affirmative Action Employer, making decisions without regard to race, color, religion, gender, gender identity or expression, sexual orientation, national origin, genetics, disability status, age, marital status, protected veteran status or any other protected class. Our global non-discrimination policy covers these protected classes in every market in which we do business worldwide. Learn more about how we are driving diversity and inclusion in everything we do by visiting the NIQ News Center: https://nielseniq.com/global/en/news-center/diversity-inclusion
Posted 1 week ago
3.0 years
0 Lacs
Kochi, Kerala, India
On-site
We are looking for a passionate and skilled Azure Data Engineer to join our team and help design, build, and maintain scalable data solutions on the Azure cloud platform. If you're experienced in Azure Data Factory, Synapse, and Databricks and enjoy solving complex data problems, we'd love to connect!

Key Responsibilities
Develop and maintain data pipelines using Azure Data Factory, Databricks, and Azure Synapse Analytics (a minimal sketch follows this listing)
Design and implement robust data lake and data warehouse architectures on Azure
Write complex SQL and Python scripts for data transformation and analysis
Enable CI/CD for data pipelines and monitor pipeline performance
Collaborate with data analysts and business stakeholders to build data models and reports
Leverage tools like Azure Monitor and Log Analytics for proactive monitoring and debugging

Required Qualifications
3+ years of hands-on experience as a Data Engineer working in Azure cloud environments
Proficiency in Azure Data Factory, Synapse Analytics, Azure Data Lake (Gen2), Azure SQL, Databricks, and Microsoft Fabric
Strong programming skills in SQL, Python, and Spark
Experience implementing CI/CD pipelines for data projects
Solid understanding of data modeling, warehousing, and data architecture principles
Familiarity with Power BI, Azure Monitor, and Log Analytics
Excellent communication and problem-solving skills

Preferred Qualifications
Microsoft Certified: Azure Data Engineer Associate (DP-203) or a similar certification
Experience with real-time data processing tools like Azure Stream Analytics or Kafka
Exposure to big data platforms and large-scale analytics systems
Understanding of data governance and experience with tools such as Azure Purview, Informatica, or Data Catalog

Why Join Us?
Work with cutting-edge Azure technologies
Opportunity to be part of impactful data-driven projects
Collaborative and innovation-focused culture
Competitive salary and flexible work environment

📩 Apply now or reach out to us directly to learn more about this exciting opportunity!
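The pipeline bullet above references this sketch: a minimal watermark-based incremental load in PySpark against Azure Data Lake paths. The abfss URLs, column names, and hard-coded watermark are illustrative; in practice the watermark would be read from a control table and storage credentials would come from cluster configuration:

```python
# Incremental (watermark-based) load: pick up only rows newer than the last
# processed date, aggregate, and append to a curated zone.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("incremental_load").getOrCreate()

# Hypothetical ADLS Gen2 paths.
source = "abfss://raw@examplelake.dfs.core.windows.net/events/"
target = "abfss://curated@examplelake.dfs.core.windows.net/events_daily/"

last_watermark = "2025-06-01"  # illustrative; normally from a control table

daily = (spark.read.parquet(source)
         .filter(F.col("event_date") > F.lit(last_watermark))
         .groupBy("event_date", "event_type")
         .agg(F.count("*").alias("event_count")))

# Append partitioned output so downstream reports can prune by date.
daily.write.mode("append").partitionBy("event_date").parquet(target)
```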
Posted 1 week ago
5.0 years
0 Lacs
Coimbatore, Tamil Nadu, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs.

The Senior Data Engineer will:
Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
Support development, testing, and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools.
Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes.

Your Key Responsibilities
Data Engineering
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS native technologies to support continuing increases in data source, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the Big Data environment.
Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.
Support standardization, customization, and ad hoc data analysis, and develop the mechanisms to ingest, analyze, validate, normalize, and clean data.
Write unit/integration/performance test scripts and perform the data analysis required to troubleshoot data-related issues and assist in their resolution.
Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.
Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics.
Solve complex data problems to deliver insights that help achieve business objectives.
Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics.

Relationship Building and Collaboration
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
Collaborate with AI/ML engineers to create data products for analytics and data scientist team members to improve productivity.
Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions.

Skills And Attributes For Success
Technical/Functional Expertise
Advanced experience with and understanding of data/Big Data, data integration, data modelling, AWS, and cloud technologies.
Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred, but not required.
Ability to build processes that support data transformation, workload management, data structures, dependency, and metadata.
Ability to build and optimize queries (SQL), data sets, 'Big Data' pipelines, and architectures for structured and unstructured data.
Experience with or knowledge of Agile Software Development methodologies.

Leadership
Strategic mindset: thinking above the minor, tactical details and focusing on the long-term, strategic goals of the organization.
Advocate of a culture of collaboration and psychological safety.

Decision-making and Autonomy
Shift from manual decision-making to data-driven, strategic decision-making.
Proven track record of applying critical thinking to resolve issues and overcome obstacles.

Interaction
Proven track record of collaboration and developing strong working relationships with key stakeholders by building trust and being a true business partner.
Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security measures.

Innovation
Passion for re-imagining new solutions, processes, and end-user experiences by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
Advocate of a culture of growth mindset, agility, and continuous improvement.

Complexity
Demonstrates high multicultural sensitivity to lead teams effectively.
Ability to coordinate and problem-solve among larger teams.

To qualify for the role, you must have the following:

Essential Skillsets
Bachelor's degree in Engineering, Computer Science, Data Science, or a related field
5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Demonstrated understanding of and experience using:
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Cloud platform deployment and tools (e.g., Kubernetes)
Relational SQL databases
DevOps and continuous integration
AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
Databricks/ETL
IICS/DMS
GitHub
EventBridge, Tidal
Understanding of database architecture and administration
Utilizes the principles of continuous integration and delivery to automate the deployment of code changes to elevated environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architecture/pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities

Desired Skillsets
Master's degree in Engineering, Computer Science, Data Science, or a related field
Experience in a global working environment

Travel Requirements
Access to transportation to attend meetings
Ability to fly to meetings regionally and globally

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
0 years
0 Lacs
Calcutta
On-site
Project Role: Technology Architect
Project Role Description: Review and integrate all application requirements, including functional, security, integration, performance, quality, and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software, and security.
Must-have skills: Databricks Unified Data Analytics Platform
Good-to-have skills: Cloud Data Architecture
Minimum 5 year(s) of experience is required
Educational Qualification: BE or MCA

Summary:
As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements, providing input into final decisions regarding hardware, network products, system software, and security, and utilizing the Databricks Unified Data Analytics Platform to deliver impactful data-driven solutions.

Roles & Responsibilities:
6 or more years of experience implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform.
2 or more years of experience using Python, PySpark, or Scala.
Experience with Databricks on cloud, in any of AWS, Azure, or GCP, covering ETL, data engineering, data cleansing, and insertion into a data warehouse.
Must-have skills include Databricks, Cloud Data Architecture, the Python programming language, and Data Engineering.

Professional Attributes:
Excellent writing, communication, and presentation skills.
Eagerness to learn and develop oneself on an ongoing basis.
Excellent client-facing and interpersonal skills.

BE or MCA
Posted 1 week ago
2.0 - 6.0 years
0 Lacs
Gurgaon, Haryana, India
On-site
Rate indications

EXL (NASDAQ: EXLS) is a leading operations management and analytics company that helps businesses enhance growth and profitability in the face of relentless competition and continuous disruption. Using our proprietary, award-winning Business EXLerator Framework™, which integrates analytics, automation, benchmarking, BPO, consulting, industry best practices and technology platforms, we look deeper to help companies improve global operations, enhance data-driven insights, increase customer satisfaction, and manage risk and compliance. EXL serves the insurance, healthcare, banking and financial services, utilities, travel, transportation and logistics industries. Headquartered in New York, New York, EXL has more than 24,000 professionals in locations throughout the United States, Europe, Asia (primarily India and the Philippines), Latin America, Australia and South Africa.

EXL Analytics provides data-driven, action-oriented solutions to business problems through statistical data mining, cutting-edge analytics techniques and a consultative approach. Leveraging proprietary methodology and best-of-breed technology, EXL Analytics takes an industry-specific approach to transform our clients’ decision making and embed analytics more deeply into their business processes. Our global footprint of nearly 2,000 data scientists and analysts assists client organizations with complex risk minimization methods, advanced marketing, pricing and CRM strategies, internal cost analysis, and cost and resource optimization within the organization. EXL Analytics serves the insurance, healthcare, banking, capital markets, utilities, retail and e-commerce, travel, transportation and logistics industries. Please visit www.exlservice.com for more information about EXL Analytics.

Role & Responsibilities Overview
Collaborate with the Rate-filing Team to analyze and estimate reserves for our P&C insurance products by state, including performing triangle-based loss reserve reviews and analyses (a minimal chain-ladder sketch follows this listing).
Analyze data and perform actuarial calculations; generate state-specific filing indications for DOI submissions (Auto & Home)
Support monthly/quarterly rate updates and processes (rate files, RPC, forecasts)
Assist in the development and enhancement of rate-filing tools, models, and processes to improve accuracy and efficiency.
Provide support in the preparation of financial reports, including reserve-related disclosures.
Stay updated on best practices in actuarial methodologies and techniques.
Mentor and provide guidance to junior team members as needed.

Candidate Profile
Bachelor's/Master's degree in engineering, economics, mathematics, actuarial sciences, or statistics.
Affiliation with IAI or IFoA, with 2-6 CT actuarial exams, will be an added advantage
2-6 years of actuarial experience in the P&C insurance industry
Good knowledge of insurance terms
Advanced skills in Excel, Python, SQL, and other relevant tools for data analysis and modeling
Experience with Databricks is good to have
Excellent analytical and problem-solving skills, with the ability to analyze complex data and make data-driven decisions
Strong communication skills, including the ability to effectively communicate actuarial concepts to both technical and non-technical stakeholders
Ability to work independently and collaboratively in a team-oriented environment
Detail-oriented with strong organizational and time-management skills
Ability to adapt to changing priorities and deadlines in a fast-paced environment

What We Offer
EXL Analytics offers an exciting, fast-paced, and innovative environment that brings together a group of sharp and entrepreneurial professionals who are eager to influence business decisions. From your very first day, you get an opportunity to work closely with highly experienced, world-class analytics consultants. You can expect to learn many aspects of the businesses our clients engage in. You will also learn effective teamwork and time-management skills, key aspects of personal and professional growth.

Analytics requires different skill sets at different levels within the organization. At EXL Analytics, we invest heavily in training you in all aspects of analytics as well as in leading analytical tools and techniques. We provide guidance and coaching to every employee through our mentoring program, wherein every junior-level employee is assigned a senior-level professional as an advisor. The sky is the limit for our team members. The unique experiences gathered at EXL Analytics set the stage for further growth and development in our company and beyond.
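The triangle-based reserving work mentioned in the responsibilities might look like this basic chain-ladder sketch in Python. The cumulative claims triangle is made-up sample data, and a real reserve review involves far more actuarial judgment than volume-weighted factors:

```python
# Basic chain-ladder projection on a cumulative paid-claims triangle.
# Rows: accident years; columns: development years. Figures are illustrative.
import numpy as np
import pandas as pd

tri = pd.DataFrame(
    [[1000, 1800, 2100, 2200],
     [1100, 2000, 2350, np.nan],
     [1200, 2200, np.nan, np.nan],
     [1300, np.nan, np.nan, np.nan]],
    index=[2021, 2022, 2023, 2024], columns=[1, 2, 3, 4], dtype=float,
)

# Volume-weighted development factors between successive development years,
# using only accident years observed at both maturities.
factors = []
for k in range(tri.shape[1] - 1):
    both = tri.iloc[:, [k, k + 1]].dropna()
    factors.append(both.iloc[:, 1].sum() / both.iloc[:, 0].sum())

# Project each accident year's latest diagonal value to ultimate.
ultimates = {}
for year, row in tri.iterrows():
    last = row.last_valid_index()
    ult = row.loc[last]
    for k in range(tri.columns.get_loc(last), len(factors)):
        ult *= factors[k]
    ultimates[year] = ult

latest = tri.apply(lambda r: r.loc[r.last_valid_index()], axis=1)
print(pd.Series(ultimates) - latest)  # indicated reserve per accident year
```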
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Andhra Pradesh
On-site
Business Analytics Lead Analyst - HIH - Evernorth

About Evernorth:
Evernorth Health Services, a division of The Cigna Group (NYSE: CI), creates pharmacy, care, and benefits solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention, and treatment of illness and disease more accessible to millions of people.

Business Analytics Lead Analyst (Dashboarding)
The job profile for this position is Business Analytics Lead Analyst. The Customer Experience & Operations Enablement Analytics organization offers solutions that provide data, reporting, and actionable insights to internal and external business partners to improve customer experience, reduce cost, measure business performance, and inform business decisions. The Business Analytics Lead Analyst will be responsible for dashboard and report creation, as well as pulling data to meet ad hoc measurement needs. The individual will create prototypes of reporting needs and support manual report/scorecard creation where automated dashboards are not feasible. The analytics lead analyst will be comfortable working directly with the Operations teams to learn about their processes and where the data and reporting fit in. We are looking for candidates who can work directly with operations team members to understand requirements and do their own development and testing.

Responsibilities Include:
Using SQL to write queries to answer questions and perform ETL tasks to create datasets (a minimal sketch follows this listing).
Utilizing Tableau or other similar data visualization tools to automate scorecards and reports.
Using business intelligence tools to create self-service reporting for business partners.
Conducting self-driven data exploration and documentation of tables, schemas, and tests.
Using SQL to query data structures to help inform our business partners.
Examining and interpreting the data to discover weaknesses and identify root causes.
Completing ad hoc requests for business partners' data needs.
Identifying and implementing automation to consolidate similar or repeated ad hoc requests.
Understanding business needs to better inform reporting and analytics duties.
Giving guidance on any recurring problems or issues.
Completing proposals in cooperation and conjunction with subject matter experts (SMEs).
Refactoring reporting to enhance performance, provide deeper insight, and answer questions.
Updating project documents as well as status reports.

Qualifications:
Required experience:
5-8 years of relevant analytics experience, with a focus on:
Proficiency with Structured Query Language (SQL) and Oracle
Experience with business intelligence software (Tableau, Power BI, Looker, etc.)
3-5 years of experience with:
Scripting languages (Python, PowerShell, VBA)
Big data platforms (Databricks, Hadoop, AWS)
Excellent verbal, written, and interpersonal communication skills are a must.
Problem-solving, consulting skills, teamwork, leadership, and creativity are a must.
Analytical mind with an outstanding ability to collect and analyze data.
Expertise in contact center or workforce planning operations preferred.
Proficiency in Agile practices (Jira) preferred.

About Evernorth Health Services
Evernorth Health Services, a division of The Cigna Group, creates pharmacy, care and benefit solutions to improve health and increase vitality. We relentlessly innovate to make the prediction, prevention and treatment of illness and disease more accessible to millions of people. Join us in driving growth and improving lives.
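The SQL bullet above references this sketch: a self-contained example of querying a dataset into a scorecard-style summary, using SQLite so it runs anywhere. The table, columns, and metrics are illustrative only:

```python
# SQL-to-scorecard sketch: load sample data, aggregate with SQL, print result.
import sqlite3

import pandas as pd

conn = sqlite3.connect(":memory:")
pd.DataFrame({
    "agent":   ["A", "A", "B", "B"],
    "handled": [30, 28, 22, 35],
    "day":     ["Mon", "Tue", "Mon", "Tue"],
}).to_sql("calls", conn, index=False)

scorecard = pd.read_sql_query(
    """
    SELECT agent,
           SUM(handled) AS total_handled,
           AVG(handled) AS avg_per_day
    FROM calls
    GROUP BY agent
    ORDER BY total_handled DESC
    """,
    conn,
)
print(scorecard)  # in practice this would feed a Tableau/Power BI extract
```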
Posted 1 week ago
15.0 years
0 Lacs
India
Remote
Company Description
Simbus Technologies empowers businesses through results-driven IT consulting services with a strong focus on the Kinaxis Maestro and Databricks platforms. With 15 years of expertise in Supply Chain Planning, we deliver solutions that enhance agility, boost efficiency, and enable smarter decision-making. Our strategic blend of supply chain and data capabilities, from Kinaxis Maestro implementations to advanced data engineering and AI/ML integration with Databricks, ensures seamless digital transformation. We partner with clients to innovate, streamline, and grow their operations.

Role Description
This is a part-time remote role for a Databricks Technical Advisor. The role involves providing technical guidance and support for scaling up our Databricks COE. You will be responsible for designing the recruitment process and technical training, and for assisting Simbus on select client engagements in the area of solution architecture. The role entails an investment of approximately 10 hours per week and carries attractive compensation. The role can be converted to a full-time role by mutual agreement.

Qualifications
Strong Databricks skills combined with excellent analytical skills for data analysis and problem-solving
Excellent communication and coaching skills to effectively interact with COE teammates
Experience with cloud data engineering and migration is beneficial
Understanding of real-time analytics and business intelligence
Knowledge of machine learning and AI integration
Bachelor's degree in Computer Science, Information Technology, or a related field
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role And Responsibilities
The ability to be a team player
The ability and skill to train other people in procedural and technical topics
Strong communication and collaboration skills

Preferred Education
Master's Degree

Required Technical And Professional Expertise
Able to write complex SQL queries
Experience in Azure Databricks

Preferred Technical And Professional Experience
Excellent communication and stakeholder management skills
Posted 1 week ago
5.0 years
0 Lacs
Kochi, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. Objectives and Purpose The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with Visualization on data quality and troubleshooting needs. The Senior Data Engineer will: Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses. Support development testing and maintenance of data pipelines and platforms, to enable data quality to be utilized within business dashboards and tools. Create, maintain, and support the data platform and infrastructure that enables the analytics front-end; this includes the testing, maintenance, construction, and development of architectures such as high-volume, large-scale data processing and databases with proper verification and validation processes. Your Key Responsibilities Data Engineering Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations, using AWS native technologies, to support continuing increases in data source, volume, and complexity. Define data requirements, gather, and mine data, while validating the efficiency of data tools in the Big Data Environment. Lead the evaluation, implementation and deployment of emerging tools and processes to improve productivity. Implement processes, systems to provide accurate and available data to key stakeholders, downstream systems, & business processes. Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth. Foster a culture of sharing, re-use, design for scale stability, and operational efficiency of data and analytical solutions. Support standardization, customization and ad hoc data analysis and develop the mechanisms to ingest, analyze, validate, normalize, and clean data. Write unit/integration/performance test scripts and perform data analysis required to troubleshoot data related issues and assist in the resolution of data issues. Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes. Lead the evaluation, implementation and deployment of emerging tools and processes for analytic data engineering to improve productivity. Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes. Learn about machine learning, data science, computer vision, artificial intelligence, statistics, and/or applied mathematics. Solve complex data problems to deliver insights that help achieve business objectives. Implement statistical data quality procedures on new data sources by applying rigorous iterative data analytics. Relationship Building and Collaboration Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives. Coordinate with Data Scientists to understand data requirements, and design solutions that enable advanced analytics, machine learning, and predictive modelling. 
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
Collaborate with AI/ML engineers to create data products for analytics and data science team members to improve productivity.
Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
Skills And Attributes For Success
Technical/Functional Expertise
Advanced experience with, and understanding of, data/big data, data integration, data modelling, AWS, and cloud technologies.
Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred but not required.
Ability to build processes that support data transformation, workload management, data structures, dependencies, and metadata.
Ability to build and optimize SQL queries, data sets, big data pipelines, and architectures for structured and unstructured data.
Experience with or knowledge of Agile software development methodologies.
Leadership
A strategic mindset that looks beyond minor tactical details and focuses on the organization’s long-term, strategic goals.
Advocate of a culture of collaboration and psychological safety.
Decision-making and Autonomy
Champions the shift from manual decision-making to data-driven, strategic decision-making.
Proven track record of applying critical thinking to resolve issues and overcome obstacles.
Interaction
Proven track record of collaboration and of developing strong working relationships with key stakeholders by building trust and being a true business partner.
Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security requirements.
Innovation
Passion for re-imagining solutions, processes, and the end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
Advocate of a culture of growth mindset, agility, and continuous improvement.
Complexity
Demonstrates high multicultural sensitivity to lead teams effectively.
Ability to coordinate and problem-solve among larger teams.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Science, or a related field
5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Demonstrated understanding of, and experience using:
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Cloud platform deployment and tools (e.g., Kubernetes)
Relational SQL databases
DevOps and continuous integration
AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
Databricks/ETL
IICS/DMS
GitHub
EventBridge, Tidal
Understanding of database architecture and administration
Applies the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages and services (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures and pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
Desired Skillsets
Master’s degree in Engineering, Computer Science, Data Science, or a related field
Experience in a global working environment
Travel Requirements
Access to transportation to attend meetings
Ability to fly to meetings regionally and globally
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
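For readers gauging what the pipeline work in this listing can look like in practice, here is a minimal PySpark sketch of one ETL step: ingest a raw extract, clean and validate it, aggregate, and land the result for downstream dashboards. The bucket paths, column names, and quality rules are all hypothetical, not EY’s actual pipeline.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("sales_ingest").getOrCreate()

# Ingest a raw extract from a hypothetical S3 landing zone.
raw = spark.read.option("header", True).csv("s3://example-landing/sales/*.csv")

# Clean and normalize: trim strings, cast types, drop rows failing basic checks.
clean = (
    raw.withColumn("region", F.trim(F.col("region")))
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("sale_date", F.to_date("sale_date", "yyyy-MM-dd"))
       .filter(F.col("amount").isNotNull() & (F.col("amount") >= 0))
       .dropDuplicates(["order_id"])
)

# Aggregate into a warehouse-friendly daily summary.
daily = clean.groupBy("sale_date", "region").agg(
    F.sum("amount").alias("total_amount"),
    F.countDistinct("order_id").alias("order_count"),
)

# Land the result as partitioned Parquet for downstream dashboards.
daily.write.mode("overwrite").partitionBy("sale_date").parquet(
    "s3://example-warehouse/curated/daily_sales/"
)
```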
Posted 1 week ago
0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
Create solution outlines and macro designs describing end-to-end product implementations in data platforms, covering system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles.
Contribute to pre-sales and sales support through RFP responses, solution architecture, planning, and estimation.
Contribute to the development of reusable components, assets, and accelerators to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on big data, Azure Cloud, and related technologies.
Participate in customer PoCs to deliver the agreed outcomes.
Participate in delivery and product reviews and quality assurance, and act as a design authority.
Preferred Education
Non-Degree Program
Required Technical And Professional Expertise
Experience in designing data products that provide descriptive, prescriptive, and predictive analytics to end users or other systems.
Experience in data engineering and architecting data platforms.
Experience in architecting and implementing data platforms on the Azure Cloud Platform; Azure experience is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), along with Azure Purview, Microsoft Fabric, Kubernetes, Terraform, and Airflow.
Experience in the big data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.
Preferred Technical And Professional Experience
Experience in architecting complex data platforms on the Azure Cloud Platform and on-premises.
Experience with and exposure to implementations of Data Fabric and Data Mesh concepts, using solutions such as Microsoft Fabric, Starburst, Denodo, IBM Data Virtualisation, Talend, or Tibco Data Fabric.
Exposure to data cataloging and governance solutions such as Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, and Snowflake data glossary.
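As a rough illustration of the Azure stack this listing names, the sketch below shows one common Databricks pattern: reading raw JSON from ADLS Gen2 and writing a partitioned Delta table for the serving layer. The storage account, container, and table names are invented, and the snippet assumes a Spark environment with Delta Lake support, as a Databricks runtime provides.

```python
from pyspark.sql import SparkSession, functions as F

# In a Databricks notebook `spark` is preconfigured; building it here keeps
# the sketch self-contained elsewhere (Delta support must be on the classpath).
spark = SparkSession.builder.appName("adls_to_delta").getOrCreate()

# Read raw JSON events from a hypothetical ADLS Gen2 container.
events = spark.read.json(
    "abfss://raw@examplestorage.dfs.core.windows.net/events/2025/"
)

# Light shaping in the processing layer: typed timestamp plus a partition key.
curated = (
    events.withColumn("event_ts", F.to_timestamp("event_ts"))
          .withColumn("ingest_date", F.current_date())
)

# Write a Delta table that serving tools (Synapse, Power BI) can query.
(
    curated.write.format("delta")
    .mode("append")
    .partitionBy("ingest_date")
    .saveAsTable("analytics.curated_events")
)
```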
Posted 1 week ago
5.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with the Visualization team on data quality and troubleshooting needs.
The Senior Data Engineer will:
Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
Support the development, testing, and maintenance of data pipelines and platforms, enabling quality data to be used within business dashboards and tools.
Create, maintain, and support the data platform and infrastructure that enables the analytics front end; this includes the testing, maintenance, construction, and development of architectures for high-volume, large-scale data processing and databases, with proper verification and validation processes.
Your Key Responsibilities
Data Engineering
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS-native technologies to support continuing increases in data sources, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
Support standardization, customization, and ad hoc data analysis, and develop mechanisms to ingest, analyze, validate, normalize, and clean data.
Write unit, integration, and performance test scripts, and perform the data analysis required to troubleshoot and help resolve data-related issues.
Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.
Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
Stay current with machine learning, data science, computer vision, artificial intelligence, statistics, and applied mathematics.
Solve complex data problems to deliver insights that help achieve business objectives.
Implement statistical data quality procedures on new data sources by applying rigorous, iterative data analytics.
Relationship Building and Collaboration
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
Collaborate with AI/ML engineers to create data products for analytics and data science team members to improve productivity.
Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
Skills And Attributes For Success
Technical/Functional Expertise
Advanced experience with, and understanding of, data/big data, data integration, data modelling, AWS, and cloud technologies.
Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred but not required.
Ability to build processes that support data transformation, workload management, data structures, dependencies, and metadata.
Ability to build and optimize SQL queries, data sets, big data pipelines, and architectures for structured and unstructured data.
Experience with or knowledge of Agile software development methodologies.
Leadership
A strategic mindset that looks beyond minor tactical details and focuses on the organization’s long-term, strategic goals.
Advocate of a culture of collaboration and psychological safety.
Decision-making and Autonomy
Champions the shift from manual decision-making to data-driven, strategic decision-making.
Proven track record of applying critical thinking to resolve issues and overcome obstacles.
Interaction
Proven track record of collaboration and of developing strong working relationships with key stakeholders by building trust and being a true business partner.
Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security requirements.
Innovation
Passion for re-imagining solutions, processes, and the end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
Advocate of a culture of growth mindset, agility, and continuous improvement.
Complexity
Demonstrates high multicultural sensitivity to lead teams effectively.
Ability to coordinate and problem-solve among larger teams.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Science, or a related field
5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Demonstrated understanding of, and experience using:
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Cloud platform deployment and tools (e.g., Kubernetes)
Relational SQL databases
DevOps and continuous integration
AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
Databricks/ETL
IICS/DMS
GitHub
EventBridge, Tidal
Understanding of database architecture and administration
Applies the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages and services (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures and pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
Desired Skillsets
Master’s degree in Engineering, Computer Science, Data Science, or a related field
Experience in a global working environment
Travel Requirements
Access to transportation to attend meetings
Ability to fly to meetings regionally and globally
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
8.0 - 13.0 years
10 - 15 Lacs
Pune
Work from Office
We are looking for a highly skilled and experienced Data Engineering Manager to lead our data engineering team. The ideal candidate will possess a strong technical background, strong project management abilities, and excellent client handling and stakeholder management skills. This role requires a strategic thinker who can drive the design, development, and implementation of data solutions that meet our clients’ needs while ensuring the highest standards of quality and efficiency.
Job Responsibilities
Technology Leadership
Lead and guide the team, independently or with little support, to design, implement, and deliver complex cloud-based data engineering and data warehousing project assignments.
Solution Architecture & Review
Expertise in conceptualizing solution architecture and low-level design across a range of data engineering (Matillion, Informatica, Talend, Python, dbt, Airflow, Apache Spark, Databricks, Redshift) and cloud hosting (AWS, Azure) technologies.
Manage projects in a fast-paced agile ecosystem, ensuring quality deliverables within stringent timelines.
Responsible for risk management, maintaining risk documentation and mitigation plans.
Drive continuous improvement in a Lean/Agile environment, implementing DevOps delivery approaches encompassing CI/CD, build automation, and deployments.
Communication & Logical Thinking
Demonstrates strong analytical skills, employing a systematic and logical approach to data analysis, problem-solving, and situational assessment.
Capable of effectively presenting and defending team viewpoints while securing buy-in from both technical and client stakeholders.
Handle Client Relationship
Manage client relationships and expectations independently, and deliver results back to the client independently.
Should have excellent communication skills.
Education
BE/B.Tech, Master of Computer Application
Work Experience
Should have expertise and 8+ years of working experience in at least two ETL tools among Matillion, dbt, PySpark, Informatica, and Talend.
Should have expertise and working experience in at least two databases among Databricks, Redshift, Snowflake, SQL Server, and Oracle.
Should have strong data warehousing, data integration, and data modeling fundamentals, such as star schemas, snowflake schemas, dimension tables, and fact tables (illustrated in the sketch following this listing).
Strong experience with SQL building blocks, including creating complex SQL queries and procedures.
Experience in AWS or Azure cloud and their service offerings.
Aware of techniques such as data modelling, performance tuning, and regression testing.
Willingness to learn and take ownership of tasks.
Excellent written/verbal communication and problem-solving skills.
Understanding of and working experience with pharma commercial data sets such as IQVIA, Veeva, Symphony, Liquid Hub, and Cegedim would be an advantage.
Hands-on experience with scrum methodology (sprint planning, execution, and retrospection).
Behavioural Competencies
Teamwork & Leadership
Motivation to Learn and Grow
Ownership
Cultural Fit
Talent Management
Technical Competencies
Problem Solving
Lifescience Knowledge
Communication
Agile
PySpark
Data Modelling
Designing technical architecture
AWS Data Pipeline
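To make the star-schema fundamentals above concrete, here is a small, self-contained PySpark example of a fact-to-dimension join and roll-up; the tables and values are invented for illustration and do not come from the listing.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("star_schema_demo").getOrCreate()

# A tiny fact table: one row per sale, keyed to a product dimension.
fact_sales = spark.createDataFrame(
    [(1, 101, 2, 500.0), (2, 102, 1, 120.0), (3, 101, 5, 1250.0)],
    ["sale_id", "product_key", "quantity", "revenue"],
)

# A product dimension holding descriptive attributes.
dim_product = spark.createDataFrame(
    [(101, "Laptop", "Electronics"), (102, "Desk Chair", "Furniture")],
    ["product_key", "product_name", "category"],
)

# A typical star-schema query: join the fact to the dimension, then roll up.
report = (
    fact_sales.join(dim_product, "product_key")
    .groupBy("category")
    .agg({"revenue": "sum", "quantity": "sum"})
    .withColumnRenamed("sum(revenue)", "total_revenue")
    .withColumnRenamed("sum(quantity)", "total_quantity")
)
report.show()
```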
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.
Objectives and Purpose
The Senior Data Engineer ingests, builds, and supports large-scale data architectures that serve multiple downstream systems and business users. This individual supports the Data Engineer Leads and partners with the Visualization team on data quality and troubleshooting needs.
The Senior Data Engineer will:
Clean, aggregate, and organize data from disparate sources and transfer it to data warehouses.
Support the development, testing, and maintenance of data pipelines and platforms, enabling quality data to be used within business dashboards and tools.
Create, maintain, and support the data platform and infrastructure that enables the analytics front end; this includes the testing, maintenance, construction, and development of architectures for high-volume, large-scale data processing and databases, with proper verification and validation processes.
Your Key Responsibilities
Data Engineering
Develop and maintain scalable data pipelines, in line with ETL principles, and build out new integrations using AWS-native technologies to support continuing increases in data sources, volume, and complexity.
Define data requirements, gather and mine data, and validate the efficiency of data tools in the big data environment.
Lead the evaluation, implementation, and deployment of emerging tools and processes for analytic data engineering to improve productivity.
Implement processes and systems to provide accurate and available data to key stakeholders, downstream systems, and business processes.
Mentor and coach staff data engineers on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
Support standardization, customization, and ad hoc data analysis, and develop mechanisms to ingest, analyze, validate, normalize, and clean data.
Write unit, integration, and performance test scripts, and perform the data analysis required to troubleshoot and help resolve data-related issues.
Implement processes and systems to drive data reconciliation and monitor data quality, ensuring production data is always accurate and available for key stakeholders, downstream systems, and business processes.
Develop and deliver communication and education plans on analytic data engineering capabilities, standards, and processes.
Stay current with machine learning, data science, computer vision, artificial intelligence, statistics, and applied mathematics.
Solve complex data problems to deliver insights that help achieve business objectives.
Implement statistical data quality procedures on new data sources by applying rigorous, iterative data analytics.
Relationship Building and Collaboration
Partner with Business Analytics and Solution Architects to develop technical architectures for strategic enterprise projects and initiatives.
Coordinate with Data Scientists to understand data requirements and design solutions that enable advanced analytics, machine learning, and predictive modelling.
Support Data Scientists in data sourcing and preparation to visualize data and synthesize insights of commercial value.
Collaborate with AI/ML engineers to create data products for analytics and data science team members to improve productivity.
Advise, consult, mentor, and coach other data and analytics professionals on data standards and practices, promoting the values of learning and growth.
Foster a culture of sharing, re-use, design for scale and stability, and operational efficiency of data and analytical solutions.
Skills And Attributes For Success
Technical/Functional Expertise
Advanced experience with, and understanding of, data/big data, data integration, data modelling, AWS, and cloud technologies.
Strong business acumen; knowledge of the Pharmaceutical, Healthcare, or Life Sciences sector is preferred but not required.
Ability to build processes that support data transformation, workload management, data structures, dependencies, and metadata.
Ability to build and optimize SQL queries, data sets, big data pipelines, and architectures for structured and unstructured data.
Experience with or knowledge of Agile software development methodologies.
Leadership
A strategic mindset that looks beyond minor tactical details and focuses on the organization’s long-term, strategic goals.
Advocate of a culture of collaboration and psychological safety.
Decision-making and Autonomy
Champions the shift from manual decision-making to data-driven, strategic decision-making.
Proven track record of applying critical thinking to resolve issues and overcome obstacles.
Interaction
Proven track record of collaboration and of developing strong working relationships with key stakeholders by building trust and being a true business partner.
Demonstrated success in collaborating with different IT functions, contractors, and constituents to deliver data solutions that meet standards and security requirements.
Innovation
Passion for re-imagining solutions, processes, and the end-user experience by leveraging digital and disruptive technologies and developing advanced data and analytics solutions.
Advocate of a culture of growth mindset, agility, and continuous improvement.
Complexity
Demonstrates high multicultural sensitivity to lead teams effectively.
Ability to coordinate and problem-solve among larger teams.
To qualify for the role, you must have the following:
Essential Skillsets
Bachelor’s degree in Engineering, Computer Science, Data Science, or a related field
5+ years of experience in software development, data science, data engineering, ETL, and analytics reporting development
Experience designing, building, implementing, and maintaining data and system integrations using dimensional data modelling and the development and optimization of ETL pipelines
Proven track record of designing and implementing complex data solutions
Demonstrated understanding of, and experience using:
Data engineering programming languages (e.g., Python)
Distributed data technologies (e.g., PySpark)
Cloud platform deployment and tools (e.g., Kubernetes)
Relational SQL databases
DevOps and continuous integration
AWS cloud services and technologies (e.g., Lambda, S3, DMS, Step Functions, EventBridge, CloudWatch, RDS)
Databricks/ETL
IICS/DMS
GitHub
EventBridge, Tidal
Understanding of database architecture and administration
Applies the principles of continuous integration and delivery to automate the deployment of code changes to higher environments, fostering enhanced code quality, test coverage, and automation of resilient test cases
Possesses high proficiency in programming languages and services (e.g., SQL, Python, PySpark, AWS services) to design, maintain, and optimize data architectures and pipelines that fit business goals
Strong organizational skills with the ability to manage multiple projects simultaneously and operate as a leading member across globally distributed teams to deliver high-quality services and solutions
Excellent written and verbal communication skills, including storytelling and interacting effectively with multifunctional teams and other strategic partners
Strong problem-solving and troubleshooting skills
Ability to work in a fast-paced environment and adapt to changing business priorities
Desired Skillsets
Master’s degree in Engineering, Computer Science, Data Science, or a related field
Experience in a global working environment
Travel Requirements
Access to transportation to attend meetings
Ability to fly to meetings regionally and globally
EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
5.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
Job Family: Data Science & Analysis (India)
Travel Required: None
Clearance Required: None
What You Will Do
Design, develop, and maintain robust, scalable, and efficient data pipelines and ETL/ELT processes.
Lead and execute data engineering projects from inception to completion, ensuring timely delivery and high quality.
Build and optimize data architectures for operational and analytical purposes.
Collaborate with cross-functional teams to gather and define data requirements.
Implement data quality, data governance, and data security practices.
Manage and optimize cloud-based data platforms (Azure/AWS).
Develop and maintain Python/PySpark libraries for data ingestion, processing, and integration with both internal and external data sources.
Design and optimize scalable data pipelines using Azure Data Factory and Spark (Databricks).
Work with stakeholders, including the Executive, Product, Data, and Design teams, to assist with data-related technical issues and support their data infrastructure needs.
Develop frameworks for data ingestion, transformation, and validation (a minimal validation sketch follows this listing).
Mentor junior data engineers and guide best practices in data engineering.
Evaluate and integrate new technologies and tools to improve data infrastructure.
Ensure compliance with data privacy regulations (HIPAA, etc.).
Monitor performance and troubleshoot issues across the data ecosystem.
Automate deployment of data pipelines using GitHub Actions / Azure DevOps.
What You Will Need
Bachelor’s or Master’s degree in Computer Science, Information Systems, Statistics, Math, Engineering, or a related discipline.
Minimum 5+ years of solid hands-on experience in data engineering and cloud services.
Extensive working experience with advanced SQL and a deep understanding of SQL.
Good experience with Azure Data Factory (ADF), Databricks, Python, and PySpark.
Good experience with modern data storage concepts such as the data lake and lakehouse.
Experience with other cloud services (AWS) and data processing technologies is an added advantage.
Ability to enhance and develop ETL processes, and to resolve defects in them, using cloud services.
Experience handling large volumes (multiple terabytes) of incoming data from clients and third-party sources in various formats such as text, CSV, EDI X12 files, and Access databases.
Experience with software development methodologies (Agile, Waterfall) and version control tools.
Highly motivated, strong problem solver, self-starter, and fast learner with demonstrated analytic and quantitative skills.
Good communication skills.
What Would Be Nice To Have
AWS ETL platform: Glue, S3.
One or more programming languages such as Java or .NET.
Experience in the US healthcare domain and insurance claim processing.
What We Offer
Guidehouse offers a comprehensive, total rewards package that includes competitive compensation and a flexible benefits package that reflects our commitment to creating a diverse and supportive workplace.
About Guidehouse
Guidehouse is an Equal Opportunity Employer–Protected Veterans, Individuals with Disabilities or any other basis protected by law, ordinance, or regulation. Guidehouse will consider for employment qualified applicants with criminal histories in a manner consistent with the requirements of applicable law or ordinance including the Fair Chance Ordinance of Los Angeles and San Francisco.
If you have visited our website for information about employment opportunities, or to apply for a position, and you require an accommodation, please contact Guidehouse Recruiting at 1-571-633-1711 or via email at RecruitingAccommodation@guidehouse.com. All information you provide will be kept confidential and will be used only to the extent required to provide needed reasonable accommodation. All communication regarding recruitment for a Guidehouse position will be sent from Guidehouse email domains including @guidehouse.com or guidehouse@myworkday.com. Correspondence received by an applicant from any other domain should be considered unauthorized and will not be honored by Guidehouse. Note that Guidehouse will never charge a fee or require a money transfer at any stage of the recruitment process and does not collect fees from educational institutions for participation in a recruitment event. Never provide your banking information to a third party purporting to need that information to proceed in the hiring process. If any person or organization demands money related to a job opportunity with Guidehouse, please report the matter to Guidehouse’s Ethics Hotline. If you want to check the validity of correspondence you have received, please contact recruiting@guidehouse.com. Guidehouse is not responsible for losses incurred (monetary or otherwise) from an applicant’s dealings with unauthorized third parties. Guidehouse does not accept unsolicited resumes through or from search firms or staffing agencies. All unsolicited resumes will be considered the property of Guidehouse and Guidehouse will not be obligated to pay a placement fee.
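As a sketch of the validation frameworks the listing above mentions, the function below runs three basic quality checks (schema, completeness, uniqueness) over a toy claims extract; the column names, rules, and data are hypothetical, not Guidehouse's framework.

```python
from pyspark.sql import DataFrame, SparkSession, functions as F

spark = SparkSession.builder.appName("validation_demo").getOrCreate()

def validate(df: DataFrame, required_cols: list, key_col: str) -> dict:
    """Run basic quality checks and return a small report dictionary."""
    report = {}
    # Schema check: every required column must be present.
    report["missing_columns"] = [c for c in required_cols if c not in df.columns]
    # Completeness check: count nulls in the key column.
    report["null_keys"] = df.filter(F.col(key_col).isNull()).count()
    # Uniqueness check: duplicate keys indicate a bad extract.
    report["duplicate_keys"] = df.count() - df.dropDuplicates([key_col]).count()
    return report

# Example run against a toy claims extract (values invented for illustration).
claims = spark.createDataFrame(
    [("C1", "2025-01-03", 250.0), ("C2", None, 90.0), ("C2", "2025-01-05", 90.0)],
    ["claim_id", "service_date", "billed_amount"],
)
print(validate(claims, ["claim_id", "service_date", "billed_amount"], "claim_id"))
```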
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Summary
Position Summary
Job Title: Senior Data Scientist/Team Lead
Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.
Key Responsibilities:
Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
Utilize mathematics/statistics, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
Assist and participate in pre-sales, client pursuits, and proposals.
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members.
Qualifications:
6-10 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling.
Bachelor’s or Master’s degree in a quantitative field.
Has led a 3-5 member team on multiple end-to-end DS/ML projects.
Excellent communication and client/stakeholder management skills.
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
Model deployment on cloud or on-premises is an added advantage.
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
Should follow research papers, and comprehend, innovate on, and present the best approaches and solutions related to DS/ML.
AI/Cloud certification from a premier institute is preferred.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship.
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300022
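For candidates wanting a concrete picture of the classification work such roles involve, here is a minimal scikit-learn sketch: train a baseline classifier on synthetic data and report held-out metrics. It illustrates the tooling the listing names, not any specific Deloitte project.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a client dataset (purely illustrative).
X, y = make_classification(n_samples=1000, n_features=20, random_state=42)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)

# Fit a baseline classifier of the kind listed in the qualifications.
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Evaluate on the held-out split.
print(classification_report(y_test, model.predict(X_test)))
```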
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Summary
Position Summary
Job Title: Senior Data Scientist/Team Lead
Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.
Key Responsibilities:
Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
Utilize mathematics/statistics, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
Assist and participate in pre-sales, client pursuits, and proposals.
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members.
Qualifications:
6-10 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling.
Bachelor’s or Master’s degree in a quantitative field.
Has led a 3-5 member team on multiple end-to-end DS/ML projects.
Excellent communication and client/stakeholder management skills.
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
Model deployment on cloud or on-premises is an added advantage.
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
Should follow research papers, and comprehend, innovate on, and present the best approaches and solutions related to DS/ML.
AI/Cloud certification from a premier institute is preferred.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship.
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300022
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
We are seeking a Data Solution Architect (Azure; Databricks). In this role, you will leverage your skills in artificial intelligence and machine learning to design robust data analytics solutions. If you are ready to make an impact, apply today!
Responsibilities
Design data analytics solutions utilizing the big data technology stack
Create and present solution architecture documents with technical details
Collaborate with business stakeholders to identify solution requirements and key scenarios
Conduct solution architecture reviews and audits while calculating and presenting ROI
Lead implementation of solutions, from establishing project requirements to go-live
Engage in pre-sales activities, including customer communications and RFP processing
Develop proposals and design solutions while presenting architecture to customers
Create and follow a personal education plan for the technology stack and solution architecture
Maintain knowledge of industry trends and best practices
Engage new clients to drive business growth in the big data space
Requirements
Strong hands-on experience as a big data developer with a solid design background
Experience delivering data analytics projects and architecture guidelines
Experience in big data solutions on premises and in the cloud
Production project experience in at least one big data technology
Knowledge of batch processing frameworks such as Hadoop, MapReduce, Spark, or Hive
Familiarity with NoSQL databases such as Cassandra, HBase, or Kudu
Understanding of Agile development methodology, with an emphasis on Scrum
Experience in direct customer communications and pre-sales consulting
Experience working within a consulting environment would be highly valuable
Posted 1 week ago
2.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Summary
Position Summary
AI & Data
In this age of disruption, organizations need to navigate the future with confidence, embracing decision-making with clear, data-driven choices that deliver enterprise value in a dynamic business environment. The AI & Data team leverages the power of data, analytics, robotics, science, and cognitive technologies to uncover hidden relationships from vast troves of data, generate insights, and inform decision-making. Together with the AI & Engineering (AI&E) practice, our AI & Data offering helps clients transform their business by architecting organizational intelligence programs and differentiated strategies to win in their chosen markets.
AI & Data will work with our clients to:
Implement large-scale data ecosystems, including data management, governance, and the integration of structured and unstructured data, to generate insights leveraging cloud-based platforms
Leverage automation, cognitive, and science-based techniques to manage data, predict scenarios, and prescribe actions
Job Title: Data Scientist/Machine Learning Engineer
Job Summary: We are seeking a Data Scientist with experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and collaborate with a team to implement data-driven solutions.
Key Responsibilities:
Deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
Work with a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
Utilize mathematics/statistics, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members.
Qualifications:
2-7 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling.
Bachelor’s or Master’s degree in a quantitative field.
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
Model deployment on cloud or on-premises is an added advantage.
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
Should follow research papers, and comprehend, innovate on, and present the best approaches and solutions related to DS/ML.
AI/Cloud certification from a premier institute is preferred.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges.
This makes Deloitte one of the most rewarding places to work. Our purpose Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities. Professional development From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship. From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300100
Posted 1 week ago
6.0 - 10.0 years
0 Lacs
Greater Kolkata Area
On-site
Summary
Position Summary
Job Title: Senior Data Scientist/Team Lead
Job Summary: We are seeking a Senior Data Scientist with hands-on experience in leveraging data, machine learning, statistics, and AI technologies to generate insights and inform decision-making. You will work on large-scale data ecosystems and lead a team to implement data-driven solutions.
Key Responsibilities:
Lead and deliver large-scale, end-to-end DS/ML projects across multiple industries and domains.
Liaise with on-site and client teams to understand business problem statements, use cases, and project requirements.
Lead a team of Data Engineers, ML/AI Engineers, DevOps, and other Data & AI professionals to deliver projects from inception to implementation.
Utilize mathematics/statistics, AI, and cognitive techniques to analyze and process data, predict scenarios, and prescribe actions.
Assist and participate in pre-sales, client pursuits, and proposals.
Drive a human-led culture of Inclusion & Diversity by caring deeply for all team members.
Qualifications:
6-10 years of relevant hands-on experience in Data Science, Machine Learning, and Statistical Modeling.
Bachelor’s or Master’s degree in a quantitative field.
Has led a 3-5 member team on multiple end-to-end DS/ML projects.
Excellent communication and client/stakeholder management skills.
Strong hands-on experience with programming languages such as Python, PySpark, and SQL, and frameworks such as NumPy, Pandas, and scikit-learn.
Expertise in classification, regression, time series, decision trees, optimization, etc.
Hands-on knowledge of Docker containerization, Git, and Tableau or Power BI.
Model deployment on cloud or on-premises is an added advantage.
Familiarity with Databricks, Snowflake, or hyperscalers (AWS/Azure/GCP/NVIDIA).
Should follow research papers, and comprehend, innovate on, and present the best approaches and solutions related to DS/ML.
AI/Cloud certification from a premier institute is preferred.
Recruiting tips
From developing a standout resume to putting your best foot forward in the interview, we want you to feel prepared and confident as you explore opportunities at Deloitte. Check out recruiting tips from Deloitte recruiters.
Benefits
At Deloitte, we know that great people make a great organization. We value our people and offer employees a broad range of benefits. Learn more about what working at Deloitte can mean for you.
Our people and culture
Our inclusive culture empowers our people to be who they are, contribute their unique perspectives, and make a difference individually and collectively. It enables us to leverage different ideas and perspectives, and bring more creativity and innovation to help solve our clients' most complex challenges. This makes Deloitte one of the most rewarding places to work.
Our purpose
Deloitte’s purpose is to make an impact that matters for our people, clients, and communities. At Deloitte, purpose is synonymous with how we work every day. It defines who we are. Our purpose comes through in our work with clients that enables impact and value in their organizations, as well as through our own investments, commitments, and actions across areas that help drive positive outcomes for our communities.
Professional development
From entry-level employees to senior leaders, we believe there’s always room to learn. We offer opportunities to build new skills, take on leadership opportunities and connect and grow through mentorship.
From on-the-job learning experiences to formal development programs, our professionals have a variety of opportunities to continue to grow throughout their career. Requisition code: 300022
Posted 1 week ago
Databricks is a popular technology in the field of big data and analytics, and the job market for Databricks professionals in India is growing rapidly. Companies across various industries are actively looking for skilled individuals with expertise in Databricks to help them harness the power of data. If you are considering a career in Databricks, here is a detailed guide to help you navigate the job market in India.
The average salary range for Databricks professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
In the field of Databricks, a typical career path may include:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
In addition to Databricks expertise, other skills that are often expected or helpful include:
- Apache Spark
- Python/Scala programming
- Data modeling
- SQL
- Data visualization tools
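To make the core of these skills concrete, a common warm-up exercise is to answer the same aggregation question through both the Spark DataFrame API and Spark SQL, as in the short sketch below; the sample data is invented for practice.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("interview_prep").getOrCreate()

# Toy user-activity data (invented for practice).
activity = spark.createDataFrame(
    [("u1", "2025-05-01", 3), ("u1", "2025-05-02", 5), ("u2", "2025-05-01", 2)],
    ["user_id", "day", "events"],
)
activity.createOrReplaceTempView("activity")

# The same question answered two ways: total events per user.
by_api = activity.groupBy("user_id").sum("events")
by_sql = spark.sql(
    "SELECT user_id, SUM(events) AS total_events FROM activity GROUP BY user_id"
)
by_api.show()
by_sql.show()
```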
As you prepare for Databricks job interviews, make sure to brush up on your technical skills, stay updated with the latest trends in the field, and showcase your problem-solving abilities. With the right preparation and confidence, you can land your dream job in the exciting world of Databricks in India. Good luck!