2.0 - 5.0 years
14 - 17 Lacs
Mysuru
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
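The listing above describes PySpark jobs that read data, apply business transformations with DataFrames, and read/write through Hive. Below is a minimal sketch of that pattern, assuming hypothetical table and column names (staging.customer_orders, curated.order_summary); reading directly from HBase would additionally require an HBase-Spark connector, which is not shown here.

```python
# Minimal PySpark sketch: read a Hive table, apply business transformations
# with the DataFrame API, and write the result back to Hive.
# Table and column names are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("order-summary-etl")
    .enableHiveSupport()          # expose Hive tables to Spark SQL
    .getOrCreate()
)

# Source-to-target pipeline: Hive staging table -> curated summary table
orders = spark.table("staging.customer_orders")

summary = (
    orders
    .filter(F.col("order_status") == "COMPLETED")
    .withColumn("order_value", F.col("quantity") * F.col("unit_price"))
    .groupBy("customer_id")
    .agg(
        F.count("*").alias("order_count"),
        F.sum("order_value").alias("total_value"),
    )
)

# Overwrite the curated table with the latest aggregates
summary.write.mode("overwrite").saveAsTable("curated.order_summary")

spark.stop()
```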
Posted 1 week ago
7.0 - 12.0 years
15 - 25 Lacs
Kolkata, West Bengal
Work from Office
Inviting applications for the role of Lead Consultant - Data Scientist with AI and generative model experience! We are currently looking for a talented and experienced Data Scientist with a strong background in AI, specifically in building generative AI models using large language models, to join our team. This individual will play a crucial role in developing and implementing data-driven solutions, AI-powered applications, and generative models that will help us stay ahead of the competition and achieve our ambitious goals. Responsibilities: • Collaborate with cross-functional teams to identify, analyze, and interpret complex datasets to develop actionable insights and drive data-driven decision-making. • Design, develop, and implement advanced statistical models, machine learning algorithms, AI applications, and generative models using large language models such as GPT-3 or BERT, along with frameworks like RAG and Knowledge Graphs. • Communicate findings and insights to both technical and non-technical stakeholders through clear and concise presentations, reports, and visualizations. • Continuously monitor and assess the performance of AI models, generative models, and data-driven solutions, refining and optimizing them as needed. • Stay up to date with the latest industry trends, tools, and technologies in data science, AI, and generative models, and apply this knowledge to improve existing solutions and develop new ones. • Mentor and guide junior team members, helping to develop their skills and contribute to their professional growth. Qualifications we seek in you: Minimum Qualifications: • Bachelor's or Master's degree in Data Science, Computer Science, Statistics, or a related field. • Experience in data science, machine learning, AI applications, and generative AI modelling. • Strong expertise in Python, R, or other programming languages commonly used in data science and AI, with experience in implementing large language models and generative AI frameworks. • Proficiency in statistical modelling, machine learning techniques, AI algorithms, and generative model development using large language models such as GPT-3 or BERT, or frameworks like RAG and Knowledge Graphs. • Experience working with large datasets and using various data storage and processing technologies such as SQL, NoSQL, Hadoop, and Spark. • Strong analytical, problem-solving, and critical thinking skills, with the ability to draw insights from complex data and develop actionable recommendations. • Excellent communication and collaboration skills, with the ability to work effectively with cross-functional teams and explain complex concepts to non-technical stakeholders. Preferred Qualifications/Skills: • Experience in deploying AI models, generative models, and applications in a production environment using cloud platforms such as AWS, Azure, or GCP. • Knowledge of industry-specific data sources, challenges, and opportunities relevant to Insurance. • Demonstrated experience in leading data science projects from inception to completion, including project management and team collaboration skills.
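As one illustration of the generative-model work the role describes, the sketch below calls a pretrained language model through the Hugging Face transformers pipeline API; the model name and prompt are placeholder assumptions, and a production RAG setup would add a retrieval step in front of the generation call.

```python
# Minimal sketch of prompting a pretrained language model with the
# Hugging Face transformers library. Model name and prompt are placeholders;
# a RAG system would retrieve supporting documents and prepend them here.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "Summarize the key risks in this insurance claim:"
outputs = generator(prompt, max_new_tokens=60, num_return_sequences=1)

print(outputs[0]["generated_text"])
```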
Posted 2 weeks ago
5.0 - 10.0 years
14 - 17 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 2 weeks ago
15.0 - 19.0 years
0 Lacs
karnataka
On-site
The SF Data Cloud Architect plays a critical role within Salesforce's Professional Services team, assisting in pre-sales and leading the design and implementation of enterprise-grade Data Management solutions. As the SF Data Cloud Architect, you will be responsible for architecting scalable solutions across enterprise landscapes using Data Cloud. Your role involves ensuring that data is ready for enterprise AI, applying data governance guardrails, and supporting enterprise analytics and automation. This position covers the ANZ, ASEAN, and India markets. The ideal candidate for this role should bring deep expertise in data architecture, project lifecycle, and Salesforce ecosystem knowledge. Additionally, possessing strong soft skills, stakeholder engagement capabilities, and technical writing ability is essential. You will collaborate with cross-functional teams to shape the future of the customer's data ecosystem and enable data excellence at scale. Key Responsibilities: - Serve as a Salesforce Data Cloud Trusted Advisor, supporting and leading project delivery and customer engagements during the pre-sales cycle. Provide insights on how Data Cloud contributes to the success of AI projects. - Offer Architecture Support by providing Data and System Architecture guidance to Salesforce Account teams and Customers. This includes reviewing proposed architectures and peer-reviewing project effort estimates, scope, and delivery considerations. - Lead Project Delivery by working on cross-cloud projects and spearheading Data Cloud Design & Delivery. Collaborate with cross-functional teams from Developers to Executives. - Design and guide the customer's enterprise data architecture aligned with their business goals. Emphasize the importance of Data Ethics and Privacy by ensuring that customer solutions adhere to relevant regulations and best practices in data security and privacy. - Lead Data Cloud architecture enablement for key domains and cross-cloud teams. - Collaborate with analytics and AI teams to ensure data readiness for advanced analytics, reporting, and AI/ML initiatives. - Engage with stakeholders across multiple Salesforce teams and projects to deliver aligned and trusted data solutions. Influence Executive Customer stakeholders while aligning technology strategy with business value and ROI. Build strong relationships with internal and external teams to contribute to broader goals and growth. - Create and maintain high-quality architecture blueprints, design documents, standards, and technical guidelines. Technical Skills: - Over 15 years of experience in data architecture or consulting, with expertise in solution design and project delivery. - Deep knowledge of MDM, Data Distribution, and Data Modelling concepts. - Expertise in data modelling with a strong understanding of metadata and lineage. - Experience in executing data strategies, landscape architecture assessments, and proof-of-concepts. - Excellent communication, stakeholder management, and presentation skills. - Strong technical writing and documentation abilities. - Basic understanding of Hadoop and Spark fundamentals is an advantage. - Understanding of Data Platforms such as Snowflake, Databricks, AWS, GCP, and Microsoft Azure. - Experience with tools like Salesforce Data Cloud or similar enterprise Data platforms. Hands-on deep Data Cloud experience is a strong plus. - Working knowledge of enterprise data warehouse, data lake, and data hub concepts.
- Strong understanding of Salesforce Products and functional domains like Technology, Finance, Telco, Manufacturing, and Retail is beneficial. Expected Qualifications: - Salesforce Certified Data Cloud Consultant - Highly Preferred. - Salesforce Data Architect - Preferred. - Salesforce Application Architect - Preferred. - AWS Spark/DL, Azure Databricks, Fabric, Google Cloud, Snowflake, or similar - Preferred.
Posted 2 weeks ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 4 weeks ago
7.0 - 12.0 years
7 - 11 Lacs
Mumbai
Work from Office
Company: Apptad Technologies Pvt Ltd. Industry: Employment Firms/Recruitment Services Firms. Experience: 7 to 13 years. Title: Data Scientist (Python and Notebooks). We are seeking a talented Data Scientist to join our team and drive data-driven decision-making across our organization. The ideal candidate will have a strong background in statistical analysis, machine learning, and data visualization, with experience working with large datasets in a Teradata environment. Responsibilities: Design and implement end-to-end data science projects, from problem definition to model deployment. Develop and apply advanced machine learning algorithms and statistical models to solve complex business problems. Collaborate with cross-functional teams to identify opportunities for data-driven improvements. Conduct exploratory data analysis and feature engineering to prepare data for modeling. Create and maintain dashboards and reports to communicate insights to stakeholders. Optimize data collection procedures and ensure data quality. Stay current with the latest advancements in data science and machine learning techniques. Implement and maintain on-premise AI/ML solutions. Apply Explainable AI techniques to enhance model interpretability and transparency. Requirements: 10-13 years of experience in data science or a related field. Strong proficiency in Python, R, and SQL. Experience with Teradata and data lake/lakehouse architectures. Expertise in machine learning algorithms, statistical modeling, and data visualization. Familiarity with big data technologies (e.g., Hadoop, Spark). Excellent problem-solving and communication skills. Experience with version control systems (e.g., Git). Experience with on-premise AI/ML solutions. Knowledge of Explainable AI methods and their practical applications. Ref: 6566343
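The posting above asks for Explainable AI alongside conventional model building; a minimal sketch of one common approach (SHAP values on a tree-based model, using scikit-learn and the shap package) follows. The data here is synthetic and the feature setup is an illustrative assumption, not anything from the posting; in practice the inputs would come from a warehouse such as Teradata via SQL.

```python
# Minimal sketch: fit a tree-based classifier and explain its predictions
# with SHAP values. Data is synthetic and purely illustrative.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 4))                  # 500 rows, 4 numeric features
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)  # synthetic target

model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X, y)

# TreeExplainer computes per-feature contributions for each prediction
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:10])

print("SHAP values for the first prediction:", shap_values[0])
```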
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 9.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Looking for a senior PySpark developer with 6+ years of hands-on experience. Build and manage large-scale data solutions using tools like PySpark, Hadoop, Hive, Python, and SQL. Create workflows to process data using IBM TWS. Able to use PySpark to create different reports and handle large datasets. Use HQL/SQL/Hive for ad-hoc data queries, generate reports, and store data in HDFS. Able to deploy code using Bitbucket, PyCharm, and TeamCity. Can manage people, communicate with several teams, and explain problems/solutions to the business team in a non-technical manner. Primary Skill: PySpark, Hadoop, Spark - One to Three Years, Developer / Software Engineer
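The role above centers on ad-hoc HQL/SQL reporting over large datasets and landing the results in HDFS. The sketch below shows that pattern with Spark SQL; the Hive table name, columns, and output path are hypothetical assumptions for illustration.

```python
# Minimal sketch: run an ad-hoc Hive/SQL query with PySpark and persist the
# report to HDFS as Parquet. Table name and output path are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("adhoc-report")
    .enableHiveSupport()
    .getOrCreate()
)

report = spark.sql("""
    SELECT region,
           COUNT(*)    AS txn_count,
           SUM(amount) AS total_amount
    FROM   finance.transactions
    WHERE  txn_date >= '2024-01-01'
    GROUP  BY region
""")

# Write the report to HDFS so downstream jobs or analysts can read it back
report.write.mode("overwrite").parquet("hdfs:///reports/region_summary")

spark.stop()
```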
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
7.0 - 12.0 years
6 - 16 Lacs
Noida, Gurugram, Delhi / NCR
Work from Office
Role & responsibilities: Total experience: 7 years. Relevant experience: 6 years. Mandatory skills: Hadoop, Apache Spark, Hive; AWS cloud services including S3, Redshift, EMR, etc.; SQL. Job Description: Experience in Big Data technologies like Hadoop, Apache Spark, and Hive. Experience in AWS cloud services including S3, Redshift, EMR, etc. Strong expertise in RDBMS and SQL. Nice to have: Good experience in Linux and shell scripting. Experience building data pipelines using Apache Airflow / Control-M. Practical experience in Core Java/Python/Scala. Experience with SDLC tools (e.g., Bamboo, JIRA, Git, Confluence, Bitbucket). Experience in Data Modelling, Data Quality, and Load Assurance. Ability to communicate problems and solutions effectively with both business and technical stakeholders (written and verbal). Added advantages: Scheduling tools (Control-M, Airflow); monitoring tools (Sumo Logic, Splunk); familiarity with Incident Management processes.
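Since the posting asks for experience building data pipelines with Apache Airflow or Control-M, a minimal Airflow DAG sketch is included below; the DAG id, schedule, commands, bucket, and file paths are illustrative assumptions only, and the example assumes Airflow 2.4 or later.

```python
# Minimal Apache Airflow sketch: a daily DAG that stages a file to S3 and then
# triggers a Spark job. DAG id, schedule, paths, and commands are illustrative.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_sales_ingest",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # requires Airflow 2.4+
    catchup=False,
) as dag:
    stage_to_s3 = BashOperator(
        task_id="stage_to_s3",
        bash_command="aws s3 cp /data/exports/sales.csv s3://my-bucket/raw/sales/",
    )

    run_spark_job = BashOperator(
        task_id="run_spark_job",
        bash_command="spark-submit --master yarn /opt/jobs/transform_sales.py",
    )

    # Land the raw file first, then transform it with Spark
    stage_to_s3 >> run_spark_job
```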
Posted 1 month ago
10.0 - 18.0 years
30 - 40 Lacs
Bengaluru, Delhi / NCR, Mumbai (All Areas)
Hybrid
Role & responsibilities Job Description: We are seeking a talented and experienced BI Manager to join our dynamic team. The ideal candidate will have a strong background in BI tools, along with proficiency in SQL and ETL processes or tools. Responsibilities: Work in an IC role or lead and mentor a team of BI developers, ensuring adherence to best practices in BI development and data visualization. Collaborate with stakeholders to define business requirements and design impactful dashboards using Power BI. Use Python to automate data extraction and transformation. Oversee ETL processes and integrate data from various sources into Power BI. Enhance Power BI reports with Python scripts for advanced analytics and custom calculations. Ensure strong data governance, quality, and user-friendly visualization practices. Communicate complex data concepts clearly to both technical and non-technical teams. Good documentation skills to deliver BRDs, architecture and specification documents, project plans, etc. Lead the creation of RFPs for BI tools and solutions, working with vendors to select the best technology. Manage POV exercises to demonstrate the value of BI solutions and secure stakeholder buy-in. Develop MVPs to quickly deliver core BI functionalities, allowing for iterative feedback and faster delivery. Qualifications: 9+ years of experience in the DWH/BI domain with a focus on Power BI development. Proficiency in at least two BI tools: Qlik, Power BI, Tableau. Showcase experience in 2-3 complete life cycle implementations of BI projects. Willingness and readiness to cross-skill in multiple BI tools and platforms. Experience with Power Platform (Power Apps, Power Automate), including building apps and automating workflows. Advanced SQL skills for querying and manipulating data from relational databases. Basic Python skills for ETL automation, with experience in Pandas, NumPy, and pyodbc. Proven experience leading BI teams and managing priorities. Experience in preparing RFPs, leading POVs, and delivering MVPs for BI solutions. Strong analytical and problem-solving skills, with attention to detail. Excellent communication and presentation skills. Good to have: Knowledge of other BI tools and platforms. Experience working with cloud environments (e.g., AWS, Azure, GCP, Snowflake). Knowledge of Alteryx, Informatica, or other ETL tools. Knowledge of embedding BI visualizations and reports into web applications. Exposure to big data tools like Hadoop or Spark. Familiarity with AI-powered assistant tools (e.g., GenAI copilot).
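The responsibilities above include using Python (Pandas, NumPy, pyodbc) to automate extraction and transformation before data reaches Power BI. A minimal sketch of that step follows; the connection string, query, table, and output path are placeholder assumptions.

```python
# Minimal sketch: pull data from a relational database with pyodbc, reshape it
# with pandas, and save a CSV that a Power BI dataset can refresh from.
# Connection string, query, and file path are placeholders.
import pandas as pd
import pyodbc

conn_str = (
    "DRIVER={ODBC Driver 17 for SQL Server};"
    "SERVER=warehouse.example.com;DATABASE=sales;Trusted_Connection=yes;"
)

query = """
    SELECT order_date, region, SUM(net_amount) AS revenue
    FROM dbo.orders
    GROUP BY order_date, region
"""

with pyodbc.connect(conn_str) as conn:
    df = pd.read_sql(query, conn)

# Light transformation: pivot regions into columns for easier visuals
pivoted = df.pivot_table(index="order_date", columns="region", values="revenue")

pivoted.to_csv("revenue_by_region.csv")
```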
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Pune
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
7 - 12 Lacs
Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 10.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
0.0 - 5.0 years
0 - 5 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Big Data Engineer at IBM, you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets. In this role, your responsibilities may include: developing, maintaining, evaluating, and testing big data solutions, and taking part in data engineering activities like creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Big Data development - Hadoop, Hive, Spark, PySpark, strong SQL. Ability to incorporate a variety of statistical and machine learning techniques. Basic understanding of Cloud (AWS, Azure, etc.). Ability to use programming languages like Java, Python, Scala, etc., to build pipelines to extract and transform data from a repository to a data consumer. Ability to use Extract, Transform, and Load (ETL) tools and/or data integration or federation tools to prepare and transform data as needed. Ability to use leading-edge tools such as Linux, SQL, Python, Spark, Hadoop, and Java. Preferred technical and professional experience: Basic understanding of or experience with predictive/prescriptive modeling. You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions.
Posted 1 month ago
10.0 - 15.0 years
25 - 32 Lacs
Hyderabad
Hybrid
Hello, urgent job opening for a Data Science Manager role @ GlobalData (Hyd). The job description is given below; please go through it to understand the requirement. If the requirement matches your profile and you are interested in applying, please share your updated resume at m.salim@globaldata.com. Mention the subject line: Applying for Data Science Manager @ GlobalData (Hyd). Share the following details in the mail: Full Name, Mobile #, Qualification, Company Name, Designation, Total Work Experience (Years), Current CTC, Expected CTC, Notice Period, Current Location/willing to relocate to Hyderabad? Office Address: 3rd Floor, Jyoti Pinnacle Building, Opp. Prestige IVY League Apartments, Kondapur Road, Hyderabad, Telangana 500081. Job Description: About the Role: We are seeking an experienced Data Science Manager to lead our data science team in transforming complex data into actionable business insights. The ideal candidate will combine strong technical expertise with leadership skills to guide a team of data scientists while collaborating with cross-functional stakeholders to drive data-informed decision making throughout the organization. Key Responsibilities: Lead and mentor a team of 5-8 data scientists, providing technical guidance and career development. Develop and implement the organization's data science strategy and roadmap in alignment with business objectives. Oversee the full lifecycle of data science projects from conception to deployment. Establish best practices, methodologies, and standards for data science processes. Drive the development of machine learning models, predictive analytics, and other advanced analytical solutions. Collaborate with business leaders to identify opportunities for data-driven improvements. Translate complex technical concepts into clear insights for non-technical stakeholders. Ensure data quality, integrity, and proper governance in all data science initiatives. Stay current with emerging technologies and methodologies in data science and AI. Manage project timelines, resources, and budgets to ensure timely delivery of solutions. Qualifications: Master's degree or PhD in Computer Science, Statistics, Mathematics, or a related quantitative field. 7+ years of experience in data science or related analytical roles. 5+ years of experience managing data science teams and projects. Strong programming skills in Python, R, or similar languages. Expertise in machine learning algorithms, statistical analysis, and data mining techniques. Experience with big data technologies (Hadoop, Spark) and cloud platforms (AWS, Azure, GCP). Proven track record of delivering impactful data science solutions that drive business value. Excellent communication and presentation skills, with the ability to translate technical concepts for business audiences. Strong project management and organizational abilities. Business acumen with an understanding of how data science can drive organizational success. Preferred Qualifications: Experience in our industry sector. Knowledge of data visualization tools (Tableau, Power BI). Familiarity with MLOps and model deployment. Experience implementing NLP or computer vision solutions. Agile/Scrum certification or experience. Thanks & Regards, Salim (Human Resources)
Posted 1 month ago
5.0 - 7.0 years
5 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 7.0 years
5 - 7 Lacs
Bengaluru / Bangalore, Karnataka, India
On-site
Your role and responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 7.0 years
5 - 7 Lacs
Mysore, Karnataka, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago
5.0 - 7.0 years
5 - 7 Lacs
Pune, Maharashtra, India
On-site
Introduction: In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology. Your role and responsibilities: As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities such as creating pipelines/workflows from Source to Target and implementing solutions that tackle the clients' needs. Your primary responsibilities include: Design, build, optimize, and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy, and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need it. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Must have 5+ years of experience in Big Data - Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Experience developing Python and PySpark programs for data analysis. Good working experience with Python to develop a custom framework for generating rules (much like a rules engine). Experience developing Python code to gather data from HBase and designing solutions implemented with PySpark. Experience using Apache Spark DataFrames/RDDs to apply business transformations and Hive Context objects to perform read/write operations. Preferred technical and professional experience: Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java, and Scala.
Posted 1 month ago