
23 Pig Jobs

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

4.0 - 8.0 years

10 - 20 Lacs

Pune, Bengaluru

Work from Office


We are looking for skilled Hadoop and Google Cloud Platform (GCP) engineers to join our dynamic team. If you have hands-on experience with Big Data technologies and cloud ecosystems, we want to hear from you!
Key Skills:
- Hadoop ecosystem (HDFS, MapReduce, YARN, Hive, Spark)
- Google Cloud Platform (BigQuery, Dataproc, Cloud Composer)
- Data ingestion & ETL pipelines
- Strong programming skills (Java, Python, Scala)
- Experience with real-time data processing (Kafka, Spark Streaming)
Why Join Us?
- Work on cutting-edge Big Data projects
- Collaborate with a passionate and innovative team
- Opportunities for growth and learning
Interested candidates, please share your updated resume or connect with us directly!
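The MapReduce model named in the key skills above can be illustrated with a minimal pure-Python sketch of its map, shuffle, and reduce phases. This is a conceptual toy, not Hadoop itself; the function names and sample documents are invented for illustration.

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.lower().split():
            yield (word, 1)

def shuffle_phase(pairs):
    """Shuffle: group emitted values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate each key's values; here, sum the counts per word."""
    return {word: sum(counts) for word, counts in groups.items()}

docs = ["big data on hadoop", "spark and hadoop"]
counts = reduce_phase(shuffle_phase(map_phase(docs)))
```

On a real cluster the shuffle is done by the framework across nodes; the three-phase structure is the same.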

Posted 6 days ago

Apply

2.0 - 3.0 years

4 - 5 Lacs

Hyderabad

Work from Office


Duration: 12 months | Job Type: Contract | Work Type: Onsite
Job Description: Analyzes business requirements/processes and system integration points to determine appropriate technology solutions. Designs, codes, tests, and documents applications based on system and user requirements.
Requirements:
- 2-4 years of relevant IT experience in data-warehousing technologies, with excellent communication and analytical skills
- Informatica 9 or above as an ETL tool
- Teradata/Oracle/SQL Server as the warehouse database
- Very strong in SQL / macros
- Basic-to-intermediate UNIX commands
- Knowledge of Hadoop: HDFS, Hive, Pig, and YARN
- Knowledge of the StreamSets ingestion tool
- Good to have: knowledge of Spark and Kafka
- Exposure to scheduling tools like Control-M
- Excellent analytical and problem-solving skills are a must
- Excellent communication skills (oral and written)
- Experience across diverse industries, tools, and data-warehousing technologies
Responsibilities:
- Prepares flow charts and systems diagrams to assist in problem analysis; responsible for preparing design documentation.
- Designs, codes, tests, and debugs software according to the client's standards, policies, and procedures.
- Codes, tests, and documents programs according to system standards; prepares test data for unit, string, and parallel testing.
- Analyzes business needs and creates software solutions; evaluates and recommends software and hardware solutions to meet user needs.
- Interacts with business users and IT to define current and future application requirements.
- Executes schedules, costs, and documentation to ensure the project comes to a successful conclusion; initiates corrective action to stay on schedule.
- May assist in orienting, training, assigning, and checking the work of lower-level employees; leads small- to moderate-budget projects.
Knowledge and Skills: Possesses and applies a broad knowledge of application programming processes and procedures to the completion of complex assignments. Competent to analyze diverse and complex problems and to work in most phases of applications programming. Beginning to lead small projects or to offer programming solutions at an advanced level, including coding, testing, and debugging of standard applications. Advanced ability to troubleshoot program errors and an advanced understanding of how technology decisions relate to business needs.
Mandatory Skills: Informatica 9 or above as an ETL tool; Teradata/Oracle/SQL Server as the warehouse database; very strong SQL/macros; good knowledge of UNIX commands.
Experience: Total experience 2-3 years; relevant experience 2 years.
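The "very strong in SQL" requirement above typically means warehouse-style aggregation queries. A sketch of one, runnable anywhere via Python's stdlib sqlite3 (the table and column names are illustrative, not from the posting, and a real Teradata/Oracle warehouse would differ in dialect):

```python
import sqlite3

# In-memory database with a hypothetical sales table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE sales (region TEXT, amount REAL);
    INSERT INTO sales VALUES ('North', 100.0), ('North', 250.0), ('South', 75.0);
""")

# Warehouse-style aggregate: total sales per region.
rows = conn.execute(
    "SELECT region, SUM(amount) AS total FROM sales GROUP BY region ORDER BY region"
).fetchall()
```

The same GROUP BY pattern is the core of most reporting-layer SQL, whatever the engine.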

Posted 3 weeks ago

Apply

5 - 10 years

15 - 30 Lacs

Noida, Gurugram, Delhi / NCR

Hybrid


Skills:
- Mandatory: SQL, Python, Databricks, Spark/PySpark
- Good to have: MongoDB, Dataiku DSS
- Experience in data processing using Python/Scala
- Advanced working SQL knowledge and expertise with relational databases
- Need early joiners
Required candidate profile: ETL development tools like Databricks/Airflow/Snowflake; expert in building and optimizing big-data pipelines, architectures, and data sets; proficient in Big Data tools and the surrounding ecosystem.
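"Building and optimizing data pipelines", as required above, is at heart composing transformation stages. A plain-Python sketch of that idea, independent of Databricks or Spark (the stage names and records are made up for illustration):

```python
from functools import reduce

def clean(records):
    """Drop records with a missing amount."""
    return [r for r in records if r.get("amount") is not None]

def enrich(records):
    """Add a derived column; here a hypothetical unit conversion."""
    return [{**r, "amount_paise": r["amount"] * 100} for r in records]

def pipeline(records, stages):
    """Run records through each stage in order, like chained DataFrame transforms."""
    return reduce(lambda data, stage: stage(data), stages, records)

raw = [{"amount": 5}, {"amount": None}]
out = pipeline(raw, [clean, enrich])
```

In Spark/PySpark the same shape appears as chained `.filter(...).withColumn(...)` calls over a distributed DataFrame instead of a Python list.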

Posted 1 month ago

Apply

1 - 4 years

3 - 7 Lacs

Pune

Work from Office


What are we looking for? Searce is looking for a Data Engineer who is able to work with business leads, analysts, data scientists, and fellow engineers to build data products that empower better decision making; someone who is passionate about the data quality of our business metrics and geared up to provide flexible solutions that can be scaled up to respond to broader business questions.
What you'll do as a Data Engineer with us:
- Understand the business requirements and translate them into data services that solve the business and data problems
- Develop and manage data pipelines (ETL/ELT jobs) and retrieve applicable datasets for specific use cases using cloud data platforms and tools
- Explore new technologies and tools to design complex data modeling scenarios and transformations, and provide optimal data engineering solutions
- Build data integration layers to connect with different heterogeneous sources using various approaches
- Understand data and metadata to support consistency of information retrieval, combination, analysis, and reporting
- Troubleshoot and monitor data pipelines to keep the reporting layer highly available
- Collaborate with many teams, engineering and business, to build scalable and optimized data solutions
- Spend significant time enhancing technical excellence via certifications, contributions to blogs, etc.
What are the must-haves to join us? Is education overrated? Yes, we believe so. But there is no way to locate you otherwise, so we might look for at least a Bachelor's degree in Computer Science or in Engineering/Technology.
- 1-4 years of experience building data pipelines or data ingestion for both batch and streaming data, from different sources to a database, data warehouse, or data lake
- Hands-on knowledge of/experience with SQL, Python, Java, or Scala programming
- Experience with any cloud computing platform, e.g. AWS (S3, Lambda, Redshift, Athena) or GCP (GCS, Cloud SQL, Spanner, Dataplex, BigLake, Dataflow, Dataproc, Pub/Sub, BigQuery)
- Experience/knowledge with Big Data tools (Hadoop, Hive, Sqoop, Pig, Spark, Presto)
- Data pipeline & orchestration tools (Oozie, Airflow, NiFi)
- Any streaming engine (Kafka, Storm, Spark Streaming, Pub/Sub)
- Any relational database or data warehousing experience
- Any ETL tool experience (Informatica, Talend, Pentaho, Business Objects Data Services, Hevo), or at least good working knowledge
- Good communication skills, the right attitude, open-mindedness, flexibility to learn and adapt to a fast-growing data culture, and proactiveness in coordinating with other teams and providing quick data solutions
- Experience working independently and strong analytical skills
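Orchestration tools like the Oozie, Airflow, and NiFi named in this listing all reduce to running tasks in dependency order over a DAG. A minimal stdlib sketch of that scheduling idea (the task names and dependencies are hypothetical):

```python
from graphlib import TopologicalSorter

# Hypothetical pipeline DAG: each task maps to the set of tasks it depends on,
# analogous to upstream/downstream relationships in an Airflow DAG.
dag = {
    "extract": set(),
    "transform": {"extract"},
    "load": {"transform"},
    "report": {"load"},
}

# static_order() yields tasks in an order that respects every dependency.
order = list(TopologicalSorter(dag).static_order())
```

Real orchestrators add scheduling, retries, and parallel execution of independent branches on top of exactly this topological ordering.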

Posted 2 months ago

Apply

6 - 11 years

15 - 30 Lacs

Pune

Work from Office


Develop models for underwriting, claims management, fraud detection, and risk assessment. Use advanced ML techniques to analyze and enhance retention strategies. Process and analyze large datasets from various insurance systems to identify patterns.
Required candidate profile: Proficient in Python and SQL for data analysis and modeling. Familiar with data science libraries like Pandas, NumPy, scikit-learn, TensorFlow, or PyTorch. Experience with data visualization tools.

Posted 2 months ago

Apply

5 - 9 years

15 - 25 Lacs

Thane

Work from Office


Responsibilities: Data Scientist responsibilities include identifying, developing, and implementing the appropriate statistical techniques, algorithms, and data mining analyses to create new, scalable solutions that address business challenges across the organization, as well as providing actionable insights with a clear impact on ROI.
- Define, develop, maintain, and evolve data models, tools, and capabilities.
- Communicate your findings to the appropriate teams through visualizations.
- Collaborate with and communicate findings to diverse stakeholders.
- Provide solutions including, but not limited to: customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, recommender systems, modeling response to incentives, marketing mix optimization, price optimization, natural language processing, and object detection/image recognition.
- Create interactive tools using cutting-edge visualization techniques (beyond standard visualization tools like Tableau, Spotfire, QlikView, etc.).
Requirements:
- Bachelor's/Master's/Ph.D. degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field, and the ability to break down complex business problems.
- 5 years of experience in a related position, as a data scientist or business analyst building predictive analytics solutions for various types of business problems.
- Advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining.
- Programming background and expertise in building models using at least one of the following languages: SAS, Python, R.
- Exposure to Big Data platforms like Hadoop and its ecosystem (Hive, Pig, Sqoop, Mahout).
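Churn and propensity modeling, two of the solution areas this posting lists, commonly come down to scoring a customer with a fitted logistic model. A hedged pure-Python sketch of that scoring step (the feature names and coefficients are invented, not from any real model):

```python
import math

def churn_score(weights, features, bias=0.0):
    """Logistic-regression scoring: map a weighted feature sum to a probability."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical coefficients, e.g. as produced by fitting in scikit-learn or SAS:
# positive weight on support tickets, negative weight on tenure in months.
w = [1.2, -0.8]
score = churn_score(w, [3.0, 12.0])   # 3 tickets, 12 months active
```

In practice the weights come from training on labeled churn data; the scoring formula itself is this simple.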

Posted 2 months ago

Apply

3 - 8 years

15 - 25 Lacs

Meerut

Work from Office


Responsibilities: Data Scientist responsibilities include identifying, developing, and implementing the appropriate statistical techniques, algorithms, and data mining analyses to create new, scalable solutions that address business challenges across the organization, as well as providing actionable insights with a clear impact on ROI.
- Define, develop, maintain, and evolve data models, tools, and capabilities.
- Communicate your findings to the appropriate teams through visualizations.
- Collaborate with and communicate findings to diverse stakeholders.
- Provide solutions including, but not limited to: customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, recommender systems, modeling response to incentives, marketing mix optimization, price optimization, natural language processing, and object detection/image recognition.
- Create interactive tools using cutting-edge visualization techniques (beyond standard visualization tools like Tableau, Spotfire, QlikView, etc.).
Requirements:
- Bachelor's/Master's/Ph.D. degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field, and the ability to break down complex business problems.
- 5 years of experience in a related position, as a data scientist or business analyst building predictive analytics solutions for various types of business problems.
- Advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining.
- Programming background and expertise in building models using at least one of the following languages: SAS, Python, R.
- Exposure to Big Data platforms like Hadoop and its ecosystem (Hive, Pig, Sqoop, Mahout).

Posted 2 months ago

Apply

1 - 2 years

3 - 4 Lacs

Mumbai

Work from Office


about the role
A Data Engineer identifies the business problem and translates it into data services and engineering outcomes. You will deliver data solutions that empower better decision making and that scale flexibly to respond to broader business questions.
key responsibilities
As a Data Engineer, you are a full-stack data engineer who loves solving business problems. You work with business leads, analysts, and data scientists to understand the business domain, and engage with fellow engineers to build data products that empower better decision making. You are passionate about the data quality of our business metrics and the flexibility of your solutions to scale in response to broader business questions. If you love to solve problems using your skills, then come join Team Searce. We have a casual and fun office environment that actively steers clear of rigid "corporate" culture, focuses on productivity and creativity, and allows you to be part of a world-class team while still being yourself.
- Consistently strive to acquire new skills in Cloud, DevOps, Big Data, AI, and ML
- Understand the business problem and translate it into data services and engineering outcomes
- Explore new technologies and learn new techniques to solve business problems creatively
- Think big! Drive the strategy for better data quality for the customers
- Collaborate with many teams, engineering and business, to build better data products
preferred qualifications
- 1-2 years of hands-on experience with at least one programming language (Python, Java, Scala)
- Understanding of SQL is a must
- Big Data (Hadoop, Hive, YARN, Sqoop); MPP platforms (Spark, Pig, Presto)
- Data pipeline and scheduler tools (Oozie, Airflow, NiFi)
- Streaming engines (Kafka, Storm, Spark Streaming)
- Any relational database or DW experience; any ETL tool experience
- Hands-on experience in pipeline design, ETL, and application development
- Good communication skills; experience working independently and strong analytical skills
- Dependable and a good team player; desire to learn and work with new technologies; automation in your blood

Posted 2 months ago

Apply

5 - 10 years

8 - 14 Lacs

Kota

Work from Office


- PhD or MS in Computer Science, Computational Linguistics, or Artificial Intelligence with a heavy focus on NLP/text mining, plus 5 years of relevant industry experience.
- Creativity, resourcefulness, and a collaborative spirit.
- Knowledge and working experience in one or more of the following areas: natural language processing, clustering and classification of text, question answering, text mining, information retrieval, distributional semantics, knowledge engineering, search ranking, and recommendation.
- Deep experience with text-wrangling and pre-processing skills such as document parsing and cleanup, vectorization, tokenization, language modeling, phrase detection, etc.
- Proficient programming skills in a high-level language (e.g. Python, R, Java, Scala).
- Comfortable with rapid prototyping practices.
- Comfortable developing clean, production-ready code.
- Comfortable pre-processing unstructured or semi-structured data.
- Experience with statistical data analysis, experimental design, and hypothesis validation.
Project-based experience with some of the following tools:
- Natural language processing (e.g. spaCy, NLTK, OpenNLP, or similar)
- Applied machine learning (e.g. scikit-learn, SparkML, H2O, or similar)
- Information retrieval and search engines (e.g. Elasticsearch/ELK, Solr/Lucene)
- Distributed computing platforms such as Spark, Hadoop (Hive, HBase, Pig), GraphLab
- Databases (traditional and NoSQL)
- Proficiency in traditional machine learning models such as LDA/topic modeling, graphical models, etc.
- Familiarity with deep learning architectures and frameworks such as PyTorch, TensorFlow, Keras.
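Two of the pre-processing skills this posting names, tokenization and vectorization, can be sketched in stdlib Python as a bag-of-words encoder. This is a deliberate simplification of what spaCy/NLTK and scikit-learn's vectorizers provide; the regex tokenizer and sample documents are illustrative only.

```python
from collections import Counter
import re

def tokenize(text):
    """Naive lowercase word tokenizer (real NLP libraries do far more)."""
    return re.findall(r"[a-z']+", text.lower())

def vectorize(docs):
    """Bag-of-words: map each document to term counts over a shared, sorted vocabulary."""
    tokenized = [tokenize(d) for d in docs]
    vocab = sorted({t for doc in tokenized for t in doc})
    vectors = [[Counter(doc)[term] for term in vocab] for doc in tokenized]
    return vocab, vectors

vocab, vectors = vectorize(["Text mining and text search", "search rank"])
```

These count vectors are the input that classifiers, topic models like LDA, and TF-IDF weighting all build on.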

Posted 2 months ago

Apply

5 - 10 years

8 - 14 Lacs

Kolkata

Work from Office


- PhD or MS in Computer Science, Computational Linguistics, or Artificial Intelligence with a heavy focus on NLP/text mining, plus 5 years of relevant industry experience.
- Creativity, resourcefulness, and a collaborative spirit.
- Knowledge and working experience in one or more of the following areas: natural language processing, clustering and classification of text, question answering, text mining, information retrieval, distributional semantics, knowledge engineering, search ranking, and recommendation.
- Deep experience with text-wrangling and pre-processing skills such as document parsing and cleanup, vectorization, tokenization, language modeling, phrase detection, etc.
- Proficient programming skills in a high-level language (e.g. Python, R, Java, Scala).
- Comfortable with rapid prototyping practices.
- Comfortable developing clean, production-ready code.
- Comfortable pre-processing unstructured or semi-structured data.
- Experience with statistical data analysis, experimental design, and hypothesis validation.
Project-based experience with some of the following tools:
- Natural language processing (e.g. spaCy, NLTK, OpenNLP, or similar)
- Applied machine learning (e.g. scikit-learn, SparkML, H2O, or similar)
- Information retrieval and search engines (e.g. Elasticsearch/ELK, Solr/Lucene)
- Distributed computing platforms such as Spark, Hadoop (Hive, HBase, Pig), GraphLab
- Databases (traditional and NoSQL)
- Proficiency in traditional machine learning models such as LDA/topic modeling, graphical models, etc.
- Familiarity with deep learning architectures and frameworks such as PyTorch, TensorFlow, Keras.

Posted 2 months ago

Apply

3 - 8 years

15 - 25 Lacs

Hyderabad

Work from Office


Responsibilities: Data Scientist responsibilities include identifying, developing, and implementing the appropriate statistical techniques, algorithms, and data mining analyses to create new, scalable solutions that address business challenges across the organization, as well as providing actionable insights with a clear impact on ROI.
- Define, develop, maintain, and evolve data models, tools, and capabilities.
- Communicate your findings to the appropriate teams through visualizations.
- Collaborate with and communicate findings to diverse stakeholders.
- Provide solutions including, but not limited to: customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, recommender systems, modeling response to incentives, marketing mix optimization, price optimization, natural language processing, and object detection/image recognition.
- Create interactive tools using cutting-edge visualization techniques (beyond standard visualization tools like Tableau, Spotfire, QlikView, etc.).
Requirements:
- Bachelor's/Master's/Ph.D. degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field, and the ability to break down complex business problems.
- 5 years of experience in a related position, as a data scientist or business analyst building predictive analytics solutions for various types of business problems.
- Advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining.
- Programming background and expertise in building models using at least one of the following languages: SAS, Python, R.
- Exposure to Big Data platforms like Hadoop and its ecosystem (Hive, Pig, Sqoop, Mahout).

Posted 2 months ago

Apply

3 - 8 years

15 - 25 Lacs

Chennai

Work from Office


Responsibilities: Data Scientist responsibilities include identifying, developing, and implementing the appropriate statistical techniques, algorithms, and data mining analyses to create new, scalable solutions that address business challenges across the organization, as well as providing actionable insights with a clear impact on ROI.
- Define, develop, maintain, and evolve data models, tools, and capabilities.
- Communicate your findings to the appropriate teams through visualizations.
- Collaborate with and communicate findings to diverse stakeholders.
- Provide solutions including, but not limited to: customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, recommender systems, modeling response to incentives, marketing mix optimization, price optimization, natural language processing, and object detection/image recognition.
- Create interactive tools using cutting-edge visualization techniques (beyond standard visualization tools like Tableau, Spotfire, QlikView, etc.).
Requirements:
- Bachelor's/Master's/Ph.D. degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field, and the ability to break down complex business problems.
- 5 years of experience in a related position, as a data scientist or business analyst building predictive analytics solutions for various types of business problems.
- Advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining.
- Programming background and expertise in building models using at least one of the following languages: SAS, Python, R.
- Exposure to Big Data platforms like Hadoop and its ecosystem (Hive, Pig, Sqoop, Mahout).

Posted 2 months ago

Apply

3 - 8 years

15 - 25 Lacs

Bengaluru

Work from Office


Responsibilities: Data Scientist responsibilities include identifying, developing, and implementing the appropriate statistical techniques, algorithms, and data mining analyses to create new, scalable solutions that address business challenges across the organization, as well as providing actionable insights with a clear impact on ROI.
- Define, develop, maintain, and evolve data models, tools, and capabilities.
- Communicate your findings to the appropriate teams through visualizations.
- Collaborate with and communicate findings to diverse stakeholders.
- Provide solutions including, but not limited to: customer segmentation and targeting, propensity modeling, churn modeling, lifetime value estimation, forecasting, recommender systems, modeling response to incentives, marketing mix optimization, price optimization, natural language processing, and object detection/image recognition.
- Create interactive tools using cutting-edge visualization techniques (beyond standard visualization tools like Tableau, Spotfire, QlikView, etc.).
Requirements:
- Bachelor's/Master's/Ph.D. degree in Math, Computer Science, Information Systems, Machine Learning, Statistics, Econometrics, Applied Mathematics, Operations Research, or a related technical field, and the ability to break down complex business problems.
- 5 years of experience in a related position, as a data scientist or business analyst building predictive analytics solutions for various types of business problems.
- Advanced knowledge of statistical techniques, machine learning algorithms, data mining, and text mining.
- Programming background and expertise in building models using at least one of the following languages: SAS, Python, R.
- Exposure to Big Data platforms like Hadoop and its ecosystem (Hive, Pig, Sqoop, Mahout).

Posted 2 months ago

Apply

7 - 10 years

9 - 12 Lacs

Mumbai

Work from Office


Position Overview
Synechron is seeking a skilled and experienced ETL Developer to join our team. The ideal candidate will have strong proficiency in ETL tools, a deep understanding of big data technologies, and expertise in cloud data warehousing solutions. You will play a critical role in designing, developing, and maintaining ETL processes to ensure data integration and transformation for our high-profile clients.
Software Requirements
- Proficiency in ETL tools such as Informatica.
- Strong experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig).
- Expertise in cloud data warehousing solutions, specifically Snowflake.
- Knowledge of SQL and data modeling.
- Familiarity with data integration and transformation techniques.
- Understanding of data governance and data quality principles.
Overall Responsibilities
- Design, develop, and maintain ETL processes to extract, transform, and load data from various sources into Snowflake and other data warehouses.
- Collaborate with data architects and analysts to understand data requirements and ensure data is delivered accurately and on time.
- Optimize ETL processes for performance and scalability.
- Implement data quality checks and data governance policies.
- Troubleshoot and resolve data issues and discrepancies.
- Document ETL processes and maintain technical documentation.
- Stay updated with industry trends and best practices in data engineering.
Technical Skills
- ETL tools: Informatica PowerCenter, Informatica Cloud
- Big Data technologies: Hadoop (HDFS, MapReduce, Hive, Pig)
- Cloud platforms: Snowflake
- Database management: SQL, NoSQL databases
- Scripting languages: Python, shell scripting
- Data modeling: star schema, snowflake schema
Nice-to-Have
- Experience with data visualization tools (e.g., Tableau, Power BI)
- Familiarity with Apache Spark
- Knowledge of data lakes and data mesh architecture
Experience
- Total experience: 7-10 years
- Relevant experience: minimum of 5-9 years in ETL development or similar roles
- Proven experience with Hadoop and data warehousing solutions, particularly Snowflake
- Experience working with large datasets and performance tuning of ETL processes
Day-to-Day Activities
- Design and implement ETL workflows based on business requirements.
- Monitor ETL jobs and troubleshoot any issues that arise.
- Collaborate with data analysts to provide data insights.
- Conduct code reviews and ensure best practices are followed.
- Participate in agile ceremonies (sprint planning, stand-ups, retrospectives).
- Provide support during data migration and integration projects.
Qualifications
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in ETL tools or data engineering (e.g., Informatica, Snowflake certifications) are a plus.
Soft Skills
- Strong analytical and problem-solving skills.
- Effective communication and collaboration skills.
- Attention to detail and commitment to delivering high-quality results.
- Ability to work independently and in a team-oriented environment.
- Flexibility to adapt to changing priorities and technologies.
SYNECHRON'S DIVERSITY & INCLUSION STATEMENT
Diversity & Inclusion are fundamental to our culture, and Synechron is proud to be an equal opportunity workplace and an affirmative action employer. Our Diversity, Equity, and Inclusion (DEI) initiative "Same Difference" is committed to fostering an inclusive culture promoting equality, diversity, and an environment that is respectful to all. We strongly believe that a diverse workforce helps build stronger, successful businesses as a global company. We encourage applicants from diverse backgrounds, races, ethnicities, religions, ages, marital statuses, genders, sexual orientations, or disabilities to apply. We empower our global workforce by offering flexible workplace arrangements, mentoring, internal mobility, learning and development programs, and more.
All employment decisions at Synechron are based on business needs, job requirements, and individual qualifications, without regard to the applicant's gender, gender identity, sexual orientation, race, ethnicity, disabled or veteran status, or any other characteristic protected by law.
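The star-schema data modeling this posting asks for centers on joining a fact table to its dimensions. A minimal sketch using Python's stdlib sqlite3 (table names, columns, and data are illustrative; a Snowflake or Teradata warehouse would use the same query shape in its own dialect):

```python
import sqlite3

# Minimal star schema: one fact table keyed to one dimension table.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE dim_product (product_id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE fact_sales  (product_id INTEGER, qty INTEGER);
    INSERT INTO dim_product VALUES (1, 'books'), (2, 'toys');
    INSERT INTO fact_sales  VALUES (1, 3), (1, 2), (2, 7);
""")

# The canonical star-schema query: join fact to dimension, aggregate by attribute.
rows = conn.execute("""
    SELECT d.category, SUM(f.qty)
    FROM fact_sales f JOIN dim_product d USING (product_id)
    GROUP BY d.category ORDER BY d.category
""").fetchall()
```

A snowflake schema differs only in that the dimension itself is normalized into further lookup tables, adding more joins to the same pattern.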

Posted 3 months ago

Apply

5 - 7 years

11 - 21 Lacs

Mumbai

Work from Office


Looking for an immediate joiner in Mumbai (work from office).
Roles and Responsibilities:
- Programming skills: knowledge of statistical programming languages like R and Python, and database query languages like SQL, Hive, and Pig, is desirable. Familiarity with Scala, Java, or C++ is an added advantage.
- Data mining, i.e. extracting usable data from valuable data sources.
- Carrying out the preprocessing of structured and unstructured data.
- Enhancing data collection procedures to include all relevant information for developing analytic systems.
- Processing, cleansing, and validating the integrity of data to be used for analysis.
- Analyzing large amounts of information to find patterns and solutions.
- Data wrangling: proficiency in handling imperfections in data is an important aspect of the data scientist's job.
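"Handling imperfections in data", the data-wrangling skill this posting closes with, usually means coercing messy values, rejecting unparseable rows, and tracking how many were dropped. A small pure-Python sketch (the field name and sample rows are invented for illustration):

```python
def clean_records(rows):
    """Coerce 'amount' to float, drop rows that fail validation, and count rejects."""
    valid, rejected = [], 0
    for row in rows:
        try:
            # Tolerate thousands separators like "1,200.50".
            amount = float(str(row["amount"]).replace(",", ""))
        except (KeyError, ValueError):
            rejected += 1          # missing field or unparseable value
            continue
        valid.append({**row, "amount": amount})
    return valid, rejected

raw = [{"amount": "1,200.50"}, {"amount": "n/a"}, {}]
valid, rejected = clean_records(raw)
```

Keeping the reject count (rather than silently dropping rows) is what lets a pipeline report data-quality metrics downstream.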

Posted 3 months ago

Apply

5 - 9 years

10 - 14 Lacs

Mumbai

Work from Office


REQUIREMENTS
- Excellent analytical and problem-solving skills, the ability to understand complex problems, and the ability to generate appropriate technical solutions.
- Knowledge of AI and ML technology frameworks and solutions is preferred.
- Knowledge of different data security standards (e.g. ISO27K) and regulatory requirements (e.g. PCI-DSS, GDPR, etc.).
- Excellent skills in NoSQL and Oracle, preferably in a UNIX/Linux environment.
- Knowledge of database modeling and optimization techniques; must be able to map business requirements to database models/ERDs.
- At least 3 years of experience in handling database server security implementation, high-availability solutions, backup & recovery, performance tuning & monitoring, capacity planning, mirroring, and clustering.
- Maintain database development guidelines and standards.
- Knowledge of reporting/business intelligence databases and tools; data visualization, data migration, and data modeling.
- DBMS software, including SQL Server; database and cloud computing design, architectures, and data lakes.
- Working knowledge of Hadoop technologies like Pig, Hive, and MapReduce.
- Applied mathematics and statistics.
- Expert knowledge of Oracle Database and working knowledge of Microsoft SQL Server, MySQL, PostgreSQL, and IBM Db2.
RESPONSIBILITIES
- Collaborate with systems architects, security architects, business owners, and business analysts to understand business requirements.
- Develop and document database architectural strategies at the modeling, design, and implementation stages to address business requirements.
- Design and document databases to support business applications, ensuring system scalability, security, efficiency, compliance, performance, and reliability.
- Design innovative data services solutions using SQL and NoSQL, perform text analytics, and be capable of real-time analysis of big data, while complying with all applicable data security and privacy requirements, such as GDPR.
- Lead engagements with OEMs of SQL and NoSQL providers, and provide expert knowledge and troubleshooting skills for incident resolution.
QUALIFICATIONS
- Bachelor's degree in Computer Science, Information Technology, or a related field.
- Relevant certifications in database administration or related areas.
OTHER ATTRIBUTES
- Strong problem-solving skills and attention to detail.
- Excellent communication and interpersonal skills.
- Ability to work independently and in a team environment.
- Willingness to learn and adapt to new technologies.
LOCATION
Mumbai

Posted 3 months ago

Apply

10 - 15 years

12 - 17 Lacs

Hyderabad

Work from Office


- Minimum 10 years of experience in design, architecture, or development in analytics and data warehousing.
- Experience in solution design, solution governance, and implementing end-to-end Big Data solutions using the Hadoop ecosystem (Hive, HDFS, Pig, HBase, Flume, Kafka, Sqoop, YARN, Impala).
- Ability to produce semantic, conceptual, logical, and physical data models using data modeling techniques such as Data Vault, dimensional modeling, and 3NF.
- Ability to design data warehousing and enterprise analytics solutions using Teradata or other relevant data platforms.
- Demonstrated expertise in design patterns (FSLDM, IBM IFW DW) and data modeling frameworks, including dimensional, star, and non-dimensional schemas.
- Commendable experience in consistently driving cost-effective and technologically feasible solutions, while steering solution decisions across the group to meet both operational and strategic goals, is essential.
- The ability to positively influence the adoption of new products, solutions, and processes, in alignment with the existing information architecture design, is desirable.
- Analytics and data/BI architecture appreciation and broad experience across all technology disciplines, including project management, IT strategy development, and business process, information, and application architecture.
- Extensive experience with Teradata data warehouses and Big Data platforms, both on-premises and in the cloud.
- Extensive experience in large enterprise environments handling large volumes of data with high service level agreements across various business functions/units.
- Experience leading discussions and presentations, and driving decisions across groups of stakeholders.

Posted 3 months ago

Apply

4 - 9 years

10 - 19 Lacs

Pune

Work from Office

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm. Role: Data Scientist Experience: 4 to 6 Yrs Location: Hyd, Pune, Gurgaon Job Summary: We are seeking a highly skilled and experienced Data Scientist with a strong background in machine learning, computer vision, natural language processing (NLP), generative artificial intelligence (AI), and cloud technologies. As a Data Scientist, you will be working for a client and driving the development and implementation of advanced data models and algorithms to extract meaningful insights from complex datasets. You will work closely with cross-functional teams, including engineers, researchers, and product managers, to design and deploy cutting-edge AI solutions that address our business needs. The ideal candidate will have a minimum of 3 years of real hands-on experience in machine learning, with a proven track record of successfully delivering projects in computer vision, NLP, generative AI, and at least one cloud platform. Responsibilities: Drive the end-to-end development and deployment of machine learning models, from data collection and preprocessing to model training and evaluation, leveraging cloud-based infrastructure and services. Collaborate with cross-functional teams to understand business requirements and translate them into data science projects that deliver tangible value. 
Develop and implement advanced algorithms and models for computer vision, NLP, and generative AI, using techniques such as deep learning, reinforcement learning, and natural language understanding, while leveraging cloud-based machine learning platforms. Conduct thorough exploratory data analysis and feature engineering to extract meaningful insights and identify key patterns within complex datasets. Stay up-to-date with the latest advancements in machine learning, computer vision, NLP, generative AI, and cloud technologies, and apply the latest research findings to enhance existing models and algorithms. Evaluate and select appropriate cloud-based tools, frameworks, and services to support data science initiatives, ensuring scalability, reliability, and cost-efficiency. Collaborate with cloud engineering teams to optimize the deployment and management of machine learning models in cloud environments. Communicate complex technical concepts and findings to both technical and non-technical stakeholders through clear and concise presentations and reports. Qualifications: Bachelor's or master's degree in computer science, data science, or a related field. Speciality in Statistics is mandatory, a Ph.D. is a plus. Minimum of 3 years of experience in machine learning, with a focus on computer vision, NLP, generative AI, and cloud technologies. Strong expertise in programming languages such as Python, as well as proficiency in relevant libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Proven experience in developing and implementing machine learning models for computer vision tasks, including object detection, image segmentation, and image classification. Deep understanding of NLP techniques, such as natural language understanding, sentiment analysis, named entity recognition, and text classification. Familiarity with generative AI models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs). 
Solid knowledge of deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers. Experience working with large-scale datasets and data preprocessing techniques. Proficiency in data visualization and exploratory data analysis. Strong analytical and problem-solving skills, with the ability to think critically and creatively to solve complex challenges. Experience with at least one cloud platform, such as AWS, Azure, or Google Cloud, including knowledge of cloud-based machine learning services and infrastructure. Excellent communication and interpersonal skills, with the ability to collaborate effectively with both technical and non-technical stakeholders.
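As a toy illustration of the text-classification task listed among the NLP techniques above, here is a minimal nearest-neighbour classifier over bag-of-words vectors using only the standard library. The training sentences and labels are invented; real work of the kind this role describes would use frameworks such as TensorFlow, PyTorch or scikit-learn:

```python
import math
from collections import Counter

# Invented toy training data: (text, label).
train = [
    ("great product loved it", "pos"),
    ("excellent quality very happy", "pos"),
    ("terrible waste of money", "neg"),
    ("awful broke in a day", "neg"),
]

def vectorize(text):
    """Bag-of-words term counts (a stand-in for a learned embedding)."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def classify(text):
    """Label of the most similar training example (1-nearest-neighbour)."""
    vec = vectorize(text)
    best = max(train, key=lambda ex: cosine(vec, vectorize(ex[0])))
    return best[1]

print(classify("loved the quality"))   # pos
print(classify("terrible and awful"))  # neg
```

The same pipeline shape (vectorize, score, predict) carries over when the bag-of-words vectors are replaced by learned embeddings and the nearest-neighbour rule by a trained model.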

Posted 3 months ago

Apply

4 - 9 years

10 - 19 Lacs

Gurgaon

Work from Office

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm. Role: Data Scientist Experience: 4 to 6 Yrs Location: Hyd, Pune, Gurgaon Job Summary: We are seeking a highly skilled and experienced Data Scientist with a strong background in machine learning, computer vision, natural language processing (NLP), generative artificial intelligence (AI), and cloud technologies. As a Data Scientist, you will be working for a client and driving the development and implementation of advanced data models and algorithms to extract meaningful insights from complex datasets. You will work closely with cross-functional teams, including engineers, researchers, and product managers, to design and deploy cutting-edge AI solutions that address our business needs. The ideal candidate will have a minimum of 3 years of real hands-on experience in machine learning, with a proven track record of successfully delivering projects in computer vision, NLP, generative AI, and at least one cloud platform. Responsibilities: Drive the end-to-end development and deployment of machine learning models, from data collection and preprocessing to model training and evaluation, leveraging cloud-based infrastructure and services. Collaborate with cross-functional teams to understand business requirements and translate them into data science projects that deliver tangible value. 
Develop and implement advanced algorithms and models for computer vision, NLP, and generative AI, using techniques such as deep learning, reinforcement learning, and natural language understanding, while leveraging cloud-based machine learning platforms. Conduct thorough exploratory data analysis and feature engineering to extract meaningful insights and identify key patterns within complex datasets. Stay up-to-date with the latest advancements in machine learning, computer vision, NLP, generative AI, and cloud technologies, and apply the latest research findings to enhance existing models and algorithms. Evaluate and select appropriate cloud-based tools, frameworks, and services to support data science initiatives, ensuring scalability, reliability, and cost-efficiency. Collaborate with cloud engineering teams to optimize the deployment and management of machine learning models in cloud environments. Communicate complex technical concepts and findings to both technical and non-technical stakeholders through clear and concise presentations and reports. Qualifications: Bachelor's or master's degree in computer science, data science, or a related field. Speciality in Statistics is mandatory, a Ph.D. is a plus. Minimum of 3 years of experience in machine learning, with a focus on computer vision, NLP, generative AI, and cloud technologies. Strong expertise in programming languages such as Python, as well as proficiency in relevant libraries and frameworks (e.g., TensorFlow, PyTorch, scikit-learn). Proven experience in developing and implementing machine learning models for computer vision tasks, including object detection, image segmentation, and image classification. Deep understanding of NLP techniques, such as natural language understanding, sentiment analysis, named entity recognition, and text classification. Familiarity with generative AI models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs). 
Solid knowledge of deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers. Experience working with large-scale datasets and data preprocessing techniques. Proficiency in data visualization and exploratory data analysis. Strong analytical and problem-solving skills, with the ability to think critically and creatively to solve complex challenges. Experience with at least one cloud platform, such as AWS, Azure, or Google Cloud, including knowledge of cloud-based machine learning services and infrastructure. Excellent communication and interpersonal skills, with the ability to collaborate effectively with both technical and non-technical stakeholders.

Posted 3 months ago

Apply

6 - 11 years

20 - 30 Lacs

Pune

Work from Office

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm. Role: GenAI Solution Architect/Lead Experience: 6+ Yrs Location: Pune Job Position (Title): GenAI Solution Architect/Lead Technical Skill Requirements: LLMs, Python, Cloud, Azure AI/Azure ML, Chatbot, MLOps, LLMOps, Data Science Responsibilities We are seeking a highly skilled and experienced Lead Data Scientist with a strong background in machine learning, computer vision, natural language processing (NLP), generative AI, LLMs and cloud technologies. As the Lead Data Scientist, you will be responsible for leading our data science team for the client and driving the design, architecture, development and implementation of Generative AI-based applications. You will be responsible for providing solutions by continuously keeping up with new developments in this space. You will work closely with cross-functional teams, including engineers, researchers, and product managers, to design and deploy cutting-edge GenAI solutions that address our business needs. The ideal candidate will have a minimum of 5 years of real hands-on experience in machine learning/LLMs with expert-level Python coding skills, and a proven track record of successfully delivering projects in computer vision, NLP, generative AI, and at least one cloud platform. Responsibilities: Drive the end-to-end development and deployment of GenAI-based applications, instrumenting model tuning and evaluation, and managing model drift, leveraging cloud-based infrastructure and services. 
Collaborate with cross-functional teams to understand business requirements and translate them into data science projects that deliver tangible value. Stay up-to-date with the latest advancements in chat/voice bots, NLP, generative AI, and cloud technologies, and apply the latest research findings to enhance existing models and algorithms. Evaluate and select appropriate cloud-based tools, frameworks, and services to support data science initiatives, ensuring scalability, reliability, and cost-efficiency. Collaborate with cloud engineering teams to optimize the deployment and management of machine learning models in cloud environments. Communicate complex technical concepts and findings to both technical and non-technical stakeholders through clear and concise presentations and reports. Required Skills: Bachelor's or master's degree in computer science, data science, or a related field. Minimum of 5 years of experience in machine learning, with a focus on chatbots, NLP, generative AI, and cloud technologies. Strong expertise in programming languages such as Python, as well as proficiency in relevant libraries and frameworks. Proven experience in developing and implementing LLM-based applications. Deep understanding of GenAI techniques, such as RAG, agent-based architectures, prompt engineering, etc. Familiarity with generative AI models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs). Solid knowledge of deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, is a plus. Experience working with large-scale datasets and data pre-processing techniques. Strong analytical and problem-solving skills, with the ability to think critically and creatively to solve complex challenges. Experience with at least one cloud platform, such as AWS, Azure, or Google Cloud, including knowledge of cloud-based machine learning services and infrastructure. Azure preferred. 
Excellent communication and interpersonal skills, with the ability to collaborate effectively with both technical and non-technical stakeholders.
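The RAG (retrieval-augmented generation) technique named in the skills above can be sketched minimally: retrieve the context most relevant to a query, then prepend it to the prompt sent to an LLM. Everything below — the documents, the bag-of-words stand-in for an embedding model, and the prompt template — is an invented illustration, not a production design; a real system would use dense embeddings, a vector store, and an actual model call:

```python
import math
from collections import Counter

# Toy document store; in practice these would be chunked documents
# indexed in a vector database.
docs = [
    "Refunds are processed within 7 business days.",
    "Support is available Monday to Friday, 9am to 6pm IST.",
    "Premium plans include priority onboarding.",
]

def embed(text):
    """Stand-in for a real embedding model: bag-of-words counts."""
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query, k=1):
    """Top-k documents by similarity to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query):
    # Retrieved context is prepended so the model answers from it;
    # the call to an actual LLM is deliberately omitted here.
    context = "\n".join(retrieve(query))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_prompt("How long do refunds take?"))
```

Agent-based architectures extend this loop by letting the model decide which retrieval or tool call to make next, but the retrieve-then-prompt core stays the same.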

Posted 3 months ago

Apply

6 - 11 years

20 - 30 Lacs

Gurgaon

Work from Office

About the Company: Headquartered in California, U.S.A., GSPANN provides consulting and IT services to global clients. We help clients transform how they deliver business value by helping them optimize their IT capabilities, practices, and operations with our experience in retail, high-technology, and manufacturing. With five global delivery centers and 1900+ employees, we provide the intimacy of a boutique consultancy with the capabilities of a large IT services firm. Role: GenAI Solution Architect/Lead Experience: 6+ Yrs Location: Pune Job Position (Title): GenAI Solution Architect/Lead Technical Skill Requirements: LLMs, Python, Cloud, Azure AI/Azure ML, Chatbot, MLOps, LLMOps, Data Science Responsibilities We are seeking a highly skilled and experienced Lead Data Scientist with a strong background in machine learning, computer vision, natural language processing (NLP), generative AI, LLMs and cloud technologies. As the Lead Data Scientist, you will be responsible for leading our data science team for the client and driving the design, architecture, development and implementation of Generative AI-based applications. You will be responsible for providing solutions by continuously keeping up with new developments in this space. You will work closely with cross-functional teams, including engineers, researchers, and product managers, to design and deploy cutting-edge GenAI solutions that address our business needs. The ideal candidate will have a minimum of 5 years of real hands-on experience in machine learning/LLMs with expert-level Python coding skills, and a proven track record of successfully delivering projects in computer vision, NLP, generative AI, and at least one cloud platform. Responsibilities: Drive the end-to-end development and deployment of GenAI-based applications, instrumenting model tuning and evaluation, and managing model drift, leveraging cloud-based infrastructure and services. 
Collaborate with cross-functional teams to understand business requirements and translate them into data science projects that deliver tangible value. Stay up-to-date with the latest advancements in chat/voice bots, NLP, generative AI, and cloud technologies, and apply the latest research findings to enhance existing models and algorithms. Evaluate and select appropriate cloud-based tools, frameworks, and services to support data science initiatives, ensuring scalability, reliability, and cost-efficiency. Collaborate with cloud engineering teams to optimize the deployment and management of machine learning models in cloud environments. Communicate complex technical concepts and findings to both technical and non-technical stakeholders through clear and concise presentations and reports. Required Skills: Bachelor's or master's degree in computer science, data science, or a related field. Minimum of 5 years of experience in machine learning, with a focus on chatbots, NLP, generative AI, and cloud technologies. Strong expertise in programming languages such as Python, as well as proficiency in relevant libraries and frameworks. Proven experience in developing and implementing LLM-based applications. Deep understanding of GenAI techniques, such as RAG, agent-based architectures, prompt engineering, etc. Familiarity with generative AI models, such as generative adversarial networks (GANs) and variational autoencoders (VAEs). Solid knowledge of deep learning architectures, including convolutional neural networks (CNNs), recurrent neural networks (RNNs), and transformers, is a plus. Experience working with large-scale datasets and data pre-processing techniques. Strong analytical and problem-solving skills, with the ability to think critically and creatively to solve complex challenges. Experience with at least one cloud platform, such as AWS, Azure, or Google Cloud, including knowledge of cloud-based machine learning services and infrastructure. Azure preferred. 
Excellent communication and interpersonal skills, with the ability to collaborate effectively with both technical and non-technical stakeholders.

Posted 3 months ago

Apply

4 - 8 years

6 - 10 Lacs

Chennai, Bengaluru, Hyderabad

Work from Office

You want more out of a career. A place to share your ideas freely even if they're daring or different. Where the true you can learn, grow, and thrive. At Verizon, we power and empower how people live, work and play by connecting them to what brings them joy. We do what we love, driving innovation, creativity, and impact in the world. Our V Team is a community of people who anticipate, lead, and believe that listening is where learning begins. In crisis and in celebration, we come together, lifting our communities and building trust in how we show up, everywhere & always. Want in? Join the V Team Life. What you'll be doing... You will be responsible for migrating our feature-set preparation code from BTEQ scripts to Hive-compatible HQL or Spark in an optimized way. You will incorporate possible automation techniques to hasten this migration project. You will make sure the migration is delivered in phases, on time, with high code quality and standards followed. Migrating BTEQ scripts to HQL or Spark. Documenting the complete migration end to end in the VZ Grid. Identifying, designing, and implementing internal process improvements by automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc. Building the infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources using SQL and big data technologies. Working with partners, including the Executive, Product, Data and Design teams, to assist with data-related technical issues and support their data infrastructure needs. Rewriting our big data ETL pipeline from BTEQ to HQL/Spark, to create datasets for our modeling efforts. Wrangling raw data from large, diverse data sets from our distribution partners. Mentoring team members on a need basis. What we're looking for... You are excited to work in a cloud environment, supporting development and deployment in the Verizon Grid. 
You are self-directed and comfortable supporting the data needs of multiple teams, systems and products. You are excited by the prospect of optimizing or even re-designing the architecture to support our next generation of products and dataset creation for modelling purposes. You'll need to have: Bachelor's degree or four or more years of work experience. Four or more years of relevant work experience. Experience with big data tools: Hadoop ecosystem (Hive, Pig, Oozie, Spark, Kafka, Elasticsearch, Kibana). Working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working with a variety of databases. Experience in scripting languages: Unix shell scripts, Python or Scala. Even better if you have one or more of the following: Master's degree. Experience with relational SQL and NoSQL databases, including Postgres and Cassandra. Experience with data pipeline and workflow management tools: NiFi. Experience with stream-processing systems: Spark Streaming, Storm, etc. Experience transforming complex data into easily understandable and actionable information. Experience working in a fast-paced environment. Ability to quickly adapt to changing priorities.
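One automation technique of the kind the role describes for a BTEQ-to-HQL migration is a mechanical pre-pass over each script: strip BTEQ dot-commands (session control with no Hive equivalent) and expand Teradata keyword abbreviations. The sketch below covers only those two rules; a real migration also has to handle QUALIFY clauses, SET tables, and Teradata-specific functions, and the sample script is invented:

```python
import re

# Teradata abbreviations expanded to their ANSI keywords.
ABBREVIATIONS = {r"\bSEL\b": "SELECT", r"\bINS\b": "INSERT", r"\bDEL\b": "DELETE"}

def bteq_to_hql(script):
    """Drop BTEQ dot-commands and expand keyword abbreviations."""
    lines = []
    for line in script.splitlines():
        if line.lstrip().startswith("."):  # .LOGON, .EXPORT, .IF, etc.
            continue
        for pattern, keyword in ABBREVIATIONS.items():
            line = re.sub(pattern, keyword, line, flags=re.IGNORECASE)
        lines.append(line)
    return "\n".join(lines)

bteq = """.LOGON host/user,password;
SEL cust_id, SUM(amount) FROM sales GROUP BY cust_id;
.LOGOFF;"""
print(bteq_to_hql(bteq))
# SELECT cust_id, SUM(amount) FROM sales GROUP BY cust_id;
```

A pre-pass like this handles the bulk of boilerplate differences, leaving engineers to focus on the semantic gaps (windowing, update semantics) where BTEQ and HQL genuinely diverge.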

Posted 3 months ago

Apply

5 - 10 years

10 - 20 Lacs

Chennai

Work from Office

Min 5-8 years of experience in Hadoop/big data technologies. Hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr). Hands-on experience with Python/PySpark. Design, develop, and optimize ETL pipelines using Python and PySpark to process and transform large-scale datasets, ensuring performance and scalability on big data platforms. Implement big data solutions for retail banking use cases such as risk analysis, management reporting (time series, vintage curves, executive summary) and regulatory reporting, while maintaining data accuracy and compliance standards. Collaborate with cross-functional teams to integrate data from various sources, troubleshoot production issues, and ensure efficient, reliable data processing operations.
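The vintage-curve reporting mentioned above groups loans by origination cohort and tracks the cumulative default rate at each month-on-book. A minimal stdlib sketch (the loan records are invented; at scale this becomes a PySpark group-by over the loan book):

```python
from collections import defaultdict

# Toy loan records: origination cohort plus month-on-book of default
# (None = never defaulted). Invented for illustration.
loans = [
    {"vintage": "2023-01", "default_mob": 3},
    {"vintage": "2023-01", "default_mob": None},
    {"vintage": "2023-01", "default_mob": None},
    {"vintage": "2023-02", "default_mob": 2},
    {"vintage": "2023-02", "default_mob": None},
]

def vintage_curve(records, max_mob=4):
    """Cumulative default rate per cohort at months-on-book 1..max_mob."""
    by_vintage = defaultdict(list)
    for r in records:
        by_vintage[r["vintage"]].append(r)
    curve = {}
    for vintage, cohort in by_vintage.items():
        n = len(cohort)
        curve[vintage] = [
            sum(1 for r in cohort
                if r["default_mob"] is not None and r["default_mob"] <= mob) / n
            for mob in range(1, max_mob + 1)
        ]
    return curve

print(vintage_curve(loans))
```

Plotting each cohort's list against month-on-book gives the familiar family of vintage curves, letting risk teams compare how newer originations season relative to older ones.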

Posted 3 months ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies