0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Req ID: 327063. NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Python/PySpark/Apache Spark Developer to join our team in Hyderabad, Telangana (IN-TG), India (IN).

"At NTT DATA, we know that with the right people on board, anything is possible. The quality, integrity, and commitment of our employees are key factors in our company's growth, market presence and our ability to help our clients stay a step ahead of the competition. By hiring the best people and helping them grow both professionally and personally, we ensure a bright future for NTT DATA and for the people who work here."

NTT DATA Services currently seeks a Python Developer to join our team in Hyderabad to design and build ETL solutions, with experience in data engineering and data modelling at large scale in both batch and real-time environments.

Skills required: Python, PySpark, Apache Spark, Unix shell scripting, GCP, BigQuery, MongoDB, Kafka event streaming, API development, CI/CD.

For Software Engineering 3: 6+ years of experience. Mandatory: Apache Spark with Python, PySpark, GCP with BigQuery, databases. Secondary mandate: Ab Initio ETL. Good to have: Unix shell scripting and Kafka event streaming.

About NTT DATA: NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications. NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
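For a sense of what the mandated "Apache Spark with Python, GCP with BigQuery" stack looks like in practice, here is a minimal batch-ETL sketch. The project, dataset, table, and bucket names are hypothetical, and the spark-bigquery connector is assumed to be on the classpath; this is an illustration of the skill set, not the employer's actual pipeline.

```python
# Minimal PySpark batch job sketch: read from BigQuery, aggregate, write back.
# Table names, the project ID, and the GCS staging bucket are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("orders-daily-aggregate").getOrCreate()

# Requires the spark-bigquery connector on the classpath.
orders = (
    spark.read.format("bigquery")
    .option("table", "my-project.sales.orders")  # hypothetical table
    .load()
)

daily = (
    orders
    .filter(F.col("status") == "COMPLETED")
    .groupBy(F.to_date("created_at").alias("order_date"))
    .agg(F.sum("amount").alias("total_amount"),
         F.count("*").alias("order_count"))
)

(
    daily.write.format("bigquery")
    .option("table", "my-project.sales.daily_order_totals")
    .option("temporaryGcsBucket", "my-temp-bucket")  # connector staging bucket
    .mode("overwrite")
    .save()
)
```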
Posted 5 days ago
0.0 - 4.0 years
9 - 13 Lacs
Pune
Work from Office
Project description
You will be working in a global team that manages and performs a global technical control. You'll be joining the Asset Management team, which looks after the asset management data foundation and operates a set of in-house developed tools. As an IT engineer you'll play an important role in ensuring the development methodology is followed, and lead technical design discussions with the architects. Our culture centers around partnership with our businesses, transparency, accountability and empowerment, and passion for the future.

Responsibilities
Design, develop, and maintain scalable data solutions using Starburst. Collaborate with cross-functional teams to integrate Starburst with existing data sources and tools. Optimize query performance and ensure data security and compliance. Implement monitoring and alerting systems for data platform health. Stay updated with the latest developments in data engineering and analytics.

Skills
Must have: Bachelor's or Master's degree in a related technical field, or equivalent related professional experience. Prior experience as a Software Engineer applying new engineering principles to improve existing systems, including leading complex, well-defined projects. Strong knowledge of big data languages, including SQL, Hive, Spark/PySpark, Presto, and Python. Strong knowledge of big data platforms, such as the Apache Hadoop ecosystem, AWS EMR, Qubole, or Trino/Starburst. Good knowledge and experience in cloud platforms such as AWS, GCP, or Azure. Continuous learner with the ability to apply previous experience and knowledge to quickly master new technologies. Demonstrates the ability to select among available technologies to implement and solve for need. Able to understand and design moderately complex systems. Understanding of testing and monitoring tools. Ability to test, debug, and fix issues within established SLAs. Experience with data visualization tools (e.g., Tableau, Power BI). Understanding of data governance and compliance standards.

Nice to have: Data Architecture & Engineering: design and implement efficient and scalable data warehousing solutions using Azure Databricks and Microsoft Fabric. Business Intelligence & Data Visualization: create insightful Power BI dashboards to help drive business decisions.

Other Languages: English (C1 Advanced). Seniority: Senior
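Since the role centers on Starburst/Trino, here is a hedged sketch of querying a cluster with the trino Python client (pip install trino). The host, catalog, schema, and table names are hypothetical; a real deployment would also configure authentication.

```python
# Sketch: run a federated SQL query against a Starburst/Trino coordinator.
import trino

conn = trino.dbapi.connect(
    host="starburst.example.com",  # hypothetical coordinator host
    port=8080,
    user="data_engineer",
    catalog="hive",
    schema="analytics",
)

cur = conn.cursor()
# Trino plans the query and pushes work down to the underlying source.
cur.execute(
    "SELECT region, count(*) AS assets "
    "FROM asset_inventory GROUP BY region ORDER BY assets DESC"
)
for region, assets in cur.fetchall():
    print(region, assets)
```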
Posted 5 days ago
5.0 - 10.0 years
11 - 15 Lacs
Pune
Work from Office
Project description
You'll be working in the GM Business Analytics team located in Pune. The successful candidate will be a member of the global Distribution team, which has team members in London and Pune. We work as part of a global team providing analytical solutions for IB distribution/sales people. Solutions deployed should be extensible globally with minimal localization.

Responsibilities
Are you passionate about data and analytics? Are you keen to be part of the journey to modernize a data warehouse/analytics suite of applications? Do you take pride in the quality of software delivered for each development iteration? We're looking for someone like that to join us and be a part of a high-performing team on a high-profile project: solve challenging problems in an elegant way; master state-of-the-art technologies; build a highly responsive and fast-updating application in an Agile & Lean environment; apply best development practices and effectively utilize technologies; work across the full delivery cycle to ensure high-quality delivery; write high-quality code and adhere to coding standards; work collaboratively with diverse teams of technologists.

You are: Curious and collaborative, comfortable working independently as well as in a team. Focused on delivery to the business. Strong in analytical skills: for example, the candidate must understand the key dependencies among existing systems in terms of the flow of data among them, and it is essential that the candidate learns to understand the 'big picture' of how the IB industry/business functions. Able to quickly absorb new terminology and business requirements. Already strong in analytical tools, technologies, platforms, etc.; the candidate must also demonstrate a strong desire for learning and self-improvement. Open to learning home-grown technologies, supporting current-state infrastructure, and helping drive future-state migrations. Imaginative and creative with newer technologies. Able to accurately and pragmatically estimate the development effort required for specific objectives.

You will have the opportunity to work under minimal supervision to understand local and global system requirements, and to design and implement the required functionality, bug fixes, and enhancements. You will be responsible for components that are developed across the whole team and deployed globally. You will also have the opportunity to provide third-line support to the application's global user community, which will include assisting dedicated support staff and liaising with members of other development teams directly, some local and some remote.

Skills
Must have: A bachelor's or master's degree, preferably in Information Technology or a related field (computer science, mathematics, etc.), focusing on data engineering. 5+ years of relevant experience as a data engineer in Big Data. Strong knowledge of programming languages (Python/Scala) and Big Data technologies (Spark, Databricks or equivalent). Strong experience in executing complex data analysis and running complex SQL/Spark queries. Strong experience in building complex data transformations in SQL/Spark. Strong knowledge of database technologies. Strong knowledge of Azure Cloud is advantageous. Good understanding and experience with Agile methodologies and delivery. Strong communication skills with the ability to build partnerships with stakeholders. Strong analytical, data management and problem-solving skills.
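To make the "complex data transformations in SQL/Spark" requirement concrete, here is a minimal PySpark sketch of a common pattern: keeping only the latest row per key with a window function. The paths and column names are invented for illustration.

```python
# Sketch: deduplicate records by keeping the latest row per trade_id.
from pyspark.sql import SparkSession, Window
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("dedup-example").getOrCreate()

trades = spark.read.parquet("/data/raw/trades")  # hypothetical path

latest_per_trade = Window.partitionBy("trade_id").orderBy(F.col("updated_at").desc())

deduped = (
    trades
    .withColumn("rn", F.row_number().over(latest_per_trade))
    .filter(F.col("rn") == 1)
    .drop("rn")
)

deduped.write.mode("overwrite").parquet("/data/curated/trades")
```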
Nice to have: Experience working with the QlikView tool; understanding of QlikView scripting and data models. Other Languages: English (C1 Advanced). Seniority: Senior
Posted 5 days ago
5.0 - 9.0 years
9 - 13 Lacs
Gurugram
Work from Office
At Capgemini Invent, we believe difference drives change. As inventive transformation consultants, we blend our strategic, creative and scientific capabilities, collaborating closely with clients to deliver cutting-edge solutions. Join us to drive transformation tailored to our clients' challenges of today and tomorrow. Informed and validated by science and data. Superpowered by creativity and design. All underpinned by technology created with purpose.

Your role
As a Senior Data Scientist, you are expected to develop and implement Artificial Intelligence based solutions across various disciplines for the Intelligent Industry vertical of Capgemini Invent. You are expected to work as an individual contributor or with a team to help design and develop ML/NLP models as per requirements. You will work closely with the Product Owner, Systems Architect and other key stakeholders, from conceptualization through implementation. You should take ownership while understanding the client requirements, the data to be used, security and privacy needs, and the infrastructure to be used for development and implementation. The candidate will be responsible for executing data science projects independently to deliver business outcomes, and is expected to demonstrate domain expertise, develop and execute program plans, and proactively solicit feedback from stakeholders to identify improvement actions. This role requires a strong technical background, excellent problem-solving skills, and the ability to work collaboratively with stakeholders from different functional and business teams. The role also requires the candidate to collaborate on ML asset creation and be eager to learn and to impart training to fellow data science professionals. We expect thought leadership from the candidate, especially on proposing to build an ML/NLP asset based on expected industry requirements. Experience in building industry-specific (e.g., Manufacturing, R&D, Supply Chain, Life Sciences), production-ready AI models using microservices and web services is a plus.

Programming Languages: Python (NumPy, SciPy, Pandas, Matplotlib, Seaborn). Databases: RDBMS (MySQL, Oracle, etc.), NoSQL stores (HBase, Cassandra, etc.). ML/DL Frameworks: scikit-learn, TensorFlow (Keras), PyTorch; big data ML frameworks: Spark (Spark ML, GraphX), H2O. Cloud: Azure/AWS/GCP.

Your Profile
Predictive and prescriptive modelling using statistical and machine learning algorithms, including but not limited to time series, regression, trees, ensembles, and neural nets (deep and shallow: CNN, LSTM, Transformers, etc.). Experience with open-source OCR engines like Tesseract, speech recognition, computer vision, face recognition, emotion detection, etc. is a plus. Unsupervised learning: market basket analysis, collaborative filtering, dimensionality reduction, with a good understanding of common matrix decomposition approaches like SVD. Various clustering approaches: hierarchical, centroid-based, density-based, distribution-based, and graph-based clustering such as spectral clustering. NLP: information extraction, similarity matching, sentiment analysis, text clustering, semantic analysis, document summarization, context mapping/understanding, intent classification, word embeddings, vector space models; experience with libraries like NLTK, spaCy, and Stanford CoreNLP is a plus. Use of Transformers for NLP, experience with LLMs (e.g., ChatGPT, Llama), retrieval-augmented generation (RAG) with vector stores and frameworks such as LangChain and LangGraph, and building agentic AI applications.
Model Deployment: ML pipeline formation, data security and scrutiny checks, and MLOps for productionizing a built model on-premises and in the cloud.

Required Qualifications
Master's degree in a quantitative field such as Mathematics, Statistics, Machine Learning, Computer Science or Engineering, or a bachelor's degree with relevant experience. Good experience in programming with languages such as Python/Java/Scala and SQL, and experience with data visualization tools like Tableau or Power BI.

Preferred Experience
Experienced in the Agile way of working, managing team effort and tracking it through JIRA. Experience in proposal, RFP, RFQ and pitch creation and delivery to large forums. Experience in POC, MVP, PoV and asset creation with innovative use cases. Experience working in a consulting environment is highly desirable. High-impact client communication is expected. The job may also entail sitting as well as working at a computer for extended periods of time. Candidates should be able to communicate effectively by telephone, email, and face to face.

What you will love about working here
We recognize the significance of flexible work arrangements to provide support. Be it remote work or flexible work hours, you will get an environment to maintain a healthy work-life balance. At the heart of our mission is your career growth. Our array of career growth programs and diverse professions are crafted to support you in exploring a world of opportunities. Equip yourself with valuable certifications in the latest technologies such as Generative AI.
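As a small illustration of the text-clustering work named in the profile above, the following scikit-learn sketch vectorizes a few toy documents with TF-IDF and clusters them with k-means. The documents and cluster count are invented; production NLP pipelines would add preprocessing and evaluation.

```python
# Sketch: TF-IDF vectorization followed by k-means text clustering.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

docs = [
    "machine downtime on line 3 due to bearing failure",
    "replace worn bearing and recalibrate sensors",
    "invoice overdue for supplier payment",
    "supplier payment terms renegotiated this quarter",
]

X = TfidfVectorizer(stop_words="english").fit_transform(docs)
labels = KMeans(n_clusters=2, n_init=10, random_state=42).fit_predict(X)
print(labels)  # e.g. [0 0 1 1]: maintenance vs. finance themes
```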
Posted 5 days ago
2.0 - 5.0 years
4 - 8 Lacs
Kochi
Work from Office
The ability to be a team player. The ability and skill to train other people in procedural and technical topics. Strong communication and collaboration skills. Required education: Bachelor's Degree. Preferred education: Master's Degree. Required technical and professional expertise: Ability to write complex SQL queries; experience in Azure Databricks. Preferred technical and professional experience: Excellent communication and stakeholder management skills.
Posted 5 days ago
6.0 - 9.0 years
4 - 8 Lacs
Gurugram
Work from Office
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure. Work together with data scientists and analysts to understand data needs and create effective data workflows. Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage. Using Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations. Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Monitor and resolve data pipeline problems to guarantee data consistency and availability.

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. Their work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in their software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.
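One ingestion step of the kind described above can be sketched with the azure-storage-blob SDK: landing a raw extract in Blob Storage before downstream transformation. The connection string, container, and blob names are hypothetical.

```python
# Sketch: upload a raw daily extract to an Azure Blob Storage "raw zone".
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
container = service.get_container_client("raw-zone")  # hypothetical container

with open("daily_extract.csv", "rb") as fh:
    # Overwrite any previous landing of the same file.
    container.upload_blob(
        name="sales/2024-01-15/daily_extract.csv",
        data=fh,
        overwrite=True,
    )
```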
Posted 5 days ago
6.0 - 9.0 years
4 - 8 Lacs
Hyderabad
Work from Office
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in Azure. Work together with data scientists and analysts to understand data needs and create effective data workflows. Create and maintain data storage solutions including Azure SQL Database, Azure Data Lake, and Azure Blob Storage. Using Azure Data Factory or comparable technologies, create and maintain ETL (Extract, Transform, Load) operations. Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data. Improve the scalability, efficiency, and cost-effectiveness of data pipelines. Monitor and resolve data pipeline problems to guarantee data consistency and availability.

Works in the area of Software Engineering, which encompasses the development, maintenance and optimization of software solutions/applications:
1. Applies scientific methods to analyse and solve software engineering problems.
2. Is responsible for the development and application of software engineering practice and knowledge, in research, design, development and maintenance.
3. Their work requires the exercise of original thought and judgement and the ability to supervise the technical and administrative work of other software engineers.
4. Builds skills and expertise in their software engineering discipline to reach the standard software engineer skills expectations for the applicable role, as defined in Professional Communities.
5. Collaborates and acts as a team player with other software engineers and stakeholders.
Posted 5 days ago
3.0 - 7.0 years
12 - 16 Lacs
Kochi
Work from Office
We are seeking a highly skilled Advanced Analytics Specialist to join our dynamic team. The successful candidate will be responsible for leveraging advanced analytics techniques to derive actionable insights, inform business decisions, and drive strategic initiatives. This role requires a deep understanding of data analysis, statistical modeling, machine learning, and data visualization. In this role, you will be responsible for architecting and delivering AI solutions using cutting-edge technologies, with a strong focus on foundation models and large language models. You will work closely with customers, product managers, and development teams to understand business requirements and design custom AI solutions that address complex challenges. Experience with tools like GitHub Copilot and Amazon CodeWhisperer is desirable. Success is our passion, and your accomplishments will reflect this, driving your career forward, propelling your team to success, and helping our clients to thrive.

Day-to-Day Duties
Proof of Concept (POC) Development: Develop POCs to validate and showcase the feasibility and effectiveness of the proposed AI solutions. Collaborate with development teams to implement and iterate on POCs, ensuring alignment with customer requirements and expectations. Help showcase the ability of Gen AI code assistants to refactor/rewrite and document code from one language to another, particularly COBOL to Java, through rapid prototypes/POCs. Documentation and Knowledge Sharing: Document solution architectures, design decisions, implementation details, and lessons learned. Create technical documentation, white papers, and best practice guides. Contribute to internal knowledge-sharing initiatives and mentor new team members. Industry Trends and Innovation: Stay up to date with the latest trends and advancements in AI, foundation models, and large language models. Evaluate emerging technologies, tools, and frameworks to assess their potential impact on solution design and implementation.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Develop and implement advanced analytical models and algorithms to solve complex business problems; analyze large datasets to uncover trends, patterns, and insights that drive business performance. Collaborate with cross-functional teams to identify key business challenges and opportunities. Create and maintain data pipelines and workflows to ensure the accuracy and integrity of data. Design and deliver insightful reports and dashboards to communicate findings to stakeholders. Stay up to date with the latest advancements in analytics, machine learning, and data science. Provide technical expertise and mentorship to junior team members.

Qualifications: Bachelor's or Master's degree in Data Science, Statistics, Mathematics, Computer Science, or a related field. Proven experience in advanced analytics, data science, or a similar role. Proficiency in programming languages such as Python, R, or SQL. Experience with data visualization tools like Tableau, Power BI, or similar. Strong understanding of statistical modelling and machine learning algorithms. Excellent analytical, problem-solving, and critical thinking skills. Ability to communicate complex analytical concepts to non-technical stakeholders. Experience with big data technologies (e.g., Hadoop, Spark) is a plus.

Preferred technical and professional experience
Familiarity with cloud-based analytics platforms (e.g., AWS, Azure).
Knowledge of natural language processing (NLP) and deep learning techniques. Experience with project management and agile methodologies
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products and platform, and with customer-facing systems.
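Combining the Spark and Kafka skills listed above, here is a hedged Structured Streaming sketch that consumes a Kafka topic and writes Parquet files. The broker address, topic, and paths are hypothetical, and the Spark-Kafka integration package is assumed to be available.

```python
# Sketch: Structured Streaming ingestion from Kafka into a bronze Parquet zone.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")  # hypothetical broker
    .option("subscribe", "clickstream")                 # hypothetical topic
    .load()
    .select(F.col("value").cast("string").alias("payload"),
            F.col("timestamp"))
)

query = (
    events.writeStream
    .format("parquet")
    .option("path", "/data/bronze/clickstream")
    .option("checkpointLocation", "/chk/clickstream")  # required for recovery
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```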
Posted 5 days ago
2.0 - 5.0 years
13 - 17 Lacs
Hyderabad
Work from Office
As an Associate Software Developer at IBM you will harness the power of data to unveil captivating stories and intricate patterns. You'll contribute to data gathering, storage, and both batch and real-time processing. Collaborating closely with diverse teams, you'll play an important role in deciding the most suitable data management systems and identifying the crucial data required for insightful analysis. As a Data Engineer, you'll tackle obstacles related to database integration and untangle complex, unstructured data sets.

In this role, your responsibilities may include: Implementing and validating predictive models, and creating and maintaining statistical models with a focus on big data, incorporating a variety of statistical and machine learning techniques. Designing and implementing enterprise search applications such as Elasticsearch and Splunk for client requirements. Working in an Agile, collaborative environment, partnering with other scientists, engineers, consultants and database administrators of all backgrounds and disciplines to bring analytical rigor and statistical methods to the challenges of predicting behaviors. Building teams or writing programs to cleanse and integrate data in an efficient and reusable manner, developing predictive or prescriptive models, and evaluating modeling results.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Develop/convert database objects (tables, views, procedures, functions, triggers, etc.) from one database platform to another (Hadoop to GCP). Implement specific data replication mechanisms (CDC, file data transfer, bulk data transfer, etc.). Expose data as APIs. Participate in the modernization roadmap journey. Analyze discovery and analysis outcomes. Lead discovery and analysis workshops/playbacks. Identify application dependencies and source/target database incompatibilities. Analyze the non-functional requirements (security, HA, RTO/RPO, storage, compute, network, performance benchmarks, etc.). Prepare effort estimates, WBS, staffing plan, RACI, RAID, etc. Lead the team in adopting the right tools for various migration and modernization methods.

Preferred technical and professional experience
You thrive on teamwork and have excellent verbal and written communication skills. Ability to communicate with internal and external clients to understand and define business needs, providing analytical solutions. Ability to communicate results to technical and non-technical audiences.
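For the Hadoop-to-GCP migration work described above, a first validation step often queries the migrated tables. Here is a hedged sketch with the google-cloud-bigquery client (pip install google-cloud-bigquery); the project and table names are hypothetical.

```python
# Sketch: verify row counts per source system in a migrated BigQuery table.
from google.cloud import bigquery

client = bigquery.Client(project="my-migration-project")  # hypothetical project

sql = """
    SELECT source_system, COUNT(*) AS row_count
    FROM `my-migration-project.warehouse.customers`
    GROUP BY source_system
"""
for row in client.query(sql).result():
    print(row.source_system, row.row_count)
```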
Posted 5 days ago
6.0 - 8.0 years
5 - 10 Lacs
Kochi
Work from Office
As a consultant at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
6-8 years of overall IT experience with a minimum of 4 years in Python development. Good experience with Python and Spark, writing reusable code and frameworks. Write structured, clean, reusable, and testable code using Python. Good understanding of database design with the ability to write complex SQL queries. Excellent knowledge of Python and API frameworks (Django, Flask). Implement well-designed, high-performance applications for the server side. Knowledge of the threading functions of Python.

Preferred technical and professional experience
Good understanding of database design with the ability to write complex SQL queries. Excellent knowledge of Python and API frameworks (Django, Flask). Implement well-designed, high-performance applications for the server side. Knowledge of the threading functions of Python.
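The posting names Django and Flask for API work; here is a minimal Flask sketch of a server-side endpoint. The routes and payloads are illustrative only, and a real service would back the endpoints with a database.

```python
# Sketch: a tiny Flask API with a health check and a parameterized endpoint.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/api/health")
def health():
    return jsonify(status="ok")

@app.route("/api/items/<int:item_id>")
def get_item(item_id):
    # In a real service this would query the database.
    return jsonify(id=item_id, name=f"item-{item_id}")

if __name__ == "__main__":
    app.run(port=5000)
```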
Posted 5 days ago
5.0 - 10.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Big Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in data engineering activities like creating pipelines/workflows from source to target and implementing solutions that tackle the client's needs. Your primary responsibilities include: Design, build, optimize and support new and existing data models and ETL processes based on our clients' business requirements. Build, deploy and manage data infrastructure that can adequately handle the needs of a rapidly growing data-driven organization. Coordinate data access and security to enable data scientists and analysts to easily access data whenever they need to.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Must have 5+ years of experience in Big Data: Hadoop, Spark, Scala, Python, HBase, Hive. Good to have: AWS (S3, Athena, DynamoDB, Lambda), Jenkins, Git. Developed Python and PySpark programs for data analysis. Good working experience with Python to develop custom frameworks for generating rules (like a rules engine). Developed Python code to gather data from HBase and designed solutions implemented with PySpark. Used Apache Spark DataFrames/RDDs to apply business transformations and Hive context objects to perform read/write operations.

Preferred technical and professional experience
Understanding of DevOps. Experience in building scalable end-to-end data ingestion and processing solutions. Experience with object-oriented and/or functional programming languages, such as Python, Java and Scala.
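The workflow described above (gather data, transform with Spark DataFrames, read/write via Hive) can be sketched as follows. The table and column names are hypothetical, and the HBase source is stubbed as a Hive staging table for simplicity; a real job would read HBase through a connector.

```python
# Sketch: Hive-backed PySpark transformation, reading and writing via metastore.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("hive-transform")
    .enableHiveSupport()   # gives access to the Hive metastore
    .getOrCreate()
)

raw = spark.table("staging.events_raw")  # hypothetical staging table

enriched = (
    raw
    .withColumn("event_date", F.to_date("event_ts"))
    .filter(F.col("event_type").isNotNull())
)

enriched.write.mode("overwrite").saveAsTable("curated.events")
```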
Posted 5 days ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio, including Software and Red Hat.

In your role, you will be responsible for: Working with multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and to work directly with the customer.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer. An ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. End-to-end functional knowledge of the data pipeline/transformation implementations you have done, including the purpose/KPIs for which each data transformation was done.

Preferred technical and professional experience
Experience with AEM core technologies: OSGi services, Apache Sling, the Granite framework, the Java Content Repository API, Java 8+, localization. Familiarity with build tools such as Jenkins and Maven; knowledge of version control tools, especially Git; knowledge of patterns and good practices to design and develop quality, clean code; knowledge of HTML, CSS, JavaScript and jQuery. Familiarity with task management, bug tracking, and collaboration tools like JIRA and Confluence.
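Of the GCP services listed, Pub/Sub lends itself to a compact example. Here is a hedged sketch of publishing a pipeline event with the google-cloud-pubsub client; the project and topic IDs are hypothetical.

```python
# Sketch: publish a JSON status message to a Pub/Sub topic.
from google.cloud import pubsub_v1

publisher = pubsub_v1.PublisherClient()
topic_path = publisher.topic_path("my-gcp-project", "pipeline-events")  # hypothetical

future = publisher.publish(
    topic_path,
    data=b'{"job": "daily_load", "status": "done"}',  # payload must be bytes
)
print("published message id:", future.result())
```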
Posted 5 days ago
3.0 - 7.0 years
11 - 15 Lacs
Mumbai
Work from Office
A Data Platform Engineer specialises in the design, build, and maintenance of cloud-based data infrastructure and platforms for data-intensive applications and services. They develop Infrastructure as Code and manage the foundational systems and tools for efficient data storage, processing, and management. This role involves architecting robust and scalable cloud data infrastructure, including selecting and implementing suitable storage solutions, data processing frameworks, and data orchestration tools. Additionally, a Data Platform Engineer ensures the continuous evolution of the data platform to meet changing data needs and leverage technological advancements, while maintaining high levels of data security, availability, and performance. They are also tasked with creating and managing processes and tools that enhance operational efficiency, including optimising data flow and ensuring seamless data integration, all of which are essential for enabling developers to build, deploy, and operate data-centric applications efficiently.

Grade specific: An expert on the principles and practices associated with data platform engineering, particularly within cloud environments, who demonstrates proficiency in specific technical areas related to cloud-based data infrastructure, automation, and scalability. Key responsibilities encompass: Team Leadership and Management: supervising a team of platform engineers, with a focus on team dynamics and the efficient delivery of cloud platform solutions. Technical Guidance and Decision-Making: providing technical leadership and making pivotal decisions concerning platform architecture, tools, and processes; balancing hands-on involvement with strategic oversight. Mentorship and Skill Development: guiding team members through mentorship, enhancing their technical proficiencies, and nurturing a culture of continual learning and innovation in platform engineering practices. In-Depth Technical Proficiency: possessing a comprehensive understanding of platform engineering principles and practices, and demonstrating expertise in crucial technical areas such as cloud services, automation, and system architecture. Community Contribution: making significant contributions to the development of the platform engineering community, staying informed about emerging trends, and applying this knowledge to drive enhancements in capability.
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Mumbai
Work from Office
Experience with Scala (object-oriented and functional programming). Strong SQL background. Experience in Spark SQL, Hive, and data engineering. SQL experience with data pipelines and data lakes. Strong background in distributed computing.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
SQL experience with data pipelines and data lakes. Strong background in distributed computing. Experience with Scala (object-oriented and functional programming). Strong SQL background.

Preferred technical and professional experience
Core Scala development experience.
Posted 5 days ago
5.0 - 7.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Work with the broader team to build, analyze and improve AI solutions. You will also work with our software developers in consuming different enterprise applications.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
The candidate should have 5-7 years of experience. Sound knowledge of Python and of how to use ML-related services. Proficient in Python with a focus on data analytics packages. Strategy: analyse large, complex data sets and provide actionable insights to inform business decisions. Strategy: design and implement data models that help in identifying patterns and trends. Collaboration: work with data engineers to optimize and maintain data pipelines. Perform quantitative analyses that translate data into actionable insights and provide analytical, data-driven decision-making. Identify and recommend process improvements to enhance the efficiency of the data platform. Develop and maintain data models, algorithms, and statistical models.

Preferred technical and professional experience
Experience with conversation analytics. Experience with cloud technologies. Experience with data exploration tools such as Tableau.
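The "analyse large, complex data sets and provide actionable insights" duty above can be illustrated with a minimal pandas aggregation; the toy frame and column names are invented for the example.

```python
# Sketch: aggregate revenue by segment and surface the leading one.
import pandas as pd

df = pd.DataFrame({
    "segment": ["retail", "retail", "enterprise", "enterprise", "smb"],
    "revenue": [120, 95, 400, 380, 60],
})

by_segment = (
    df.groupby("segment", as_index=False)["revenue"]
    .sum()
    .sort_values("revenue", ascending=False)
)
print(by_segment.head(1))  # enterprise leads at 780
```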
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Pune
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products and platform, and with customer-facing systems.
Posted 5 days ago
5.0 - 7.0 years
13 - 17 Lacs
Bengaluru
Work from Office
Skilled in multiple GCP services: GCS, BigQuery, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer, etc. Must have Python and SQL work experience; proactive, collaborative, and able to respond to critical situations. Ability to analyse data for functional business requirements and to work directly with the customer.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
5 to 7 years of relevant experience working as a technical analyst with BigQuery on the GCP platform. Skilled in multiple GCP services: GCS, Cloud SQL, Dataflow, Pub/Sub, Cloud Run, Workflow, Composer, Error Reporting, Log Explorer. You love collaborative environments that use agile methodologies to encourage creative design thinking and find innovative ways to develop with cutting-edge technologies. An ambitious individual who can work under their own direction towards agreed targets/goals and with a creative approach to work.

Preferred technical and professional experience
A proactive individual with an ability to manage change and proven time management. Proven interpersonal skills while contributing to team effort by accomplishing related results as needed. Up-to-date technical knowledge gained by attending educational workshops and reviewing publications.
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Navi Mumbai
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products and platform, and with customer-facing systems.
Posted 5 days ago
2.0 - 5.0 years
14 - 17 Lacs
Bengaluru
Work from Office
As a Data Engineer at IBM, you'll play a vital role in the development and design of applications, providing regular support/guidance to project teams on complex coding, issue resolution and execution. Your primary responsibilities include: Lead the design and construction of new solutions using the latest technologies, always looking to add business value and meet user requirements. Strive for continuous improvement by testing the built solution and working under an agile framework. Discover and implement the latest technology trends to maximize and build creative solutions.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Experience with Apache Spark (PySpark): in-depth knowledge of Spark's architecture, core APIs, and PySpark for distributed data processing. Big data technologies: familiarity with Hadoop, HDFS, Kafka, and other big data tools. Data engineering skills: strong understanding of ETL pipelines, data modeling, and data warehousing concepts. Strong proficiency in Python: expertise in Python programming with a focus on data processing and manipulation. Data processing frameworks: knowledge of data processing libraries such as Pandas and NumPy. SQL proficiency: experience writing optimized SQL queries for large-scale data analysis and transformation. Cloud platforms: experience working with cloud platforms like AWS, Azure, or GCP, including using cloud storage systems.

Preferred technical and professional experience
Define, drive, and implement an architecture strategy and standards for end-to-end monitoring. Partner with the rest of the technology teams, including application development, enterprise architecture, testing services, and network engineering. Good to have: experience with detection and prevention tools for company products and platform, and with customer-facing systems.
Posted 5 days ago
4.0 - 9.0 years
12 - 16 Lacs
Kochi
Work from Office
As a Data Engineer, you will develop, maintain, evaluate and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the Azure Cloud Data Platform.

Responsibilities: Experienced in building data pipelines to ingest, process, and transform data from files, streams and databases. Process the data with Spark, Python, PySpark and Hive, HBase or other NoSQL databases on the Azure Cloud Data Platform or HDFS. Experienced in developing efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and big data technologies. Experience in developing streaming pipelines. Experience working with Hadoop/Azure ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Required education: Bachelor's Degree. Preferred education: Master's Degree.

Required technical and professional expertise
Minimum 4+ years of experience in big data technologies with extensive data engineering experience in Spark with Python or Scala; minimum 3 years of experience on cloud data platforms on Azure; experience in Databricks, Azure HDInsight, Azure Data Factory, Synapse, and SQL Server DB. Good to excellent SQL skills. Exposure to streaming solutions and message brokers like Kafka.

Preferred technical and professional experience
Certification in Azure and Databricks, or Cloudera Spark certified developer.
Posted 5 days ago
4.0 - 9.0 years
14 - 18 Lacs
Bengaluru
Work from Office
Job Title: Retail Specialized Data Scientist, Level 9, SnC GN Data & AI. Management Level: 09 - Consultant. Location: Bangalore / Gurgaon / Mumbai / Chennai / Pune / Hyderabad / Kolkata.

Must have skills: A solid understanding of retail industry dynamics, including key performance indicators (KPIs) such as sales trends, customer segmentation, inventory turnover, and promotions. Strong ability to communicate complex data insights to non-technical stakeholders, including senior management, marketing, and operational teams. Meticulous in ensuring data quality, accuracy, and consistency when handling large, complex datasets. Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns. Strong proficiency in Python for data manipulation, statistical analysis, and machine learning (libraries like Pandas, NumPy, scikit-learn). Expertise in supervised and unsupervised learning algorithms. Use advanced analytics to optimize pricing strategies based on market demand, competitor pricing, and customer price sensitivity.

Good to have skills: Familiarity with big data processing platforms like Apache Spark, Hadoop, or cloud-based platforms such as AWS or Google Cloud for large-scale data processing. Experience with ETL (Extract, Transform, Load) processes and tools like Apache Airflow to automate data workflows. Familiarity with designing scalable and efficient data pipelines and architecture. Experience with tools like Tableau, Power BI, Matplotlib, and Seaborn to create meaningful visualizations that present data insights clearly.

Job Summary: The Retail Specialized Data Scientist will play a pivotal role in utilizing advanced analytics, machine learning, and statistical modeling techniques to help our retail business make data-driven decisions. This individual will work closely with teams across marketing, product management, supply chain, and customer insights to drive business strategies and innovations. The ideal candidate should have experience in retail analytics and the ability to translate data into actionable insights.

Roles & Responsibilities: Leverage retail knowledge: utilize your deep understanding of the retail industry (merchandising, customer behavior, product lifecycle) to design AI solutions that address critical retail business needs. Gather and clean data from various retail sources, such as sales transactions, customer interactions, inventory management, website traffic, and marketing campaigns. Apply machine learning algorithms, such as classification, clustering, regression, and deep learning, to enhance predictive models. Use AI-driven techniques for personalization, demand forecasting, and fraud detection. Use advanced statistical methods to help optimize existing use cases and build new products to serve new challenges and use cases. Stay updated on the latest trends in data science and retail technology. Collaborate with executives, product managers, and marketing teams to translate insights into business actions.

Professional & Technical Skills: Strong analytical and statistical skills. Expertise in machine learning and AI. Experience with retail-specific datasets and KPIs. Proficiency in data visualization and reporting tools. Ability to work with large datasets and complex data structures. Strong communication skills to interact with both technical and non-technical stakeholders. A solid understanding of the retail business and consumer behavior.
Programming Languages: Python, R, SQL, Scala. Data Analysis Tools: Pandas, NumPy, scikit-learn, TensorFlow, Keras. Visualization Tools: Tableau, Power BI, Matplotlib, Seaborn. Big Data Technologies: Hadoop, Spark, AWS, Google Cloud. Databases: SQL, NoSQL (MongoDB, Cassandra).

Additional Information: Minimum 3 years of experience is required. Educational Qualification: Bachelor's or Master's degree in Data Science, Statistics, Computer Science, Mathematics, or a related field.
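As a toy illustration of the demand-forecasting responsibility named above, the following scikit-learn sketch regresses weekly units sold on lagged demand and a promotion flag. The data is synthetic and the feature choice is an assumption for the example, not the employer's method.

```python
# Sketch: simple demand forecast from lagged sales and a promotion indicator.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
lag_units = rng.integers(50, 200, size=100)   # last week's unit sales
promo = rng.integers(0, 2, size=100)          # promotion on/off flag
# Synthetic target: demand persists and promotions add a lift.
units = lag_units * 1.1 + promo * 30 + rng.normal(0, 5, 100)

X = np.column_stack([lag_units, promo])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, units)

print(model.predict([[150, 1]]))  # forecast for 150 lagged units with a promo
```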
Posted 5 days ago
5.0 - 7.0 years
5 - 9 Lacs
Kochi
Work from Office
Location: Kochi, Coimbatore, Trivandrum. Must have skills: Databricks, including Spark-based ETL and Delta Lake. Good to have skills: PySpark.

Job Summary
We are seeking a highly skilled and experienced Senior Data Engineer to join our growing Data and Analytics team. The ideal candidate will have deep expertise in Databricks and cloud data warehousing, with a proven track record of designing and building scalable data pipelines, optimizing data architectures, and enabling robust analytics capabilities. This role involves working collaboratively with cross-functional teams to ensure the organization leverages data as a strategic asset.

Roles and Responsibilities
Design, build, and maintain scalable data pipelines and ETL processes using Databricks and other modern tools. Architect, implement, and manage cloud-based data warehousing solutions on Databricks (Lakehouse architecture). Develop and maintain optimized data lake architectures to support advanced analytics and machine learning use cases. Collaborate with stakeholders to gather requirements, design solutions, and ensure high-quality data delivery. Optimize data pipelines for performance and cost efficiency. Implement and enforce best practices for data governance, access control, security, and compliance in the cloud. Monitor and troubleshoot data pipelines to ensure reliability and accuracy. Lead and mentor junior engineers, fostering a culture of continuous learning and innovation. Excellent communication skills. Ability to work independently and with clients based in Western Europe.

Professional and Technical Skills
3.5-5 years of experience in data engineering roles with a focus on cloud platforms. Proficiency in Databricks, including Spark-based ETL, Delta Lake, and SQL. Strong experience with one or more cloud platforms (AWS preferred). Hands-on experience with Delta Lake, Unity Catalog, and Lakehouse architecture concepts. Strong programming skills in Python and SQL; experience with PySpark a plus. Solid understanding of data modeling concepts and practices (e.g., star schema, dimensional modeling). Knowledge of CI/CD practices and version control systems (e.g., Git). Familiarity with data governance and security practices, including GDPR and CCPA compliance.

Additional Information
Experience with Airflow or similar workflow orchestration tools. Exposure to machine learning workflows and MLOps. Certification in Databricks or AWS. Familiarity with data visualization tools such as Power BI.

Qualification: 3.5-5 years of experience is required. Educational Qualification: Graduation.
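Since the must-have skills here center on Spark-based ETL and Delta Lake, here is a hedged sketch of a Delta upsert (MERGE) using the delta-spark package. The paths and join key are hypothetical, and a Databricks or Delta-enabled Spark runtime is assumed.

```python
# Sketch: merge a batch of changes into a silver Delta table.
from delta.tables import DeltaTable
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-upsert").getOrCreate()

updates = spark.read.parquet("/landing/customers_changes")  # hypothetical path

target = DeltaTable.forPath(spark, "/lakehouse/silver/customers")
(
    target.alias("t")
    .merge(updates.alias("u"), "t.customer_id = u.customer_id")
    .whenMatchedUpdateAll()      # update rows that already exist
    .whenNotMatchedInsertAll()   # insert brand-new rows
    .execute()
)
```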
Posted 5 days ago
2.0 - 3.0 years
5 - 9 Lacs
Kochi
Work from Office
Location: Kochi, Coimbatore, Trivandrum. Must have skills: Python/Scala, PySpark/PyTorch. Good to have skills: Redshift. Experience: 3.5-5 years of experience is required. Educational Qualification: Graduation.

Job Summary
You'll capture user requirements and translate them into business and digitally enabled solutions across a range of industries.

Roles and Responsibilities
Designing, developing, optimizing, and maintaining data pipelines that adhere to ETL principles and business goals. Solving complex data problems to deliver insights that help the business achieve its goals. Sourcing data (structured and unstructured) from various touchpoints and formatting and organizing it into an analyzable format. Creating data products for analytics team members to improve productivity. Calling AI services such as vision and translation to generate outcomes that can be used in further steps along the pipeline. Fostering a culture of sharing, re-use, design and operational efficiency of data and analytical solutions. Preparing data to create a unified database, and building tracking solutions that ensure data quality. Creating production-grade analytical assets deployed using the guiding principles of CI/CD.

Professional and Technical Skills
Expert in at least two of Python, Scala, PySpark, PyTorch, and JavaScript. Extensive experience in data analysis in big data (Apache Spark) environments, data libraries (e.g., Pandas, SciPy, TensorFlow, Keras), and SQL, with 2-3 years of hands-on experience with these technologies. Experience in one of the many BI tools such as Tableau, Power BI, or Looker. Good working knowledge of key concepts in data analytics, such as dimensional modeling, ETL, reporting/dashboarding, data governance, dealing with structured and unstructured data, and corresponding infrastructure needs. Worked extensively with Microsoft Azure (ADF, Function Apps, ADLS, Azure SQL), AWS (Lambda, Glue, S3), Databricks analytical platforms/tools, and the Snowflake Cloud Data Warehouse.

Additional Information
Experience working in cloud data warehouses like Redshift or Synapse. Certification in any one of the following or equivalent: AWS Certified Data Analytics - Specialty; Microsoft Certified Azure Data Scientist Associate; SnowPro Core - Data Engineer; Databricks Data Engineering.
Posted 5 days ago
The demand for professionals with expertise in Spark is on the rise in India. Spark, an open-source distributed computing system, is widely used for big data processing and analytics. Job seekers in India looking to explore opportunities in Spark can find a variety of roles in different industries.
Major tech hubs such as Bengaluru, Hyderabad, Pune, Mumbai, Gurugram, and Kochi (the locations that recur in the listings above) have a high concentration of tech companies and startups actively hiring for Spark roles.
The average salary range for Spark professionals in India varies based on experience level:
- Entry-level: INR 4-6 lakhs per annum
- Mid-level: INR 8-12 lakhs per annum
- Experienced: INR 15-25 lakhs per annum
Salaries may vary based on the company, location, and specific job requirements.
In the field of Spark, a typical career progression may look like:
- Junior Developer
- Senior Developer
- Tech Lead
- Architect
Advancing in this career path often requires gaining experience, acquiring additional skills, and taking on more responsibilities.
Apart from proficiency in Spark, professionals in this field are often expected to have knowledge or experience in:
- Hadoop
- Java or Scala programming
- Data processing and analytics
- SQL databases
Having a combination of these skills can make a candidate more competitive in the job market.
As you explore opportunities in Spark jobs in India, remember to prepare thoroughly for interviews and showcase your expertise confidently. With the right skills and knowledge, you can excel in this growing field and advance your career in the tech industry. Good luck with your job search!