2.0 - 5.0 years
0 Lacs
Greater Chennai Area
On-site
Job Description
Lead and mentor a team of data scientists/analysts. Provide analytical insights by analyzing various types of data, including mining our customer data, reviewing relevant cases/samples, and incorporating feedback from others. Work closely with business partners and stakeholders to determine how to design analysis, testing, and measurement approaches that will significantly improve our ability to understand and address emerging business issues. Produce intelligent, scalable, and automated solutions by leveraging data science skills. Work closely with Technology teams on the development of new capabilities to define requirements and priorities based on data analysis and business knowledge. Develop expertise in specific areas by leading analytical projects independently, setting goals, providing benefit estimations, defining workflows, and coordinating timelines in advance. Provide updates to leadership, peers, and other stakeholders that simplify and clarify complex concepts and the results of analyses, with emphasis on actionable outcomes and business impact.
Requirements
2 to 5 years in advanced analytics, statistical modelling, and machine learning.
Best-practice knowledge in credit risk, with a strong understanding of the full lifecycle from origination to debt collection.
Well-versed in machine learning algorithms, big data concepts, and cloud implementations.
High proficiency in Python and SQL/NoSQL.
Experience with Collections and Digital Channels is a plus.
Strong organizational skills and excellent follow-through.
Outstanding written, verbal, and interpersonal communication skills.
High emotional intelligence, a can-do mentality, and a creative approach to problem solving.
Takes personal ownership; a self-starter with the ability to drive projects with minimal guidance and a focus on high-impact work.
Learns continuously; seeks out knowledge, ideas, and feedback, and looks for opportunities to build skills, knowledge, and expertise.
Experience with big data and cloud computing, e.g., Spark and Hadoop (MapReduce, Pig, Hive).
Experience in risk and credit score domains preferred. (ref:hirist.tech)
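To make the statistical-modelling requirement concrete, here is a minimal credit-default scoring sketch in Python, assuming scikit-learn and NumPy; the borrower features, coefficients, and labels are synthetic inventions for illustration, not the employer's actual method.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n = 5000

# Synthetic borrower features: utilization ratio, past delinquencies, months on book.
X = np.column_stack([
    rng.uniform(0, 1, n),      # credit utilization
    rng.poisson(0.5, n),       # past delinquencies
    rng.uniform(1, 120, n),    # tenure in months
])
# Synthetic default labels: higher utilization and delinquency raise default odds.
logits = 2.5 * X[:, 0] + 0.8 * X[:, 1] - 0.01 * X[:, 2] - 1.5
y = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
model = LogisticRegression().fit(X_train, y_train)

# AUC is a standard discrimination metric for credit scorecards.
print("Test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))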
Posted 1 week ago
0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Responsible for designing, developing, and optimizing data processing solutions using a combination of Big Data technologies. Focus on building scalable and efficient data pipelines for handling large datasets and enabling batch and real-time data streaming and processing.
Responsibilities:
> Develop Spark applications using Scala or Python (PySpark) for data transformation, aggregation, and analysis.
> Develop and maintain Kafka-based data pipelines: design Kafka Streams topologies, set up Kafka clusters, and ensure efficient data flow.
> Create and optimize Spark applications using Scala and PySpark: leverage these languages to process large datasets and implement data transformations and aggregations.
> Integrate Kafka with Spark for real-time processing: build systems that ingest real-time data from Kafka and process it using Spark Streaming or Structured Streaming (a minimal sketch appears at the end of this posting).
> Collaborate with data teams: work with data engineers, data scientists, and DevOps to design and implement data solutions.
> Tune and optimize Spark and Kafka clusters: ensure high performance, scalability, and efficiency of data processing workflows.
> Write clean, functional, and optimized code: adhere to coding standards and best practices.
> Troubleshoot and resolve issues: identify and address problems related to Kafka and Spark applications.
> Maintain documentation: create and maintain documentation for Kafka configurations, Spark jobs, and other processes.
> Stay updated on technology trends: continuously learn and apply new advancements in functional programming, big data, and related technologies.
Proficiency in:
Hadoop ecosystem big data tech stack (HDFS, YARN, MapReduce, Hive, Impala).
Spark (Scala, Python) for data processing and analysis.
Kafka for real-time data ingestion and processing.
ETL processes and data ingestion tools.
Deep hands-on expertise in PySpark, Scala, and Kafka.
Programming Languages: Scala, Python, or Java for developing Spark applications; SQL for data querying and analysis.
Other Skills: data warehousing concepts, Linux/Unix operating systems, problem-solving and analytical skills, version control systems.
------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Applications Development
------------------------------------------------------
Time Type: Full time
------------------------------------------------------
Most Relevant Skills
Please see the requirements listed above.
------------------------------------------------------
Other Relevant Skills
For complementary skills, please see above and/or contact the recruiter.
------------------------------------------------------
Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
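As an illustration of the Kafka-to-Spark integration described above, here is a minimal Structured Streaming sketch in PySpark; it assumes the spark-sql-kafka connector package is on the classpath, and the broker address, topic name, and event schema are placeholders invented for the example.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, window
from pyspark.sql.types import StructType, StructField, StringType, DoubleType, TimestampType

spark = SparkSession.builder.appName("kafka-streaming-demo").getOrCreate()

# Hypothetical event schema; topic and bootstrap servers are placeholders.
schema = StructType([
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
    StructField("ts", TimestampType()),
])

events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "localhost:9092")
          .option("subscribe", "trades")
          .load()
          .select(from_json(col("value").cast("string"), schema).alias("e"))
          .select("e.*"))

# One-minute average price per symbol, written to the console sink.
avg_prices = (events
              .withWatermark("ts", "2 minutes")
              .groupBy(window(col("ts"), "1 minute"), col("symbol"))
              .avg("price"))

query = avg_prices.writeStream.outputMode("update").format("console").start()
query.awaitTermination()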
Posted 1 week ago
5.0 - 8.0 years
5 - 8 Lacs
Bengaluru
Work from Office
Skills desired:
Strong SQL skills (complex, multi-level joins)
Python skills (FastAPI or Flask framework)
PySpark
Commitment to work in overlapping hours
GCP knowledge (BigQuery, Dataproc and Dataflow)
Amex experience preferred (not mandatory)
Power BI preferred (not mandatory)
Flask, PySpark, Python, SQL
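As a small illustration of the Python/FastAPI plus SQL-join skills listed above, here is a minimal self-contained sketch; the tables, rows, and endpoint are invented, and a real service would query a warehouse such as BigQuery rather than in-memory SQLite.

import sqlite3
from fastapi import FastAPI

app = FastAPI()

# Hypothetical in-memory database seeded with demo rows.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.executescript("""
    CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE orders (id INTEGER PRIMARY KEY, customer_id INTEGER, total REAL);
    INSERT INTO customers VALUES (1, 'Asha'), (2, 'Ravi');
    INSERT INTO orders VALUES (10, 1, 250.0), (11, 1, 99.5), (12, 2, 40.0);
""")

@app.get("/customers/{customer_id}/spend")
def customer_spend(customer_id: int):
    # A two-table join aggregated per customer.
    row = db.execute(
        "SELECT c.name, COALESCE(SUM(o.total), 0) "
        "FROM customers c LEFT JOIN orders o ON o.customer_id = c.id "
        "WHERE c.id = ? GROUP BY c.id",
        (customer_id,),
    ).fetchone()
    if row is None:
        return {"error": "unknown customer"}
    return {"customer_id": customer_id, "name": row[0], "total_spend": row[1]}

# Run with: uvicorn app:app --reload  (assuming this file is saved as app.py)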
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Title: Associate
Career Level: C2
Introduction to role
Are you ready to disrupt an industry and change lives? As an Associate, you will be at the forefront of GCC’s Sampling Allocation Service, developing and implementing analytical programs that optimize sampling distributions for AstraZeneca's branded products. This role is both managerial and hands-on, requiring proactive consultation with brand team collaborators and guiding the internal GCC Sampling team to ensure deliverables meet specifications. You'll work with brand teams to understand rules, requirements, and sampling strategies, applying your proficiency across multiple commercial datasets for proper implementation. Your efforts will feed into the sample ordering and distribution system, directly impacting our ability to develop life-changing medicines.
Accountabilities
In this dynamic role, you'll bring your strong analytical skills and excellent communication abilities to bear, forging effective business partnerships that drive tangible business impact. You'll continuously evaluate new quantitative analysis methods and technologies, manage sample allocation priorities across brands and therapeutic areas, allocate resources based on demand, liaise with AZ Sampling stakeholders, and pull through analytics and coding best practices. Your understanding of AZ core therapy areas and familiarity with core functions within AZ will be crucial as you lead a talented team.
Essential Skills/Experience
A quantitative Bachelor’s degree from an accredited college or university is required in one of the following or related fields: Engineering, Operations Research, Management Science, Economics, Statistics, Applied Math, Computer Science or Data Science. An advanced degree is preferred (Master’s, MBA or PhD).
2+ years of experience in Pharmaceutical / Biotech / Healthcare analytics or secondary data analysis.
3+ years of experience in applying advanced methods and statistical procedures to large and disparate datasets, specifically: data mining, predictive modelling algorithms, optimisation and simulation.
2+ years of recent experience and proficiency with Python, R, SQL and big data technology - Hadoop ecosystem (Cloudera distribution - Impala, Hive, HBase, Spark, MapReduce, etc.).
Understanding of the Veeva system and Veeva data: alignment, personal and non-personal interactions and channels.
Working knowledge of data visualisation - Power BI, VBA or similar tools.
Experience with MS Office products - PowerApps, Excel and PowerPoint skills required.
Proficiency in manipulating and extracting insights from large longitudinal data sources, such as Claims, EMR and other patient-level datasets.
Expertise in managing and analysing a range of large, secondary transactional databases is required.
Statistical analysis and modelling background; ML a plus.
Experience with IQVIA datasets as well as sales-related datasets such as targeting and alignment, HCP eligibility (blocking), and call data.
Experience with data visualisation methods and tools.
Ability to derive, summarise and communicate insights from analyses.
Organisation and time management skills.
Desirable Skills/Experience
Strong leadership and interpersonal skills with demonstrated ability to work collaboratively with a significant number of business leaders and cross-functional business partners.
Strong communication and influencing skills with demonstrated ability to develop and effectively present succinct, compelling reviews of independently developed analyses, infused with insight and business implications/actions to be considered.
Strategic and critical thinking with the ability to engage, build and maintain credibility with the Commercial Leadership Team.
Strong organisational skills and time management; ability to manage a diverse range of simultaneous projects.
Knowledge of the AZ brand and science is mandatory.
Experience using Big Data is a plus; exposure to Spark is desirable.
Excellent analytical problem-solving ability; able to grasp new concepts quickly.
When we put unexpected teams in the same room, we unleash bold thinking with the power to inspire life-changing medicines. In-person working gives us the platform we need to connect, work at pace and challenge perceptions. That's why we work, on average, a minimum of three days per week from the office. But that doesn't mean we're not flexible. We balance the expectation of being in the office while respecting individual flexibility. Join us in our unique and ambitious world.
At AstraZeneca, our work has a direct impact on patients by transforming our ability to develop life-changing medicines. We empower the business to perform at its peak by combining brand new science with leading digital technology platforms and data. Our dynamic environment offers countless opportunities to learn and grow through hackathons, exploring new technologies, and transforming roles forever. With a diversity of expertise unique to AstraZeneca, you'll dive deep into groundbreaking technology while broadening your understanding of our wider work.
Ready to make a meaningful impact? Apply now to join our team!
Date Posted: 24-Jul-2025
Closing Date: 30-Jul-2025
AstraZeneca embraces diversity and equality of opportunity. We are committed to building an inclusive and diverse team representing all backgrounds, with as wide a range of perspectives as possible, and harnessing industry-leading skills. We believe that the more inclusive we are, the better our work will be. We welcome and consider applications to join our team from all qualified candidates, regardless of their characteristics. We comply with all applicable laws and regulations on non-discrimination in employment (and recruitment), as well as work authorization and employment eligibility verification requirements.
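As a toy illustration of the optimisation work this role describes, here is a minimal sample-allocation sketch in Python, assuming SciPy; the brands, response rates, and caps are invented for the example and do not reflect any actual allocation rules.

from scipy.optimize import linprog

# Maximize expected responses from allocating samples across three hypothetical
# brands, subject to an overall budget and per-brand caps. linprog minimizes,
# so the objective is negated.
expected_response_per_sample = [0.030, 0.022, 0.015]
c = [-r for r in expected_response_per_sample]

A_ub = [[1, 1, 1]]            # total samples across all brands
b_ub = [100_000]              # overall sampling budget
bounds = [(0, 60_000), (0, 50_000), (0, 40_000)]  # per-brand caps

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print("Allocation per brand:", res.x)
print("Expected responses:", -res.fun)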
Posted 1 week ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary
As a Software Engineer at NetApp India’s R&D division, you will be responsible for the design, development and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ.
The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-petabyte data lake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark and various NoSQL databases. This platform enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this “actionable intelligence”.
Job Requirements
• Design and build our Big Data Platform, and understand scale, performance and fault-tolerance
• Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community
• Identify the right tools to deliver product features by performing research, POCs and interacting with various open-source forums
• Work on technologies related to NoSQL, SQL and in-memory databases
• Conduct code reviews to ensure code quality, consistency and best-practices adherence
Technical Skills
• Big Data hands-on development experience is required
• Demonstrate up-to-date expertise in Data Engineering and complex data pipeline development
• Design, develop, implement and tune distributed data processing pipelines that process large volumes of data, focusing on scalability, low-latency, and fault-tolerance in every system built
• Awareness of Data Governance (Data Quality, Metadata Management, Security, etc.)
• Experience with one or more of Python/Java/Scala
• Knowledge and experience with Kafka, Storm, Druid, Cassandra or Presto is an added advantage
Education
• A minimum of 5 years of experience is required; 5-8 years of experience is preferred
• A Bachelor of Science degree in Electrical Engineering or Computer Science, a Master's degree, or equivalent experience is required
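A minimal sketch of the Kafka consumption such a pipeline starts from, assuming the confluent-kafka Python client; the broker address, consumer group, and topic are placeholders, and real deployments would pull these from configuration.

from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",   # placeholder broker
    "group.id": "activeiq-demo",
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["telemetry"])

try:
    while True:
        msg = consumer.poll(timeout=1.0)
        if msg is None:
            continue                          # no message within the timeout
        if msg.error():
            print("consumer error:", msg.error())
            continue
        # Downstream processing (parsing, enrichment, storage) would go here.
        print(msg.topic(), msg.partition(), msg.value()[:80])
finally:
    consumer.close()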
Posted 1 week ago
8.0 - 13.0 years
7 - 11 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry.
MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
JOB SUMMARY:
Position: Sr Consultant
Location: Capco locations (Bengaluru / Chennai / Hyderabad / Pune / Mumbai / Gurugram)
Band: M3/M4 (8 to 14 years)
Role Description
Job Title: Senior Consultant - Data Engineer
Responsibilities
Design, build and optimise data pipelines and ETL processes in Azure Databricks, ensuring high performance, reliability, and scalability.
Implement best practices for data ingestion, transformation, and cleansing to ensure data quality and integrity.
Work within the client's best-practice guidelines as set out by the Data Engineering Lead.
Work with data modellers and testers to ensure pipelines are implemented correctly.
Collaborate as part of a cross-functional team to understand business requirements and translate them into technical solutions.
Role Requirements
Strong data engineer with experience in Financial Services.
Knowledge of and experience building data pipelines in Azure Databricks.
Demonstrate a continual desire to implement strategic or optimal solutions and, where possible, avoid workarounds or short-term tactical solutions.
Work within an Agile team.
Experience/Skillset
8+ years of experience in data engineering.
Good skills in SQL, Python and PySpark.
Good knowledge of Azure Databricks (understanding of Delta tables, Apache Spark, Unity Catalog).
Experience writing, optimizing, and analyzing SQL and PySpark code, with a robust capability to interpret complex data requirements and architect solutions.
Good knowledge of the SDLC.
Familiar with Agile/Scrum ways of working.
Strong verbal and written communication skills.
Ability to manage multiple priorities and deliver to tight deadlines.
WHY JOIN CAPCO
You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer:
A work culture focused on innovation and creating lasting value for our clients and employees
Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
A diverse, inclusive, meritocratic culture
#LI-Hybrid
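A minimal sketch of the kind of Databricks pipeline step this role describes, in PySpark with Delta Lake; the paths, columns, and filter rules are invented, and the Delta Lake libraries are assumed to be available (they are preinstalled on Databricks clusters).

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("bronze-to-silver-demo").getOrCreate()

# Ingest raw trade records (hypothetical path and schema) from the bronze layer.
raw = spark.read.json("/mnt/lake/bronze/trades")

# Cleanse: drop malformed rows, derive a date column, deduplicate on the business key.
silver = (raw
          .where(col("trade_id").isNotNull() & (col("amount") > 0))
          .withColumn("trade_date", to_date(col("trade_ts")))
          .dropDuplicates(["trade_id"]))

# Write to a Delta table partitioned by date for efficient downstream queries.
(silver.write
    .format("delta")
    .mode("overwrite")
    .partitionBy("trade_date")
    .save("/mnt/lake/silver/trades"))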
Posted 1 week ago
7.0 - 12.0 years
7 - 11 Lacs
Pune
Work from Office
Capco, a Wipro company, is a global technology and management consulting firm. Capco was awarded Consultancy of the Year in the British Bank Awards and has been ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With a presence in 32 cities across the globe, we support 100+ clients across the banking, financial services and energy sectors. We are recognized for our deep transformation execution and delivery.
WHY JOIN CAPCO
You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry, on projects that will transform the financial services industry.
MAKE AN IMPACT
Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services.
#BEYOURSELFATWORK
Capco has a tolerant, open culture that values diversity, inclusivity, and creativity.
CAREER ADVANCEMENT
With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands.
DIVERSITY & INCLUSION
We believe that diversity of people and perspective gives us a competitive advantage.
JOB SUMMARY:
Position: Sr Consultant
Location: Pune / Bangalore
Band: M3/M4 (7 to 14 years)
Role Description
Must-Have Skills:
4+ years of experience (minimum) in PySpark and Scala with Spark.
Proficient in debugging and data analysis.
4+ years of Spark experience.
Understanding of the SDLC and the Big Data application life cycle.
Experience with GitHub and Git commands.
Good to have experience with CI/CD tools such as Jenkins and Ansible.
Fast problem solver and self-starter.
Experience using Control-M and ServiceNow (for incident management).
Positive attitude and good communication skills (both written and verbal), without mother-tongue interference.
WHY JOIN CAPCO
You will work on engaging projects with some of the largest banks in the world, on projects that will transform the financial services industry.
We offer:
A work culture focused on innovation and creating lasting value for our clients and employees
Ongoing learning opportunities to help you acquire new skills or deepen existing expertise
A flat, non-hierarchical structure that will enable you to work with senior partners and directly with clients
A diverse, inclusive, meritocratic culture
#LI-Hybrid
Posted 1 week ago
3.0 - 7.0 years
10 - 14 Lacs
Chennai
Work from Office
The Developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.
Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise
Strong proficiency in Java, Spring Framework, Spring Boot and RESTful APIs; excellent understanding of OOP and design patterns.
Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices.
Primary Skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop, etc.); Spark. Good to have: Python.
Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL).
Experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development.
Familiar with Ant, Maven or other build automation frameworks; good knowledge of basic UNIX commands.
Preferred technical and professional experience
Experience in concurrent design and multi-threading.
Posted 1 week ago
5.0 - 10.0 years
22 - 27 Lacs
Bengaluru
Work from Office
Create solution outlines and macro designs to describe end-to-end product implementation in data platforms, including system integration, data ingestion, data processing, the serving layer, design patterns, and platform architecture principles for the data platform.
Contribute to pre-sales and sales support through RfP responses, solution architecture, planning and estimation.
Contribute to reusable component / asset / accelerator development to support capability development.
Participate in customer presentations as a Platform Architect / Subject Matter Expert on Big Data, Azure Cloud and related technologies.
Participate in customer PoCs to deliver the outcomes.
Required education
Bachelor's Degree
Preferred education
Master's Degree
Required technical and professional expertise
Candidates must have experience in designing data products providing descriptive, prescriptive, and predictive analytics to end users or other systems.
10-15 years of experience in data engineering and architecting data platforms.
5-8 years of experience in architecting and implementing data platforms on the Azure Cloud Platform.
5-8 years of experience in architecting and implementing data platforms on-prem (Hadoop or DW appliance).
Experience on Azure cloud is mandatory (ADLS Gen1/Gen2, Data Factory, Databricks, Synapse Analytics, Azure SQL, Cosmos DB, Event Hub, Snowflake), plus Azure Purview, Microsoft Fabric, Kubernetes, Terraform and Airflow.
Experience in the Big Data stack (Hadoop ecosystem: Hive, HBase, Kafka, Spark, Scala, PySpark, Python, etc.) with Cloudera or Hortonworks.
Preferred technical and professional experience
Exposure to data cataloging and governance solutions like Collibra, Alation, Watson Knowledge Catalog, Databricks Unity Catalog, Apache Atlas, Snowflake Data Glossary, etc.
Candidates should have experience in delivering both business decision support systems (reporting, analytics) and data science domains / use cases.
Posted 1 week ago
6.0 - 10.0 years
4 - 8 Lacs
Hyderabad
Work from Office
We are looking for a skilled Senior Oracle Data Engineer to join our team at Apps Associates (I) Pvt. Ltd, with 6-10 years of experience in the IT Services & Consulting industry.
Roles and Responsibilities
Design, develop, and implement data engineering solutions using Oracle technologies.
Collaborate with cross-functional teams to identify and prioritize project requirements.
Develop and maintain large-scale data pipelines and architectures.
Ensure data quality, integrity, and security through data validation and testing procedures.
Optimize data processing workflows for improved performance and efficiency.
Troubleshoot and resolve complex technical issues related to data engineering projects.
Job Requirements
Strong knowledge of Oracle data engineering concepts and technologies.
Experience with data modeling, design, and development.
Proficiency in programming languages such as Java or Python.
Excellent problem-solving skills and attention to detail.
Ability to work collaboratively in a team environment.
Strong communication and interpersonal skills.
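A minimal sketch of programmatic Oracle access of the sort this role involves, assuming the python-oracledb driver; the connection details, table, and check are placeholders, and production code would pull credentials from a wallet or secrets manager.

import oracledb

# Placeholder credentials and DSN.
conn = oracledb.connect(user="demo", password="demo_pw", dsn="dbhost/orclpdb1")

with conn.cursor() as cur:
    # Row counts per load day as a simple data-quality check on a staging table.
    cur.execute("""
        SELECT TRUNC(load_ts) AS load_day, COUNT(*)
        FROM staging_orders
        GROUP BY TRUNC(load_ts)
        ORDER BY load_day
    """)
    for load_day, row_count in cur:
        print(load_day, row_count)

conn.close()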
Posted 1 week ago
8.0 years
30 - 38 Lacs
Gurgaon
Remote
Role: AWS Data Engineer
Location: Gurugram
Mode: Hybrid
Type: Permanent
Job Description:
We are seeking a talented and motivated Data Engineer with the requisite years of hands-on experience to join our growing data team. The ideal candidate will have experience working with large datasets, building data pipelines, and utilizing AWS public cloud services to support the design, development, and maintenance of scalable data architectures. This is an excellent opportunity for individuals who are passionate about data engineering and cloud technologies and want to make an impact in a dynamic and innovative environment.
Key Responsibilities:
Data Pipeline Development: Design, develop, and optimize end-to-end data pipelines for extracting, transforming, and loading (ETL) large volumes of data from diverse sources into data warehouses or lakes.
Cloud Infrastructure Management: Implement and manage data processing and storage solutions in AWS (Amazon Web Services) using services like S3, Redshift, Lambda, Glue, Kinesis, and others.
Data Modeling: Collaborate with data scientists, analysts, and business stakeholders to define data requirements and design optimal data models for reporting and analysis.
Performance Tuning & Optimization: Identify bottlenecks and optimize query performance, pipeline processes, and cloud resources to ensure cost-effective and scalable data workflows.
Automation & Scripting: Develop automated data workflows and scripts to improve operational efficiency using Python, SQL, or other scripting languages.
Collaboration & Documentation: Work closely with data analysts, data scientists, and other engineering teams to ensure data availability, integrity, and quality. Document processes, architectures, and solutions clearly.
Data Quality & Governance: Ensure the accuracy, consistency, and completeness of data. Implement and maintain data governance policies to ensure compliance and security standards are met.
Troubleshooting & Support: Provide ongoing support for data pipelines and troubleshoot issues related to data integration, performance, and system reliability.
Qualifications:
Essential Skills:
Experience: 8+ years of professional experience as a Data Engineer, with a strong background in building and optimizing data pipelines and working with large-scale datasets.
AWS Experience: Hands-on experience with AWS cloud services, particularly S3, Lambda, Glue, Redshift, RDS, and EC2.
ETL Processes: Strong understanding of ETL concepts, tools, and frameworks. Experience with data integration, cleansing, and transformation.
Programming Languages: Proficiency in Python, SQL, and other scripting languages (e.g., Bash, Scala, Java).
Data Warehousing: Experience with relational and non-relational databases, including data warehousing solutions like AWS Redshift, Snowflake, or similar platforms.
Data Modeling: Experience in designing data models, schema design, and data architecture for analytical systems.
Version Control & CI/CD: Familiarity with version control tools (e.g., Git) and CI/CD pipelines.
Problem-Solving: Strong troubleshooting skills, with an ability to optimize performance and resolve technical issues across the data pipeline.
Desirable Skills:
Big Data Technologies: Experience with Hadoop, Spark, or other big data technologies.
Containerization & Orchestration: Knowledge of Docker, Kubernetes, or similar containerization/orchestration technologies.
Data Security: Experience implementing security best practices in the cloud and managing data privacy requirements.
Data Streaming: Familiarity with data streaming technologies such as AWS Kinesis or Apache Kafka.
Business Intelligence Tools: Experience with BI tools (Tableau, QuickSight) for visualization and reporting.
Agile Methodology: Familiarity with Agile development practices and tools (Jira, Trello, etc.).
Job Type: Permanent
Pay: ₹3,000,000.00 - ₹3,800,000.00 per year
Benefits: Work from home
Schedule: Day shift, Monday to Friday
Experience:
Data Engineering: 5 years (Required)
AWS Elastic MapReduce (EMR): 3 years (Required)
AWS: 3 years (Required)
Work Location: In person
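A minimal sketch of the kind of ETL step described above, in PySpark reading from and writing to S3; the bucket names and columns are invented, and the hadoop-aws (s3a) connector plus AWS credentials are assumed to be configured on the cluster.

from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = SparkSession.builder.appName("s3-etl-demo").getOrCreate()

# Extract: raw CSV landed in a hypothetical S3 bucket.
orders = (spark.read
          .option("header", True)
          .option("inferSchema", True)
          .csv("s3a://example-raw-bucket/orders/"))

# Transform: filter bad records and derive a partition column.
clean = (orders
         .where(col("order_id").isNotNull())
         .withColumn("order_date", to_date(col("order_ts"))))

# Load: columnar, partitioned output ready for Redshift Spectrum or Athena.
(clean.write
    .mode("overwrite")
    .partitionBy("order_date")
    .parquet("s3a://example-curated-bucket/orders/"))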
Posted 1 week ago
2.0 years
0 Lacs
India
On-site
Job Title: Data Science Trainer
Company: HERE AND NOW Artificial Intelligence Research Institute
Location: HERE AND NOW AI, Salem
About Us
At the HERE AND NOW Artificial Intelligence Research Institute, we are at the forefront of AI innovation. Our mission is to empower the next generation of AI professionals through comprehensive education, innovative AI applications, and groundbreaking research. We are looking for a passionate and experienced Data Science Trainer to join our team and help us achieve our goals.
Job Description
Title: Data Science Trainer
Location: Salem, Tamil Nadu
Job Type: Part-Time / Contract
Date of Training: 28.07.2025
Experience: Minimum 2 years in Data Science, Machine Learning, or AI domain
Industry: IT Training / EdTech / Technical Education
About the Role
We are hiring a dedicated and skilled Data Science Trainer in Salem to deliver hands-on training in Python for Data Science, Machine Learning, Big Data Analytics, and Deep Learning. If you're passionate about teaching and mentoring aspiring data scientists, this is your chance to contribute to the AI revolution.
Responsibilities
Deliver interactive classroom or virtual sessions covering:
Data Science: Python, Pandas, NumPy, Matplotlib, Statistics, Machine Learning (Supervised & Unsupervised), Model Evaluation, Real-world Projects.
Big Data Analytics: Hadoop Ecosystem (HDFS, MapReduce, Hive, Pig, Sqoop, Flume), Apache Spark, Spark SQL, Spark MLlib, NoSQL basics (MongoDB/Cassandra).
Deep Learning: Neural Networks, CNN, RNN, LSTM, GANs, using TensorFlow, Keras, and Google Colab.
Design and customize curriculum for beginner to intermediate learners.
Facilitate real-time mini-projects, assignments, and model-building activities.
Evaluate student progress and provide mentorship.
Collaborate with academic and placement teams to ensure outcomes align with industry needs.
Required Skills
Strong understanding of Python and ML libraries (Pandas, Scikit-learn, Matplotlib, Seaborn).
Proficiency in Big Data tools: Hadoop, Spark, Hive.
Familiarity with Deep Learning frameworks: TensorFlow, Keras.
Understanding of statistical concepts and machine learning algorithms.
Excellent communication, presentation, and mentoring skills.
Willingness to conduct sessions at our Salem center.
Preferred Qualifications
B.E./B.Tech/MCA/M.Sc in Computer Science, Data Science, or related fields.
Training experience in Data Science, Big Data, or AI.
Certification in Data Science/Machine Learning/Big Data preferred.
Exposure to cloud tools (AWS/GCP) and BI tools (Power BI/Tableau) is a plus.
Benefits
Competitive salary + performance-based incentives
Opportunity to be part of a growing AI research and training community
Certificate of contribution for each training batch
Job Types: Part-time, Fresher, Contractual / Temporary, Freelance
Benefits: Flexible schedule, Food provided, Paid sick time
Schedule: Day shift
Supplemental Pay: Commission pay, Performance bonus, Shift allowance
Experience: total work: 2 years (Required)
Work Location: In person
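As an illustration of the hands-on model-evaluation exercises such training covers, here is a minimal Python example using pandas and scikit-learn on the bundled Iris dataset; the model choice and depth are arbitrary classroom defaults.

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report

# Load a small teaching dataset into a pandas DataFrame.
iris = load_iris(as_frame=True)
df = iris.frame

X_train, X_test, y_train, y_test = train_test_split(
    df[iris.feature_names], df["target"], test_size=0.3, random_state=1)

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# Per-class precision/recall: the kind of output students learn to read.
print(classification_report(y_test, model.predict(X_test),
                            target_names=iris.target_names))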
Posted 1 week ago
3.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Responsibilities
Provide technical support and troubleshooting for Big Data applications and systems built on the Hadoop ecosystem.
Monitor system performance, analyze logs, and identify potential issues before they impact services.
Collaborate with engineering teams to deploy and configure Hadoop clusters and related components.
Assist in maintenance and upgrades of Hadoop environments to ensure optimum performance and security.
Develop and maintain documentation for processes, procedures, and system configurations.
Implement data backup and recovery procedures to ensure data integrity and availability.
Participate in on-call rotations to provide after-hours support as needed.
Stay up to date with Hadoop technologies and support methodologies.
Assist in the training and onboarding of new team members and users on Hadoop best practices.
Requirements
Bachelor's degree in Computer Science, Information Technology, or a related field.
3+ years of experience in Big Data support or system administration, specifically with the Hadoop ecosystem.
Strong understanding of Hadoop components (HDFS, MapReduce, Hive, Pig, etc.).
Experience with system monitoring and diagnostics tools.
Proficient in Linux/Unix commands and scripting languages (Bash, Python).
Basic understanding of database technologies and data warehousing concepts.
Strong problem-solving skills and ability to work under pressure.
Excellent communication and interpersonal skills.
Ability to work independently as well as collaboratively in a team environment.
Willingness to learn new technologies and enhance skills.
Skills: Hadoop, Spark/Scala, HDFS, SQL, Unix scripting, data backup, system monitoring
Benefits
Competitive salary and benefits package.
Opportunity to work on cutting-edge technologies and solve complex challenges.
Dynamic and collaborative work environment with opportunities for growth and career advancement.
Regular training and professional development opportunities.
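A minimal sketch of the kind of scripted health check such a support role runs, assuming a Python wrapper around the standard HDFS CLI; the directories and threshold are invented for the example.

import subprocess

def hdfs_dir_bytes(path: str) -> int:
    """Return the logical size in bytes of an HDFS directory via `hdfs dfs -du -s`."""
    out = subprocess.run(
        ["hdfs", "dfs", "-du", "-s", path],
        capture_output=True, text=True, check=True,
    ).stdout
    # First whitespace-separated field is the size in bytes.
    return int(out.split()[0])

if __name__ == "__main__":
    threshold = 500 * 1024**3  # 500 GiB; alerting would hang off this check
    for path in ["/user/hive/warehouse", "/data/landing"]:
        size = hdfs_dir_bytes(path)
        flag = "WARN" if size > threshold else "ok"
        print(f"{flag:>4}  {size:>16,d} bytes  {path}")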
Posted 1 week ago
6.0 - 11.0 years
15 - 25 Lacs
Bengaluru
Work from Office
Hiring a Data Engineer in Bangalore with 6+ years of experience in the skills below.
Must Have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink
- Programming languages: Java / Scala / Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes
Required Candidate Profile
- Strong communication skills
- Experience with relational SQL / NoSQL databases - Postgres & Cassandra
- Experience with the ELK stack
- Immediate joining is a plus
- Must be ready to work from the office
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Hyderabad, Telangana
On-site
As an Algorithm Developer at Machinellium Mavericks Projects and Consultants Inc., you will be tasked with researching, developing, and implementing algorithms to address intricate computational challenges. Your role will involve testing and resolving issues at various stages of the development and implementation process. You should exhibit a keen interest in thriving within a fast-paced and dynamic startup environment.
Your responsibilities will include developing cutting-edge machine learning algorithms, programming proficiently in languages such as C++, Java, or Python, creating data-driven algorithms, and working with distributed computing and technologies like Hadoop/MapReduce. The role demands exceptional debugging and troubleshooting capabilities, as well as the ability to conduct innovative and impactful applied research.
To excel in this position, you must possess strong communication skills, both oral and written, along with exceptional problem-solving and analytical abilities. If you are a highly motivated individual with a passion for algorithm development and a desire to contribute to solving complex computational problems, we encourage you to apply and be part of our innovative team at Machinellium Mavericks Projects and Consultants Inc.
Posted 1 week ago
3.0 - 7.0 years
0 Lacs
Haryana
On-site
As a Java with Hadoop Developer at Airlinq in Gurgaon, India, you will play a vital role in collaborating with the Engineering and Development teams to establish and maintain a robust testing and quality program for Airlinq's products and services. Your responsibilities will include, but are not limited to:
- Being part of a team focused on creating end-to-end IoT solutions using Hadoop to address various industry challenges.
- Building quick prototypes and demonstrations to showcase the value of technologies such as IoT, Machine Learning, Cloud, Micro-Services, DevOps, and AI to the management.
- Developing reusable components, frameworks, and accelerators to streamline the development cycle of future IoT projects.
- Operating effectively with minimal supervision and guidance.
- Configuring Cloud platforms for specific use-cases.
To excel in this role, you should have a minimum of 3 years of IT experience with at least 2 years dedicated to working with Cloud technologies like AWS or Azure. You must possess expertise in designing and implementing highly scalable enterprise applications and establishing continuous integration environments on the targeted cloud platform. Proficiency in Java, Spring Framework, and strong knowledge of IoT principles, connectivity, security, and data streams are essential. Familiarity with emerging technologies such as Big Data, NoSQL, Machine Learning, AI, and Blockchain is also required.
Additionally, you should be adept at utilizing Big Data technologies like Hadoop, Pig, Hive, and Spark, with hands-on experience on any Hadoop platform. Experience in workload migration between on-premise and cloud environments, programming with MapReduce and Spark, as well as Java (core Java), J2EE technologies, Python, Scala, Unix, and Bash scripts is crucial. Strong analytical, problem-solving, and research skills are necessary, along with the ability to think innovatively and independently.
This position requires 3-7 years of relevant work experience and is based in Gurgaon. The ideal educational background includes a B.E./B.Tech. or M.E./M.Tech. in Computer Science or Electronics Engineering, or an MCA.
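As a small illustration of MapReduce programming with Python via Hadoop Streaming, here is the classic word count, a sketch assuming the standard hadoop-streaming jar; file and directory names are placeholders.

#!/usr/bin/env python3
# mapper.py: emit one (word, 1) pair per token read from stdin.
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")

#!/usr/bin/env python3
# reducer.py: sum counts per word; Hadoop delivers keys sorted, so we can stream.
import sys

current, total = None, 0
for line in sys.stdin:
    word, count = line.rsplit("\t", 1)
    if word != current:
        if current is not None:
            print(f"{current}\t{total}")
        current, total = word, 0
    total += int(count)
if current is not None:
    print(f"{current}\t{total}")

A typical launch, with placeholder paths, would be: hadoop jar hadoop-streaming.jar -files mapper.py,reducer.py -mapper mapper.py -reducer reducer.py -input /demo/in -output /demo/out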
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Job Description
YOUR IMPACT
Are you passionate about developing mission-critical, high-quality software solutions, using cutting-edge technology, in a dynamic environment?
OUR IMPACT
We are Compliance Engineering, a global team of more than 300 engineers and scientists who work on the most complex, mission-critical problems. We build and operate a suite of platforms and applications that prevent, detect, and mitigate regulatory and reputational risk across the firm. We have access to the latest technology and to massive amounts of structured and unstructured data, and we leverage modern frameworks to build responsive and intuitive UX/UI and Big Data applications.
Compliance Engineering is looking to fill several big data software engineering roles. Your first deliverable and success criteria will be the deployment, in 2025, of new complex data pipelines and surveillance models to detect inappropriate trading activity.
How You Will Fulfill Your Potential
As a member of our team, you will:
partner globally with sponsors, users and engineering colleagues across multiple divisions to create end-to-end solutions,
learn from experts,
leverage various technologies including Java, Spark, Hadoop, Flink, MapReduce, HBase, JSON, Protobuf, Presto, Elastic Search, Kafka, and Kubernetes,
be able to innovate and incubate new ideas,
have an opportunity to work on a broad range of problems, including negotiating data contracts, capturing data quality metrics, processing large-scale data, and building surveillance detection models,
be involved in the full life cycle: defining, designing, implementing, testing, deploying, and maintaining software systems across our products.
Qualifications
A successful candidate will possess the following attributes:
A Bachelor's or Master's degree in Computer Science, Computer Engineering, or a similar field of study.
Expertise in Java, as well as proficiency with databases and data manipulation.
Experience in end-to-end solutions, automated testing and SDLC concepts.
The ability (and tenacity) to clearly express ideas and arguments in meetings and on paper.
Experience in some of the following is desired and can set you apart from other candidates:
developing in large-scale systems, such as MapReduce on Hadoop/HBase,
data analysis using tools such as SQL, Spark SQL, Zeppelin/Jupyter,
API design, such as to create interconnected services,
knowledge of the financial industry and compliance or risk functions,
ability to influence stakeholders.
About Goldman Sachs
Goldman Sachs is a leading global investment banking, securities and investment management firm that provides a wide range of financial services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals. Founded in 1869, the firm is headquartered in New York and maintains offices in all major financial centers around the world.
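A minimal sketch of the kind of Spark SQL data analysis the posting mentions, in PySpark; the trade records and the just-under-threshold detection rule are toy inventions for illustration, not an actual surveillance model.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("surveillance-demo").getOrCreate()

# Hypothetical trade events; a real pipeline would read these from HBase/Kafka/HDFS.
trades = spark.createDataFrame(
    [("T1", "ACME", "alice", 9500), ("T2", "ACME", "alice", 9800),
     ("T3", "GLOBO", "bob", 120), ("T4", "ACME", "alice", 9900)],
    ["trade_id", "symbol", "trader", "notional"],
)
trades.createOrReplaceTempView("trades")

# Flag traders with repeated just-under-threshold orders (a toy detection rule).
flagged = spark.sql("""
    SELECT trader, symbol, COUNT(*) AS n, AVG(notional) AS avg_notional
    FROM trades
    WHERE notional BETWEEN 9000 AND 10000
    GROUP BY trader, symbol
    HAVING COUNT(*) >= 3
""")
flagged.show()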
Posted 1 week ago
0 years
2 - 7 Lacs
Hyderābād
On-site
Note: By applying to this position you will have an opportunity to share your preferred working location from the following: Hyderabad, Telangana, India; Bengaluru, Karnataka, India.
Minimum qualifications:
Bachelor's degree in Computer Science or related technical field, or equivalent practical experience.
Experience building data and AI solutions and working with technical customers.
Experience designing cloud enterprise solutions and supporting customer projects to completion.
Ability to communicate in English fluently to support client relationship management in this region.
Preferred qualifications:
Experience working with Large Language Models, data pipelines, and with data analytics and data visualization techniques.
Experience with core Data ETL techniques.
Experience in leveraging LLMs to deploy multimodal solutions encompassing Text, Image, Video and Voice.
Knowledge of data warehousing concepts, including data warehouse technical architectures, infrastructure components, ETL/ELT and reporting/analytic tools and environments (Apache Beam, Hadoop, Spark, Pig, Hive, MapReduce, Flume).
Knowledge of cloud computing, including virtualization, hosted services, multi-tenant cloud infrastructures, storage systems, and content delivery networks.
About the job
The Google Cloud Consulting Professional Services team guides customers through the moments that matter most in their cloud journey to help businesses thrive. We help customers transform and evolve their business through the use of Google’s global network, web-scale data centers, and software infrastructure. As part of an innovative team in this rapidly growing business, you will help shape the future of businesses of all sizes and use technology to connect with customers, employees, and partners.
As a Cloud Engineer, you'll play a key role in ensuring that strategic customers have the best experience moving to the Google Cloud GenAI and Agentic AI suite of products. You will design and implement solutions for customer use cases, leveraging core Google products. You'll work with customers to identify opportunities to transform their business with GenAI, and deliver workshops designed to educate and empower customers to realize the full potential of Google Cloud. You will have access to Google’s technology to monitor application performance, debug and troubleshoot product issues, and address customer and partner needs. In this role, you will lead the timely execution of adopting the Google Cloud Platform solutions to the customer.
Google Cloud accelerates every organization’s ability to digitally transform its business and industry. We deliver enterprise-grade solutions that leverage Google’s cutting-edge technology, and tools that help developers build more sustainably. Customers in more than 200 countries and territories turn to Google Cloud as their trusted partner to enable growth and solve their most critical business problems.
Responsibilities
Deliver effective big data and GenAI solutions and solve complex technical customer challenges.
Act as a trusted technical advisor to Google’s strategic customers.
Identify new product features and feature gaps, provide guidance on existing product challenges, and collaborate with Product Managers and Engineers to influence the roadmap of Google Cloud Platform.
Deliver best-practices recommendations, tutorials, blog articles, and technical presentations adapting to different levels of key business and technical stakeholders.
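A minimal sketch of programmatic BigQuery analysis of the kind this role supports, assuming the google-cloud-bigquery client library with application-default credentials and a configured GCP project; the query runs against a public sample dataset and is purely illustrative.

from google.cloud import bigquery

# Assumes application-default credentials and a default project are configured.
client = bigquery.Client()

query = """
    SELECT corpus, SUM(word_count) AS words
    FROM `bigquery-public-data.samples.shakespeare`
    GROUP BY corpus
    ORDER BY words DESC
    LIMIT 5
"""
# query() submits the job; result() blocks until rows are available.
for row in client.query(query).result():
    print(row.corpus, row.words)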
Google is proud to be an equal opportunity workplace and is an affirmative action employer. We are committed to equal employment opportunity regardless of race, color, ancestry, religion, sex, national origin, sexual orientation, age, citizenship, marital status, disability, gender identity or Veteran status. We also consider qualified applicants regardless of criminal histories, consistent with legal requirements. See also Google's EEO Policy and EEO is the Law. If you have a disability or special need that requires accommodation, please let us know by completing our Accommodations for Applicants form.
Posted 1 week ago
8.0 years
20 - 30 Lacs
Coimbatore, Tamil Nadu, India
On-site
Company: KGIS
Website: Visit Website
Business Type: Enterprise
Company Type: Service
Business Model: Others
Funding Stage: Pre-seed
Industry: IT Services
Salary Range: ₹20-30 Lacs PA
Job Description
About Us
At KGiNVICTA, we build cutting-edge digital and analytics solutions for global enterprises, combining domain expertise with advanced technology. Join our team to lead and deliver data-driven innovations that transform business outcomes.
Role Overview
We are looking for a seasoned Big Data Engineer with 5-8 years of hands-on experience in designing, developing, and optimizing large-scale data processing systems. You’ll play a key role in building scalable data pipelines, driving performance tuning, and leading cloud-native big data initiatives.
Key Responsibilities
Design and develop robust, scalable Big Data solutions using Apache Spark and related technologies.
Build batch and real-time data pipelines using Spark, Spark Streaming, Kafka, and RabbitMQ.
Implement ETL processes to ingest and transform data from RDBMS, ERP systems, and file-based sources.
Optimize Spark jobs for performance and scalability.
Work with NoSQL technologies like HBase, Cassandra, or MongoDB.
Query large datasets using tools like Hive and Impala.
Ensure seamless integration of data from heterogeneous sources.
Lead a team of data engineers and ensure adherence to Agile methodologies.
Collaborate with cross-functional teams to deliver cloud-based solutions on AWS or Azure (Databricks preferred).
Must-Have Skills
Deep expertise in Apache Spark and distributed computing.
Strong programming skills in Python.
Solid experience with Hadoop v2, MapReduce, HDFS, Sqoop.
Real-time stream processing using Apache Storm or Spark Streaming.
Proficient in working with messaging systems like Kafka or RabbitMQ.
SQL mastery: joins, stored procedures, query optimization, and schema design.
Hands-on with NoSQL databases: HBase, Cassandra, MongoDB.
Experience with cloud-native services in AWS or Azure.
Strong understanding of ETL tools and performance tuning.
Agile mindset and experience working in Agile/Scrum teams.
Excellent problem-solving skills and team leadership experience.
Nice to Have
Exposure to data lake and lakehouse architectures.
Familiarity with DevOps tools for CI/CD and data pipeline monitoring.
Certifications in cloud or big data technologies (AWS, Azure, Databricks, etc.).
Why KGiNVICTA?
Work on innovative projects with Fortune 500 clients.
Fast-paced, meritocratic culture with real ownership.
Access to cutting-edge tools and technologies.
Collaborative, growth-focused environment.
📩 Ready to take your Big Data career to the next level? Apply now and be a part of our digital transformation journey.
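A minimal sketch of the NoSQL side of this stack, assuming a local MongoDB instance accessed via pymongo; the database, collection, and fields are invented, and production code would use a replica-set URI with authentication.

from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # placeholder connection string
events = client["demo"]["events"]

events.insert_many([
    {"device": "d1", "metric": "temp", "value": 21.5},
    {"device": "d1", "metric": "temp", "value": 22.1},
    {"device": "d2", "metric": "temp", "value": 19.8},
])

# Aggregation pipeline: average value per device.
pipeline = [
    {"$match": {"metric": "temp"}},
    {"$group": {"_id": "$device", "avg_value": {"$avg": "$value"}}},
]
for doc in events.aggregate(pipeline):
    print(doc["_id"], round(doc["avg_value"], 2))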
Posted 1 week ago
3.0 - 8.0 years
5 - 10 Lacs
Pune
Work from Office
Roles and Responsibilities
Design, develop, and implement scalable Kafka infrastructure solutions.
Collaborate with cross-functional teams to identify and prioritize project requirements.
Develop and maintain technical documentation for Kafka infrastructure projects.
Troubleshoot and resolve complex issues related to Kafka infrastructure.
Ensure compliance with industry standards and best practices for Kafka infrastructure.
Participate in code reviews and contribute to the improvement of overall code quality.
Job Requirements
Strong understanding of Kafka architecture and design principles.
Experience with Kafka tools such as Streams, KSQL, and SCADA.
Proficient in programming languages such as Java, Python, or Scala.
Excellent problem-solving skills and attention to detail.
Ability to work collaboratively in a team environment.
Strong communication and interpersonal skills.
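A minimal sketch of Kafka produce-side plumbing, assuming the confluent-kafka Python client; the broker address and topic name are placeholders.

import json
from confluent_kafka import Producer

producer = Producer({"bootstrap.servers": "localhost:9092"})  # placeholder broker

def on_delivery(err, msg):
    # Invoked from poll()/flush(); surfaces per-message success or failure.
    if err is not None:
        print("delivery failed:", err)
    else:
        print(f"delivered to {msg.topic()}[{msg.partition()}]@{msg.offset()}")

for i in range(5):
    payload = json.dumps({"event_id": i, "kind": "demo"})
    producer.produce("events", key=str(i), value=payload, callback=on_delivery)
    producer.poll(0)   # serve delivery callbacks without blocking

producer.flush()       # block until all messages are acknowledged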
Posted 1 week ago
4.0 - 7.0 years
25 - 30 Lacs
Ahmedabad
Work from Office
ManekTech is looking for a Data Engineer to join our dynamic team and embark on a rewarding career journey.

Responsibilities:
Liaising with coworkers and clients to elucidate the requirements for each task.
Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed.
Reformulating existing frameworks to optimize their functioning.
Testing such structures to ensure that they are fit for use.
Preparing raw data for manipulation by data scientists (see the sketch below).
Detecting and correcting errors in your work.
Ensuring that your work remains backed up and readily accessible to relevant coworkers.
Remaining up to date with industry standards and technological advancements that will improve the quality of your outputs.
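As a rough sketch of the "preparing raw data" step described above, assuming PySpark; the file paths and column names are hypothetical placeholders.

```python
# Sketch of a batch ETL step that prepares raw data for data scientists,
# assuming PySpark; paths and column names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, trim

spark = SparkSession.builder.appName("raw-to-curated").getOrCreate()

raw = spark.read.option("header", True).csv("s3a://lake/raw/customers.csv")

curated = (raw
           .withColumn("email", trim(col("email")))  # normalize whitespace
           .dropDuplicates(["customer_id"])          # correct duplicate rows
           .na.drop(subset=["customer_id"]))         # drop unusable records

# Columnar output keeps the data readily accessible to downstream users.
curated.write.mode("overwrite").parquet("s3a://lake/curated/customers/")
```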
Posted 1 week ago
8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Teamwork makes the stream work.

Roku is changing how the world watches TV. Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers.

From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the Team
Roku pioneered TV streaming and continues to innovate and lead the industry. Continued success relies on investing in the Roku Content Platform, so we deliver a high-quality streaming TV experience at global scale. As part of our Content Platform team, you join a small group of highly skilled engineers who own significant responsibility in crafting, developing, and maintaining our large-scale backend systems, data pipelines, storage, and processing services. We provide all insights regarding all content on Roku devices.

About the Role
We are looking for a Senior Software Engineer with extensive experience in backend development, data engineering, and data analytics to focus on building a next-level content platform and data intelligence, which empowers Search, Recommendation, and many more critical systems across the Roku platform. This is an excellent role for a senior professional who enjoys a high level of visibility, thrives on having a critical business impact, is able to make critical decisions, and is excited to work on a core data platform component that is crucial for many streaming components at Roku.

What You'll Be Doing
Work closely with the product management team, content data platform services, and other internal consumer teams to contribute extensively to our content data platform and underlying architecture.
Build low-latency and optimized streaming and batch data pipelines to enable downstream services.
Build and support our microservices-based, event-driven backend systems and data platform.
Design and build data pipelines for batch, near-real-time, and real-time processing (see the sketch after this posting).
Participate in architecture discussions, influence the product roadmap, and take ownership and responsibility over new projects.

We're excited if you have
8+ years of professional experience as a Software Engineer.
Proficiency in Java/Scala/Python.
Deep understanding of backend technologies, architecture patterns, and best practices, including microservices, RESTful APIs, message queues, caching, and databases.
Strong analytical and problem-solving skills, data structures and algorithms, with the ability to translate complex technical requirements into scalable and efficient solutions.
Experience with microservice and event-driven architectures.
Experience with Apache Spark and Apache Flink.
Experience with big data frameworks and tools: MapReduce, Hive, Presto, HDFS, YARN, Kafka, etc.
Experience with Apache Airflow or similar workflow orchestration tooling for ETL.
Experience with cloud platforms: AWS (preferred), GCP, etc.
Strong communication and presentation skills.
BS in Computer Science; MS in Computer Science preferred.
AI literacy and curiosity: you have either tried Gen AI in previous work or outside of work, or are curious about Gen AI and have explored it.

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits, which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a fewer number of very talented folks can do more for less cost than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast, and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea. We come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
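The role above calls for Apache Airflow experience for ETL orchestration. Below is a minimal sketch of a two-task Airflow DAG, assuming Airflow 2.x; the DAG id, schedule, and spark-submit commands are hypothetical placeholders.

```python
# Minimal sketch of an Airflow DAG orchestrating a batch ETL step,
# assuming Airflow 2.x; DAG id, dates, and commands are hypothetical.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="content_metadata_daily",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="spark-submit extract_content.py",
    )
    load = BashOperator(
        task_id="load",
        bash_command="spark-submit load_warehouse.py",
    )
    extract >> load  # load runs only after extract succeeds
```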
Posted 1 week ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate,

We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks.

Key Responsibilities:
Design and develop data pipelines using Hadoop, Spark, or Flink.
Optimize big data applications for performance and reliability (see the sketch below).
Integrate various structured and unstructured data sources.
Work with data scientists and analysts to prepare datasets.
Ensure data quality, security, and lineage across platforms.

Required Skills & Qualifications:
Experience with the Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark.
Proficiency in Java, Scala, or Python.
Familiarity with data ingestion tools (Kafka, Sqoop, NiFi).
Strong understanding of distributed computing principles.
Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight).

Note: If interested, please share your updated resume and your preferred time for a discussion. If shortlisted, our HR team will contact you.

Kandi Srinivasa
Delivery Manager
Integra Technologies
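Two optimization techniques of the kind implied by the responsibilities above, sketched in PySpark with hypothetical table and path names: broadcasting a small dimension table to avoid a shuffle, and partitioning output so downstream reads stay selective.

```python
# Sketch of two common Spark optimizations, assuming PySpark;
# table and path names are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("pipeline-tuning").getOrCreate()

facts = spark.read.parquet("s3a://lake/events/")      # large fact table
dims = spark.read.parquet("s3a://lake/dim_country/")  # small lookup table

# Broadcasting the small side avoids a full shuffle of the large table.
joined = facts.join(broadcast(dims), on="country_code")

# Partitioning by date keeps downstream reads selective and fast.
(joined.write
 .mode("overwrite")
 .partitionBy("event_date")
 .parquet("s3a://lake/enriched_events/"))
```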
Posted 1 week ago