
3786 Hadoop Jobs - Page 31

JobPe aggregates listings for easy access; applications are submitted directly on the original job portal.

5.0 - 10.0 years

25 - 35 Lacs

Hyderabad, Pune, Bengaluru

Work from Office


Job Description

Data Engineer / Lead

Required Minimum Qualifications:
- Bachelor's degree in Computer Science, CIS, or a related field
- 5-10 years of IT experience in software engineering or a related field
- Experience on projects involving the implementation of software development life cycles (SDLC)

Primary Skills: PySpark, SQL, GCP ecosystem (BigQuery, Cloud Composer, Dataproc)

Responsibilities:
- Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks, leveraging open-source tools and data-processing frameworks.
- Work hands-on with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, and Airflow.
- Apply experience with GCP Cloud Composer, BigQuery, and Dataproc.
- Offer system support as part of a support rotation with other team members.
- Operationalize open-source data-analytics tools for enterprise use.
- Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
- Understand and follow the company development lifecycle to develop, deploy, and deliver.
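To make the primary skill set concrete, here is a minimal, hedged PySpark sketch of the ingestion pattern this role describes: read raw files from GCS, apply a simple transformation, and write to BigQuery. The bucket, dataset, and table names are hypothetical placeholders, not part of the posting, and the sketch assumes the spark-bigquery connector is available (it ships with Dataproc).

```python
# Minimal ingestion sketch: raw CSV in GCS -> cleaned rows -> BigQuery.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("ingest-orders").getOrCreate()

raw = (spark.read
       .option("header", True)
       .csv("gs://example-bucket/raw/orders/*.csv"))   # hypothetical path

cleaned = (raw
           .withColumn("order_ts", F.to_timestamp("order_ts"))
           .filter(F.col("amount").cast("double") > 0))

(cleaned.write
 .format("bigquery")
 .option("table", "example_project.analytics.orders")  # hypothetical table
 .option("temporaryGcsBucket", "example-bucket-tmp")   # staging bucket for the connector
 .mode("append")
 .save())
```

On Dataproc this would typically be scheduled from Cloud Composer (Airflow) as a DataprocSubmitJobOperator task.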

Posted 1 week ago


3.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Description

Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud service offering in the enterprise software industry, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). OCI is currently building a future-ready Gen2 cloud Data Science service platform, and at the core of this platform lies the AI Cloud Service.

What OCI AI Cloud Services are: a set of services on the public cloud, powered by ML and AI, that meet enterprise modernization needs and work out of the box. These services and models can be easily specialized for specific customers and domains by building on existing OCI services.

Key points: the services enable customers to add AI capabilities to their apps and workflows easily via APIs or containers, are usable without building AI expertise in-house, and cover key gaps for public clouds and enterprises in decision support, NLU, NLP, vision, and conversational AI.

Your Opportunity: As we innovate to provide a single collaborative ML environment for data-science professionals, we will be extremely happy to have you join us and shape the very future of our machine learning platform by building an AI cloud service. We are addressing exciting challenges at the intersection of artificial intelligence and innovative cloud infrastructure. We are building cloud services in computer vision for image/video and document analysis; decision support (anomaly detection, time-series forecasting, fraud detection, content moderation, risk prevention, predictive analytics); natural language processing (NLP); and speech, all working out of the box for enterprises. Our product vision includes the ability for enterprises to customize the services for their business and to train them to specialize in their data by creating micro models that enhance the global AI models.

What You'll Do
- Develop scalable infrastructure, including microservices and a backend, that automates training, deployment, and optimization of ML model inference.
- Build core AI services such as Vision, Speech, Language, and Decision.
- Brainstorm and design POCs using AI services for new or existing enterprise problems.
- Collaborate with fellow data scientists and software engineers to build out other parts of the infrastructure, effectively communicating your needs, understanding theirs, and addressing internal and external stakeholder product challenges.
- Lead research and development efforts to explore new tools, frameworks, and methodologies that improve backend development processes.
- Experiment with ML models in Python/C++ using machine learning libraries (PyTorch, ONNX, TensorRT, Triton, TensorFlow, JAX), etc.
- Leverage cloud technology: Oracle Cloud (OCI), AWS, GCP, Azure, or similar.

Qualifications
- Master's degree or equivalent experience (preferred) in computer science, statistics, mathematics, artificial intelligence, machine learning, computer vision, operations research, or a related technical field.
- 3+ years of experience for a PhD (or equivalent), 5+ years for a Master's, or a demonstrated ability to design, implement, and deploy machine learning models in production environments.
- Practical experience in the design, implementation, and production deployment of distributed systems using microservices architecture and APIs, with common frameworks such as Spring Boot (Java).
- Practical experience working in a cloud environment (Oracle Cloud (OCI), AWS, GCP, Azure) and with containerization (Docker, Kubernetes).
- Working knowledge of current techniques, approaches, and inference-optimization strategies for machine learning models.
- Experience with performance tuning, scalability, and load-balancing techniques.
- Expertise in at least one high-level language such as Java or C++ (Java preferred).
- Expertise in at least one scripting language such as Python, JavaScript, or Shell.
- Deep understanding of data structures and algorithms, and excellent problem-solving skills.
- Experience with, or willingness to learn and work in, Agile and iterative development and DevOps processes.
- Strong drive to learn and master new technologies and techniques; you enjoy a fast-paced work environment.

Additional Preferred Qualifications
- Experience with cloud-native frameworks, tools, and products is a plus.
- Experience in computer vision tasks such as image classification, object detection, segmentation, text detection and recognition, and information extraction from documents.
- An impressive set of GitHub projects or contributions to open-source technologies is a plus.
- Hands-on experience with horizontally scalable data stores such as Hadoop and other NoSQL technologies like Cassandra is a plus.

Our vision is to provide an immersive AI experience on Oracle Cloud. Aggressive as it might sound, our growth journey is fueled by highly energetic, technology-savvy engineers like you who are looking to grow with us to meet the demands of building a powerful next-generation platform. Are you ready to do something big?

Career Level - IC3

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
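The posting centers on automating the deployment and optimization of ML model inference. As a hedged illustration of one common inference-optimization step (not Oracle's actual stack), the sketch below exports a small PyTorch model to ONNX and scores a batch with ONNX Runtime; the model architecture and tensor shapes are placeholders.

```python
# Export a toy PyTorch classifier to ONNX, then serve it with ONNX Runtime.
import numpy as np
import torch
import torch.nn as nn
import onnxruntime as ort

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
model.eval()

dummy = torch.randn(1, 16)
torch.onnx.export(model, dummy, "classifier.onnx",
                  input_names=["features"], output_names=["logits"],
                  dynamic_axes={"features": {0: "batch"}})  # allow variable batch size

# Inference side: load the graph once, score many requests.
session = ort.InferenceSession("classifier.onnx")
batch = np.random.randn(4, 16).astype(np.float32)
logits = session.run(["logits"], {"features": batch})[0]
print(logits.shape)  # (4, 2)
```

In a production service this session would sit behind a microservice endpoint, with further acceleration (TensorRT, Triton) layered on as the listing suggests.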

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Job Description

Who We Are
At Goldman Sachs, we connect people, capital and ideas to help solve problems for our clients. We are a leading global financial services firm providing investment banking, securities and investment management services to a substantial and diversified client base that includes corporations, financial institutions, governments and individuals.

Controls Engineering - Global Banking & Markets - Analyst
Controls Engineering is responsible for building the next-generation firm-wide control plane for our front-office desks. The successful candidate will use their deep technical skills to inform the implementation of a highly scalable message-driven architecture, processing ~3bn messages per day and making 'safe to trade' determinations in real time. The role will also involve building out web applications that allow users to register, develop and administer controls on the platform.

Role Overview
This role offers the opportunity to work in a competitive and nimble team, implementing high-performance code using open-source libraries. Candidates will work directly with a variety of stakeholders, including product managers and Global Banking & Markets risk managers, to improve our controls data platform. The team, based in London and India, focuses on the control data solution for Global Banking & Markets Operational Risk and on delivering new features.
- Use data to guide decision-making, developing or enhancing tools as necessary to collect it.
- Understand market rules, regulations, exchange service offerings, and front-to-back business functions, and build systems to facilitate them.
- Communicate with traders, sales, clients and compliance officers about new systems, feature requests, explanations of existing features, etc.

Job Duties
- Deliver and design new features for the Control Solutions team.
- Investigate incidents to review and redesign existing flows and improve platform stability.
- Contribute to SDLC documentation and guidance, including templates, patterns, and controls.
- Actively participate as a member of a global team on larger development projects, assuming responsibility for components of global projects depending on need.
- Collaborate with engineering leadership, developers, and operations through written and verbal presentations.

Minimum Education Requirements
Bachelor's degree in Computer Science, Information Technology, or a related field.

Minimum Years of Experience Required
One (1) year of experience in the job offered or in a related software engineering or full-stack software engineering position.

Special Skills or Licences Required to Perform This Job
Prior employment must include one (1) year of experience with:
- Software engineering principles and practices
- Working knowledge of at least one high-level programming language such as Java or Python
- Working knowledge of algorithms, data structures and enterprise applications
- Formulating clear and concise written and verbal descriptions of software and systems for engineering stakeholders, and tracking and managing delivery of the same

Preferred Qualifications
- Experience with Kubernetes deployment architectures
- Experience in distributed systems (Kafka, Flink)
- Experience in microservices architecture
- Experience with NoSQL (Mongo, Elastic, Hadoop), in-memory (MemSQL, Ignite), cloud (Snowflake) and relational (DB2, SybaseIQ) data store solutions
- Experience with UI technologies like React and JavaScript

Goldman Sachs Engineering Culture
At Goldman Sachs, our engineers don't just make things – we make things possible. Change the world by connecting people and capital with ideas. Solve the most challenging and pressing engineering problems for our clients. Join our engineering teams that build massively scalable software and systems, architect low-latency infrastructure solutions, proactively guard against cyber threats, and leverage machine learning alongside financial engineering to continuously turn data into action. Create new businesses, transform finance, and explore a world of opportunity at the speed of markets. Engineering is at the critical center of our business, and our dynamic environment requires innovative strategic thinking and immediate, real solutions. Want to push the limit of digital possibilities? Start here!

© The Goldman Sachs Group, Inc., 2025. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
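For illustration only: the listing describes a message-driven stream making real-time 'safe to trade' determinations. A minimal sketch of that consumer shape, using the kafka-python client, follows; the topic name, per-asset limits, and event schema are all hypothetical, and this is in no way Goldman Sachs' actual platform.

```python
# Toy "safe to trade" check over a Kafka stream of JSON order events.
import json
from kafka import KafkaConsumer  # pip install kafka-python

LIMITS = {"EQUITY": 1_000_000, "FX": 5_000_000}  # hypothetical per-asset-class limits

def safe_to_trade(event: dict) -> bool:
    limit = LIMITS.get(event.get("asset_class"), 0)
    return abs(event.get("notional", 0)) <= limit

consumer = KafkaConsumer(
    "order-events",                         # hypothetical topic
    bootstrap_servers="localhost:9092",
    group_id="controls-engine",
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)

for message in consumer:
    event = message.value
    verdict = "SAFE" if safe_to_trade(event) else "BLOCKED"
    print(f"{event.get('order_id')}: {verdict}")
```

At the volumes the posting cites (~3bn messages/day), a real implementation would shard consumers across partitions and keep the per-message check allocation-free; the sketch only shows the control-evaluation shape.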

Posted 1 week ago


3.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


The Digital Solutions and Innovation (DSI) team within the Citi Internal Audit Innovation function is looking for a Business Analytics Analyst (Officer) to join the Internal Audit Analytics team. The Analytics team works with members of Internal Audit to identify opportunities and to design, develop, and implement analytics in support of the performance of audit activities, along with automation activities to promote efficiencies and expand coverage. The candidate must be proficient in the development and use of analytics technology and tools to provide analytical insight and automated solutions that enhance audit efficiency and effectiveness, and must have functional knowledge of banking processes and related risks and controls.

Key Responsibilities:
- Participate in the innovative use of audit analytics through direct involvement in all phases of audits.
- Support the definition of data needs, and design and execute audit analytics during audits in accordance with the audit methodology and professional standards.
- Support the execution of automated routines to help focus audit testing.
- Execute innovation solutions and pre-defined analytics in accordance with standard A&A procedures.
- Assist audit teams in performing moderately complex audits related to a specific area of the bank: Consumer Banking, Investment Banking, Risk, Finance, Compliance, and/or Technology.
- Provide support to other members of the Analytics and Automation team and the wider Digital Solutions and Innovation team.
- Clearly articulate analytics requirements and results, verbally and in writing.
- Develop professional relationships with audit teams to assist in the definition of analytics and automation opportunities.
- Develop effective working relationships with the technology and business teams of the area being audited, to facilitate understanding of processes and the sourcing of data.
- Promote continuous improvement in all aspects of audit automation activities (e.g., technical environment, software, operating procedures).

Key Qualifications and Competencies:
- At least 3 years of business/audit analyst experience providing analytical techniques and automated solutions for business needs.
- Work experience in a global environment and in a large company.
- Excellent technical, programming, and database skills.
- Excellent analytical ability to understand business processes and related risks and controls, and to develop innovative audit analytics based on audit needs.
- Strong interpersonal and multicultural skills for interfacing with all levels of internal and external audit and management.
- A self-driven, problem-solving approach.
- Understanding of, and adherence to, procedures that maintain the quality and security of processes.
- A detail-oriented approach, with consistent, diligent self-review of work product and attention to data completeness and accuracy.
- Data literacy, with the ability to understand and effectively communicate what data means to technical and non-technical stakeholders.

Proficiency in one or more of the following technical skills is required:
- SQL
- Python
- Hadoop ecosystem (Hive, Sqoop, PySpark, etc.)
- Alteryx

Proficiency in at least one of the following data visualization tools is a plus:
- Tableau
- MicroStrategy
- Cognos

Experience in the following areas would be a plus:
- Business intelligence, including the use of statistics, data modelling, data mining, and predictive analytics.
- Application of data science tools and techniques to advance the insights obtained through the interrogation of data.
- Working with non-structured data such as PDF files.
- Banking businesses (e.g., Institutional Clients Group, Consumer, Corporate Functions) or areas of expertise (e.g., Anti-Money Laundering, Regulatory Reporting).
- Big data analysis, including dedicated big data tools such as Hue and Hive.
- Project management / the solution development life cycle.
- Exposure to process-mining software such as Celonis.

What we offer:
- A chance to develop in a highly innovative environment where you can use the newest technologies in a top-quality organizational culture.
- Professional development in a truly global environment.
- An inclusive and friendly corporate culture where gender diversity and equality are widely recognized.
- A supportive workplace for professionals returning to the office from childcare leave.
- An enjoyable and challenging learning path, which leads to a deep understanding of Citi's products and services.
- A yearly discretionary bonus and competitive social benefits (private medical care, multisport, life insurance, an award-winning pension scheme, holiday allowance, a flexible working schedule, and more).

This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required.

Job Family Group: Decision Management
Job Family: Business Analysis
Time Type:

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
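To make the audit-analytics idea concrete, here is a hedged PySpark sketch of a completeness check against Hive tables: reconcile a transaction table against independently supplied control totals and flag the days that break. The database, table, and column names (including the assumed control-totals schema of business_date / expected_rows / expected_amount) are hypothetical.

```python
# Audit completeness check: transactions vs. source control totals, by day.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("audit-completeness")
         .enableHiveSupport()
         .getOrCreate())

txns = spark.table("audit_db.card_transactions")          # hypothetical
expected = spark.table("audit_db.source_control_totals")  # hypothetical

actual = (txns.groupBy("business_date")
          .agg(F.count("*").alias("actual_rows"),
               F.sum("amount").alias("actual_amount")))

breaks = (expected.join(actual, "business_date", "left")
          .where(F.col("actual_rows").isNull() |
                 (F.col("actual_rows") != F.col("expected_rows")) |
                 (F.abs(F.col("actual_amount") - F.col("expected_amount")) > 0.01)))

breaks.show()  # each surviving row is a day needing auditor follow-up
```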

Posted 1 week ago


0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Responsibilities:
- End-to-end development and delivery of MIS reports and dashboards supporting credit card and lending portfolio acquisition, early engagement, existing customer management, rewards, retention, and attrition.
- Partner with business stakeholders to understand requirements and deliver actionable insights through automated reporting solutions.
- Maintain and optimize existing SAS-based reporting processes while leading the migration to Python/PySpark on big data platforms.
- Design and build interactive dashboards in Tableau for senior leadership and regulatory reporting.
- Build and implement an automated audit framework to ensure data accuracy, completeness, and consistency across the entire reporting life cycle.
- Collaborate with Data Engineering and IT teams to leverage data lakes and enterprise data platforms.
- Mentor junior analysts and contribute to knowledge sharing across teams.
- Support ad-hoc analysis and audits with quick turnaround and attention to data integrity.

Qualifications:
- Experience in MIS reporting, data analytics, or BI in Banking/Financial Services, with a strong focus on credit cards.
- Proficiency in SAS for data extraction, manipulation, and automation.
- Advanced skills in Python and PySpark, particularly in big data environments (e.g., Hadoop, Hive, Databricks).
- Expertise in Tableau for dashboard design and data storytelling.

Job Family Group: Management Development Programs
Job Family: Undergraduate
Time Type:

Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View Citi's EEO Policy Statement and the Know Your Rights poster.
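Since the role pairs a SAS-to-PySpark migration with an automated audit framework, a minimal reconciliation sketch follows: compare a legacy extract with its migrated counterpart on row counts and key aggregates before the new pipeline is trusted. Paths and column names are hypothetical.

```python
# Migration reconciliation: legacy extract vs. migrated PySpark output.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("mis-reconciliation").getOrCreate()

legacy = spark.read.parquet("/data/mis/legacy/cards_monthly")     # hypothetical path
migrated = spark.read.parquet("/data/mis/pyspark/cards_monthly")  # hypothetical path

def profile(df):
    """Summarize a dataset down to a few comparable control metrics."""
    return df.agg(F.count("*").alias("rows"),
                  F.countDistinct("account_id").alias("accounts"),
                  F.sum("balance").alias("total_balance")).first()

old, new = profile(legacy), profile(migrated)
for field in ("rows", "accounts", "total_balance"):
    status = "OK" if old[field] == new[field] else "MISMATCH"
    print(f"{field}: legacy={old[field]} migrated={new[field]} -> {status}")
```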

Posted 1 week ago


3.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Welcome to Warner Bros. Discovery… the stuff dreams are made of.

Who We Are…
When we say, “the stuff dreams are made of,” we’re not just referring to the world of wizards, dragons and superheroes, or even to the wonders of Planet Earth. Behind WBD’s vast portfolio of iconic content and beloved brands are the storytellers bringing our characters to life, the creators bringing them to your living rooms, and the dreamers creating what’s next… From brilliant creatives to technology trailblazers across the globe, WBD offers career-defining opportunities, thoughtfully curated benefits, and the tools to explore and grow into your best self. Here you are supported, here you are celebrated, here you can thrive.

Analytics Engineer II - Hyderabad, India

About Warner Bros. Discovery
Warner Bros. Discovery, a premier global media and entertainment company, offers audiences the world’s most differentiated and complete portfolio of content, brands and franchises across television, film, streaming and gaming. The new company combines WarnerMedia’s premium entertainment, sports and news assets with Discovery’s leading non-fiction and international entertainment and sports businesses. For more information, please visit www.wbd.com.

Roles & Responsibilities
As an Analytics Engineer II, you will perform data analytics and data-visualization work for the Data & Analytics organization at WBD. You’re an engineer who understands not only how to use big data to answer complex business questions but also how to design semantic layers that best support self-service vehicles. You will manage projects from requirements gathering through planning to implementation of full-stack data solutions (pipelines to data tables to visualizations) with the support of the larger team. You will work closely with cross-functional partners to ensure that business logic is properly represented in the semantic layer and production environments, where it can be used by the wider Data & Analytics team to drive business insights and strategy.
- Design and implement data models that support flexible querying and data visualization (a worked example follows this listing).
- Partner with stakeholders to understand business questions and build out advanced analytical solutions.
- Advance automation efforts that help the team spend less time manipulating and validating data and more time analyzing it.
- Build frameworks that multiply the team’s productivity and are intuitive for other data teams to leverage.
- Participate in the creation and support of analytics development standards and best practices.
- Create systematic solutions for handling data anomalies: identification, alerting, and root-cause analysis.
- Work proactively with stakeholders to understand business needs and build data analytics capabilities, especially in large enterprise use cases.
- Identify and explore new opportunities through creative analytical and engineering methods.

What To Bring
- Bachelor's degree, MS or greater, in a quantitative field of study (Computer/Data Science, Engineering, Mathematics, Statistics, etc.)
- 3+ years of relevant experience in business intelligence/data engineering
- Expertise in writing SQL (clean, fast code is a must) and in data-warehousing concepts such as star schemas, slowly changing dimensions, ELT/ETL, and MPP databases
- Experience in transforming flawed or changing data into consistent, trustworthy datasets
- Experience with general-purpose programming (e.g. Python, Scala, or other), dealing with a variety of data structures, algorithms, and serialization formats, is a plus
- Experience with big-data technologies (e.g. Spark, Hadoop, Snowflake)
- Advanced ability to build reports and dashboards with BI tools (such as Looker, Tableau, or Power BI)
- Experience with analytics tools such as Athena, Redshift, BigQuery, or Snowflake
- Proficiency with Git (or similar version control) and CI/CD best practices is a plus
- Ability to write clear, concise documentation and to communicate with a high degree of precision
- Ability to solve ambiguous problems independently
- Ability to manage multiple projects and time constraints simultaneously
- Care for the quality of the input data and for how the processed data is ultimately interpreted and used
- Prior experience in large enterprise use cases such as sales analytics, financial analytics, or marketing analytics
- Strong written and verbal communication skills

Characteristics & Traits
- Naturally inquisitive, a critical thinker, a proactive problem-solver, and detail-oriented
- Positive attitude and an open mind
- Strong organizational skills with the ability to act independently and responsibly
- Self-starter, comfortable taking projects from design to execution with minimal supervision
- Able to manage and balance multiple (and sometimes competing) priorities in a fast-paced, complex business environment, managing time effectively to consistently meet deadlines
- Team player and relationship builder

What We Offer
- A great place to work
- Equal opportunity employer
- Fast-track growth opportunities

How We Get Things Done…
This last bit is probably the most important! Here at WBD, our guiding principles are the core values by which we operate and are central to how we get things done. You can find them at www.wbd.com/guiding-principles/ along with some insights from the team on what they mean and how they show up in their day to day. We hope they resonate with you and look forward to discussing them during your interview.

Championing Inclusion at WBD
Warner Bros. Discovery embraces the opportunity to build a workforce that reflects a wide array of perspectives, backgrounds and experiences. Being an equal opportunity employer means that we take seriously our responsibility to consider qualified candidates on the basis of merit, regardless of sex, gender identity, ethnicity, age, sexual orientation, religion or belief, marital status, pregnancy, parenthood, disability or any other category protected by law. If you’re a qualified candidate with a disability and you require adjustments or accommodations during the job application and/or recruitment process, please visit our accessibility page for instructions to submit your request.
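One data-warehousing concept the posting names is the slowly changing dimension. As a hedged illustration, the PySpark sketch below applies a Type 2 update: when a tracked attribute changes, the open row is closed and a new current row is appended. The schema, sentinel end date, and sample data are all hypothetical.

```python
# SCD Type 2 update for a toy customer dimension.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("scd2-demo").getOrCreate()

OPEN_END = "9999-12-31"  # sentinel meaning "still current"

dim = spark.createDataFrame(
    [(1, "alice@old.com", "2024-01-01", OPEN_END, True)],
    ["customer_id", "email", "valid_from", "valid_to", "is_current"])
incoming = spark.createDataFrame(
    [(1, "alice@new.com", "2024-06-01")],
    ["customer_id", "email", "effective_date"])

# Customers whose tracked attribute actually changed.
changes = (dim.alias("d").join(incoming.alias("i"), "customer_id")
           .where(F.col("d.email") != F.col("i.email"))
           .select("customer_id",
                   F.col("i.email").alias("new_email"),
                   F.col("i.effective_date").alias("effective_date")))

# Close the currently open version of each changed customer...
closed = (dim.join(changes.select("customer_id", "effective_date"),
                   "customer_id", "left")
          .withColumn("valid_to",
                      F.when(F.col("is_current") & F.col("effective_date").isNotNull(),
                             F.col("effective_date")).otherwise(F.col("valid_to")))
          .withColumn("is_current",
                      F.col("is_current") & F.col("effective_date").isNull())
          .drop("effective_date"))

# ...and append the new current version.
new_rows = changes.select("customer_id",
                          F.col("new_email").alias("email"),
                          F.col("effective_date").alias("valid_from"),
                          F.lit(OPEN_END).alias("valid_to"),
                          F.lit(True).alias("is_current"))

dim_v2 = closed.unionByName(new_rows)
dim_v2.orderBy("customer_id", "valid_from").show()
```

Fact tables then join on customer_id plus a date-between predicate on valid_from/valid_to, which is what makes point-in-time reporting possible in a star schema.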

Posted 1 week ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Company Description
Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com.

Job Description
We are looking for a skilled Test Engineer with experience in automated testing, rollback testing, and continuous integration environments. You will be responsible for ensuring the quality and reliability of our software products through automated testing strategies and robust test frameworks.
- Design and execute end-to-end test strategies for data pipelines, ETL/ELT jobs, and database systems.
- Validate data quality, completeness, transformation logic, and integrity across distributed data systems (e.g., Hadoop, Spark, Hive).
- Develop Python-based automated test scripts to validate data flows, schema validations, and business rules (a sketch follows this listing).
- Write complex SQL queries to verify large datasets across staging and production environments.
- Identify data issues and work closely with data engineers to resolve discrepancies.
- Contribute to test data management, environment setup, and regression testing processes.
- Work collaboratively with data engineers, business analysts, and QA leads to ensure accurate and timely data delivery.
- Participate in sprint planning, reviews, and defect triaging as part of the Agile process.

Qualifications
- 4+ years of experience in data testing, big data testing, and/or database testing.
- Strong programming skills in Python for automation and scripting.
- Expertise in SQL for writing complex queries and validating large datasets.
- Experience with big data technologies such as Hadoop, Hive, Spark, HDFS, and Kafka (any combination is acceptable).
- Hands-on experience with ETL/ELT testing and validating data transformations and pipelines.
- Exposure to cloud data platforms like AWS (Glue, S3, Redshift), Azure (Data Lake, Synapse), or GCP is a plus.
- Familiarity with test management and defect tracking tools like JIRA, TestRail, or Zephyr.
- Experience with CI/CD pipelines and version control (e.g., Git) is an advantage.
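A hedged sketch of the automated data-testing approach this role describes: pytest cases that validate row counts, a business rule, and schema between staging and production tables. The table and column names are hypothetical placeholders, and such tests would typically run from a CI/CD pipeline against a test environment.

```python
# Data-quality tests for an ETL pipeline, runnable with `pytest`.
import pytest
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

@pytest.fixture(scope="session")
def spark():
    return (SparkSession.builder
            .appName("etl-tests")
            .enableHiveSupport()
            .getOrCreate())

def test_row_counts_match(spark):
    staging = spark.table("staging.orders")   # hypothetical table
    prod = spark.table("prod.orders")         # hypothetical table
    assert staging.count() == prod.count()

def test_no_negative_amounts(spark):
    prod = spark.table("prod.orders")
    bad = prod.where(F.col("amount") < 0).count()
    assert bad == 0, f"{bad} rows violate the non-negative amount rule"

def test_schema_has_required_columns(spark):
    required = {"order_id", "customer_id", "amount", "order_ts"}
    assert required.issubset(set(spark.table("prod.orders").columns))
```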

Posted 1 week ago


5.0 - 10.0 years

7 - 12 Lacs

Bengaluru

Work from Office


Job Summary
Synechron is seeking a dedicated and technically skilled Hadoop Shell Scripting Engineer to manage and optimize our Hadoop ecosystem. The role involves developing automation utilities, troubleshooting complex issues, and collaborating with vendors on platform enhancements. Your expertise will directly support enterprise data processing, performance tuning, and cloud migration initiatives, ensuring reliable and efficient data infrastructure that aligns with organizational goals.

Software Requirements
Required skills:
- Strong proficiency in UNIX shell scripting, with hands-on experience developing automation utilities
- In-depth understanding of the Hadoop architecture and ecosystem components (HDFS, Hive, Spark)
- Experience with SQL querying and database systems
- Familiarity with Git and enterprise version-control practices
- Working knowledge of DevOps and CI/CD tools and processes

Preferred skills:
- Experience with Python scripting for automation and utility development
- Knowledge of the Java programming language
- Familiarity with cloud platforms (AWS, Azure, GCP) related to Hadoop ecosystem support
- Exposure to Hadoop vendor support and collaboration processes (e.g., Cloudera)

Overall Responsibilities
- Develop, maintain, and enhance scripts and utilities that automate Hadoop cluster management and data-processing tasks (a small utility sketch follows this listing)
- Serve as the Level 3 point of contact for issues related to the Hadoop and Spark platforms
- Perform performance tuning and capacity planning to support enterprise data workloads
- Conduct proof-of-concept tests for emerging technologies and evaluate their suitability for cloud migration projects
- Collaborate with vendor support teams and internal stakeholders on issue resolution, feature requests, and platform improvements
- Review and validate all changes going into production to ensure stability and performance
- Continuously analyze process inefficiencies and develop new automation utilities to enhance productivity
- Assist in capacity management and performance-monitoring activities

Technical Skills (by Category)
- Programming languages: UNIX shell scripting (essential); Python, Java (preferred)
- Databases and data management: SQL querying and database systems (essential); Hive, HDFS (preferred)
- Cloud technologies: basic familiarity with cloud platforms for Hadoop ecosystem support and migration (preferred)
- Frameworks and libraries: not specifically applicable; the focus is on scripting and platform tools
- Development tools and methodologies: Git, version control, DevOps practices, CI/CD pipelines (essential); automation frameworks, monitoring tools (preferred)
- Security protocols: not explicitly specified, but familiarity with secure scripting and data-access controls is advantageous

Experience Requirements
- 5+ years of hands-on experience working with Hadoop clusters and UNIX shell scripting
- Proven experience managing enterprise Hadoop/Spark environments
- Experience in performance tuning, capacity planning, and utility development
- Exposure to cloud migrations or proof-of-concept evaluations is a plus
- A background in data engineering or platform support roles is preferred

Day-to-Day Activities
- Develop and enhance UNIX shell scripts for Hadoop automation and utility management
- Troubleshoot and resolve complex platform issues as the Level 3 point of contact
- Work with application teams to optimize queries and data workflows
- Engage with vendor support teams on platform issues and feature requests
- Perform system performance reviews, capacity assessments, and tuning activities
- Lead initiatives for process automation, efficiency improvement, and new technology evaluations
- Document procedures, scripts, and platform configurations
- Participate in team meetings, provide technical feedback, and collaborate across teams on platform health

Qualifications
Educational requirements:
- Bachelor's degree in Computer Science, Information Technology, or a related field, or equivalent professional experience in data engineering, platform support, or Hadoop administration
Certifications (preferred):
- Certifications in the Hadoop ecosystem, Linux scripting, or cloud platforms
Training and professional development:
- Ongoing learning related to big data platforms, automation, and cloud migration

Professional Competencies
- Strong analytical and troubleshooting skills
- Excellent written and verbal communication skills
- Proven ability to work independently with minimal supervision
- Collaborative team player with a positive attitude
- Ability to prioritize tasks effectively and resolve issues swiftly
- Adaptability to evolving technologies and environments
- Focus on quality, security, and process improvement
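The role centers on shell automation around the Hadoop CLI. For consistency with the other sketches on this page, here is a small Python utility in the same spirit: it shells out to the real `hdfs dfsadmin -report` command and alarms when DFS usage crosses a threshold. The threshold value and exit-code convention are hypothetical choices, not Synechron standards.

```python
# Capacity watchdog: parse `hdfs dfsadmin -report` and warn above a threshold.
import re
import subprocess
import sys

THRESHOLD_PCT = 80.0  # hypothetical capacity alarm level

def dfs_used_percent() -> float:
    report = subprocess.run(["hdfs", "dfsadmin", "-report"],
                            capture_output=True, text=True, check=True).stdout
    match = re.search(r"DFS Used%:\s*([\d.]+)%", report)
    if not match:
        raise RuntimeError("could not parse DFS Used% from dfsadmin report")
    return float(match.group(1))

if __name__ == "__main__":
    used = dfs_used_percent()
    print(f"DFS used: {used:.1f}%")
    if used > THRESHOLD_PCT:
        print("WARNING: capacity threshold exceeded", file=sys.stderr)
        sys.exit(1)  # non-zero exit lets a cron job or scheduler raise an alert
```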

Posted 1 week ago


0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Job Description
Controls Engineering is responsible for building the next-generation firm-wide control plane for our front-office desks. The successful candidate will use their deep technical skills to inform the implementation of a highly scalable message-driven architecture, processing ~3bn messages per day and making 'safe to trade' determinations in real time. The role will also involve building out web applications that allow users to register, develop and administer controls on the platform.

Role Overview
This role offers the opportunity to work in a competitive and nimble team, implementing high-performance code using open-source libraries. Candidates will work directly with a variety of stakeholders, including product managers and Global Banking & Markets risk managers, to improve our controls data platform. The team, based in London and India, focuses on the control data solution for Global Banking & Markets Operational Risk and on delivering new features.
- Use data to guide decision-making, developing or enhancing tools as necessary to collect it.
- Understand market rules, regulations, exchange service offerings, and front-to-back business functions, and build systems to facilitate them.
- Communicate with traders, sales, clients and compliance officers about new systems, feature requests, explanations of existing features, etc.
- Bar-raise solution design and ensure development best practices are followed within delivery teams.

Job Duties
- Deliver and design new features for the Control Solutions team.
- Investigate incidents to review and redesign existing flows and improve platform stability.
- Contribute to SDLC documentation and guidance, including templates, patterns, and controls.
- Actively participate as a member of a global team on larger development projects, assuming responsibility for components of global projects depending on need.
- Collaborate with engineering leadership, developers, and operations through written and verbal presentations.

Minimum Education Requirements / Degree and Field
Bachelor's degree in Computer Science, Information Technology, or a related field.

Minimum Years of Experience Required
Six (6) years of experience in the job offered or in a related data engineering, software engineering or full-stack software engineering position.

Special Skills and/or Licenses Required to Perform the Job
Prior employment must include six (6) years of experience with:
- Software engineering principles and practices
- Working knowledge of at least two high-level programming languages such as Java or Python
- Working knowledge of algorithms, data structures and enterprise applications
- Formulating clear and concise written and verbal descriptions of software and systems for engineering stakeholders, and tracking and managing delivery of the same
- Strong communication skills and the ability to work in a team
- Strong analytical and problem-solving skills
- The ability to solve high-performance engineering problems in a language-agnostic manner

Preferred Qualifications
- Experience with Kubernetes deployment architectures
- Experience building trading controls within an investment bank
- Experience in distributed systems (Kafka, Flink)
- Experience with UI technologies like React and JavaScript
- Experience in microservices architecture
- Experience with NoSQL (Mongo, Elastic, Hadoop), in-memory (MemSQL, Ignite), cloud (Snowflake) and relational (DB2, SybaseIQ) data store solutions
- Experience in data-driven performance analysis and optimization

About Goldman Sachs
At Goldman Sachs, we commit our people, capital and ideas to help our clients, shareholders and the communities we serve to grow. Founded in 1869, we are a leading global investment banking, securities and investment management firm. Headquartered in New York, we maintain offices around the world. We believe who you are makes you better at what you do. We're committed to fostering and advancing diversity and inclusion in our own workplace and beyond by ensuring every individual within our firm has a number of opportunities to grow professionally and personally, from our training and development opportunities and firmwide networks to benefits, wellness and personal finance offerings and mindfulness programs. Learn more about our culture, benefits, and people at GS.com/careers. We're committed to finding reasonable accommodations for candidates with special needs or disabilities during our recruiting process. Learn more: https://www.goldmansachs.com/careers/footer/disability-statement.html

© The Goldman Sachs Group, Inc., 2024. All rights reserved. Goldman Sachs is an equal employment/affirmative action employer Female/Minority/Disability/Veteran/Sexual Orientation/Gender Identity.
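The listing also covers the web layer for registering and administering controls. As a hedged illustration of that shape (not Goldman Sachs' actual API), here is a minimal FastAPI service; the endpoints, fields, and in-memory registry are hypothetical stand-ins for a real persistence layer.

```python
# Toy control-registration service in FastAPI.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI(title="controls-admin")

class Control(BaseModel):
    name: str
    asset_class: str
    max_notional: float
    enabled: bool = True

_registry: dict[str, Control] = {}  # in-memory stand-in for a real data store

@app.post("/controls")
def register_control(control: Control) -> dict:
    if control.name in _registry:
        raise HTTPException(status_code=409, detail="control already exists")
    _registry[control.name] = control
    return {"registered": control.name}

@app.get("/controls/{name}")
def get_control(name: str) -> Control:
    if name not in _registry:
        raise HTTPException(status_code=404, detail="unknown control")
    return _registry[name]
```

Run with `uvicorn app:app`; a React front end like the one the posting mentions would call these endpoints.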

Posted 1 week ago


6.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


Job Description

Oracle Customer Success Services
Building on the mindset of "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions, combining more than two decades of expertise in developing mission-critical solutions for enterprise customers with cutting-edge technology to give our customers the speed, flexibility, resiliency, and security to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality.

Why?
Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS), and Oracle Enterprise Applications. This position is with the CSS Architecture team, and we are searching for the finest and brightest technologists as we embark on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence.

What?
As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you'll lead efforts in designing and building scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI and Oracle Cloud Applications for the business. You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large-scale distributed systems, and software development methodologies. We strongly believe in ownership and challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products, defining system integrations, and addressing cross-cutting concerns. Being part of the architecture function also gives you the unique ability to introduce new processes and design patterns that will be future-proof while building new services or products. As a thought leader, you will own and lead the complete SDLC, from architecture design through development, test, and operational readiness to platform SRE.

Responsibilities
As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we build, in line with the architecture patterns and standards. As a core member of the Architecture Chapter, you will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles. You will also be expected to act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.

Core Responsibilities
- Lead the development of machine learning models and their integration with the full-stack software ecosystem and data engineering, and contribute to the design strategy.
- Collaborate with product managers and development teams to identify software requirements and define project scopes.
- Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
- Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve.
- Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
- Provide guidance and mentorship to junior developers.
- Stay up to date with industry trends and developments in software architecture and development practices.

Required Qualifications
- Bachelor's or Master's degree in Computer Science, Machine Learning/AI, or a closely related field.
- 6+ years of experience in software development, machine learning, data science, and data engineering design.
- Proven ability to build and manage enterprise-distributed and/or cloud-native systems.
- Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as recommendation engines, NLP and transformers, computer vision, and generative AI.
- Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask, or equivalent.
- Deep experience with ML frameworks such as PyTorch, TensorFlow, and scikit-learn.
- Hands-on experience building ML models from scratch, transfer learning, and Retrieval-Augmented Generation (RAG) using various techniques (native, hybrid, C-RAG, Graph RAG, agentic RAG, and multi-agent RAG); a minimal RAG sketch follows this listing.
- Experience building agentic systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack, or equivalent.
- Experience in data engineering using data lakehouse stacks, including ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
- Experience with data warehouses and lakes such as Apache Iceberg, Hudi, and Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
- Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.
- Hands-on experience working with various data types and storage formats, including NoSQL, SQL, graph databases, and data serialization formats like Parquet and Arrow.
- Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar.
- Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
- Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
- Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
- Experience designing API-first systems with application stacks like FARM and MERN and technologies such as gRPC and REST.
- Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
- Experience in DevOps practices, including Kubernetes, CI/CD, and Blue-Green and Canary deployments.
- Experience with microservice architecture patterns, including API gateways, event-driven and reactive architecture, CQRS, and SAGA.
- Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
- Proven ability to design software systems using various design patterns (creational, structural, and behavioral).
- Strong interpersonal skills and the ability to effectively communicate with business stakeholders.
- Demonstrated ability to drive technology adoption in AIML solutions and the CNCF software stack.
- Excellent analytical, problem-solving, communication, and leadership skills.

Qualifications
Career Level - IC4

About Us
As a world leader in cloud solutions, Oracle uses tomorrow's technology to tackle today's challenges. We've partnered with industry leaders in almost every sector, and continue to thrive after 40+ years of change by operating with integrity. We know that true innovation starts when everyone is empowered to contribute. That's why we're committed to growing an inclusive workforce that promotes opportunities for all. Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs. We're committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States. Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans' status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
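As a hedged illustration of the native-RAG pattern the qualifications name: embed a corpus, retrieve the nearest documents for a query, and assemble a grounded prompt. The corpus and question are hypothetical, and the final LLM call is left abstract since providers differ.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
import numpy as np
from sentence_transformers import SentenceTransformer  # pip install sentence-transformers

docs = [
    "Invoices are approved by the finance controller within 3 days.",
    "Expense reports above $5,000 require VP sign-off.",
    "Purchase orders are matched to invoices before payment.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    q = model.encode([query], normalize_embeddings=True)[0]
    scores = doc_vecs @ q                   # cosine similarity (vectors are normalized)
    return [docs[i] for i in np.argsort(-scores)[:k]]

question = "Who signs off on large expense reports?"
context = "\n".join(retrieve(question))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
print(prompt)  # this prompt would then be sent to an LLM of choice
```

The hybrid, graph, and agentic RAG variants the posting lists layer more sophisticated retrieval and orchestration on top of this same retrieve-then-generate core.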

Posted 1 week ago


5.0 - 8.0 years

7 - 10 Lacs

Mumbai, Nagpur, Thane

Work from Office


The first HM needs strong Java candidates (5-8 years' experience) who can pick up Scala on the job. Profiles are needed ASAP.

Requisition ID: YTR | P tracker ID: YTR | HM: Feng Chen | Location: Bangalore | Skill: Strong Java / Java + Scala | Level: 3

The second HM needs SQL/Python resources as described below. This requirement has been filled.

Requisition ID: YTR | P tracker ID: YTR | HM: Akshay Deodhar | Location: Bangalore | Skill: RDBMS + Python | Level: 2

The HM has updated the job description. Note that you can submit two different skill-set profiles: finding all of these skills in one profile would be extraordinary, so if not, look for a Java/Scala (or strong Java) developer profile and a separate SQL/Python developer profile.

Sharing the JD:
- Exposure to an RDBMS platform (writing SQL, stored procedures, data-warehousing concepts, etc.)
- Hands-on Python
- Big data exposure (Spark and Hadoop concepts) is good to have
- Azure cloud exposure (Databricks or Snowflake) is good to have
- Overall job experience of 3-6 years is fine

Posted 1 week ago


2.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


What you’ll do
- Perform general application development activities, including unit testing, code deployment to the development environment, and technical documentation.
- Work on one or more projects, making contributions to unfamiliar code written by team members.
- Participate in the estimation process, use-case specifications, reviews of test plans and test cases, requirements, and project planning.
- Diagnose and resolve performance issues.
- Document code and processes so that any other developer can dive in with minimal effort.
- Develop and operate high-scale applications from the backend to the UI layer, focusing on operational excellence, security, and scalability.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit engineering team employing agile software development practices.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Write, debug, and troubleshoot code in mainstream open-source technologies.
- Lead efforts on sprint deliverables and solve problems of medium complexity.

What experience you need
- Bachelor's degree or equivalent experience
- 2+ years of relevant experience working with software design and the Java, Python, and JavaScript programming languages
- 2+ years of experience with software build-management tools like Maven or Gradle
- 2+ years of experience with HTML, CSS, and frontend/web development
- 2+ years of experience with software testing, performance, and quality engineering techniques and strategies
- 2+ years of experience with cloud technology: GCP, AWS, or Azure

What could set you apart
- Knowledge of or experience with Apache Beam for stream and batch data processing (a minimal pipeline sketch follows this listing).
- Familiarity with big data tools and technologies like Apache Kafka, Hadoop, or Spark.
- Experience with containerization and orchestration tools (e.g., Docker, Kubernetes).
- Exposure to data visualization tools or platforms.
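Since Apache Beam is called out above, here is a hedged minimal batch pipeline that counts events per user from newline-delimited JSON. The input and output paths are hypothetical; the same code can run on Dataflow by switching the pipeline runner options.

```python
# Minimal Apache Beam batch pipeline: events per user.
import json
import apache_beam as beam  # pip install apache-beam

with beam.Pipeline() as pipeline:
    (pipeline
     | "Read" >> beam.io.ReadFromText("events.jsonl")           # hypothetical input
     | "Parse" >> beam.Map(json.loads)
     | "KeyByUser" >> beam.Map(lambda e: (e["user_id"], 1))
     | "CountPerUser" >> beam.CombinePerKey(sum)
     | "Format" >> beam.MapTuple(lambda user, n: f"{user},{n}")
     | "Write" >> beam.io.WriteToText("user_counts"))           # hypothetical output
```

The same transforms apply unchanged to a streaming source (e.g., Pub/Sub) once windowing is added, which is Beam's unified batch/stream appeal.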

Posted 1 week ago


5.0 years

0 Lacs

Trivandrum, Kerala, India

On-site


What You’ll Do
- Design, develop, and operate high-scale applications across the full engineering stack.
- Design, develop, test, deploy, maintain, and improve software.
- Apply modern software development practices (serverless computing, microservices architecture, CI/CD, infrastructure-as-code, etc.).
- Work across teams to integrate our systems with existing internal systems, Data Fabric, and the CSA Toolset.
- Participate in technology roadmap and architecture discussions to turn business requirements and vision into reality.
- Participate in a tight-knit, globally distributed engineering team.
- Triage product or system issues and debug/track/resolve them by analyzing the sources of issues and their impact on network or service operations and quality.
- Research, create, and develop software applications to extend and improve Equifax solutions.
- Manage your own project priorities, deadlines, and deliverables.
- Collaborate on scalability issues involving access to data and information.
- Actively participate in sprint planning, sprint retrospectives, and other team activities.

What Experience You Need
- Bachelor's degree or equivalent experience
- 5+ years of software engineering experience
- 5+ years of experience writing, debugging, and troubleshooting code in mainstream Java, Spring Boot, TypeScript/JavaScript, HTML, and CSS
- 5+ years of experience with cloud technology: GCP, AWS, or Azure
- 5+ years of experience designing and developing cloud-native solutions
- 5+ years of experience designing and developing microservices using Java, the Spring Framework, GCP SDKs, and GKE/Kubernetes
- 5+ years of experience deploying and releasing software using Jenkins CI/CD pipelines, with an understanding of infrastructure-as-code concepts, Helm charts, and Terraform constructs
- Big data technologies: Spark/Scala/Hadoop

What could set you apart
- Experience designing and developing big data processing solutions using Dataproc, Dataflow/Apache Beam, Bigtable, BigQuery, Pub/Sub, GCS, Composer/Airflow, and others (a BigQuery sketch follows this listing).
- Cloud certification, especially in GCP.
- Self-starter who identifies and responds to priority shifts with minimal supervision.
- Excellent leadership and motivational skills.
- An inquisitive and innovative mindset with a proven ability to recognize opportunities to create distinctive value.
- Ability to evaluate workload to drive efficiency.
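As a hedged illustration of one GCP data-access pattern named above, the sketch below runs a parameterized BigQuery query from Python using the official google-cloud-bigquery client. The project, dataset, and table names are hypothetical.

```python
# Parameterized BigQuery query from Python.
from google.cloud import bigquery  # pip install google-cloud-bigquery

client = bigquery.Client(project="example-project")  # hypothetical project

query = """
    SELECT customer_id, COUNT(*) AS disputes
    FROM `example-project.credit.disputes`
    WHERE opened_at >= @since
    GROUP BY customer_id
    ORDER BY disputes DESC
    LIMIT 10
"""
job_config = bigquery.QueryJobConfig(
    query_parameters=[
        bigquery.ScalarQueryParameter("since", "TIMESTAMP", "2024-01-01 00:00:00")
    ]
)

for row in client.query(query, job_config=job_config).result():
    print(row.customer_id, row.disputes)
```

Query parameters keep user input out of the SQL string itself, the same injection-safety habit as prepared statements in a relational database.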

Posted 1 week ago


6.0 years

0 Lacs

Noida, Uttar Pradesh, India

On-site


Job Description Oracle Customer Success Services Building on the mindset that "Who knows Oracle …. better than Oracle?" Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions by utilizing the strengths of more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with cutting-edge technology to provide our customers' speed, flexibility, resiliency, and security to enable customers to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality. Why? Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS) & Oracle Enterprise Applications. This position is for CSS Architecture Team, and we are searching for the finest and brightest technologists as we begin on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence. What? As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you’ll lead efforts in designing and building scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI & Oracle Cloud Applications for the business. You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large scale distributed systems, and software development methodologies. We strongly believe in ownership and challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products defining system integrations, and cross-cutting concerns. Being part of the architecture function also provides you with the unique ability to enforce new processes and design patterns that will be future-proof while building new services or products. As a thought leader, you will own and lead the complete SDLC from Architecture Design, Development, Test, Operational Readiness, and Platform SRE. Responsibilities As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we create in line with the architecture patterns and standards. As a core member of the Architecture Chapter, you will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles. 
You will also be expected to act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.

Core Responsibilities

Lead the development of machine learning models, their integration with the full-stack software ecosystem, and data engineering, and contribute to the design strategy.
Collaborate with product managers and development teams to identify software requirements and define project scopes.
Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve.
Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
Provide guidance and mentorship to junior developers.
Stay up to date with industry trends and developments in software architecture and development practices.

Required Qualifications

Bachelor's or Master's Degree in Computer Science, Machine Learning/AI, or a closely related field.
6+ years of experience in software development, machine learning, data science, and data engineering design.
Proven ability to build and manage enterprise-distributed and/or cloud-native systems.
Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as Recommendation Engines, NLP & Transformers, Computer Vision, and Generative AI.
Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask, or equivalent (a minimal serving sketch follows this posting).
Deep experience with ML frameworks such as PyTorch, TensorFlow, and Scikit-learn.
Hands-on experience building ML models from scratch, transfer learning, and Retrieval Augmented Generation (RAG) using various techniques (Native, Hybrid, C-RAG, Graph RAG, Agentic RAG, and Multi-Agent RAG).
Experience building Agentic Systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack, or equivalent.
Experience in Data Engineering using data lakehouse stacks, including ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
Experience with Data Warehouses and Lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.
Hands-on experience working with various data types and storage formats, including NoSQL, SQL, and Graph databases, and data serialization formats like Parquet and Arrow.
Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar.
Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST.
Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
Experience in DevOps practices, including Kubernetes, CI/CD, and Blue-Green and Canary deployments.
Experience with Microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA.
Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral).
Strong interpersonal skills and the ability to communicate effectively with business stakeholders.
Demonstrated ability to drive technology adoption in AIML solutions and the CNCF software stack.
Excellent analytical, problem-solving, communication, and leadership skills.

Qualifications

Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.

We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
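For candidates new to this stack, here is a minimal, hedged sketch of the kind of model-serving microservice such a role involves. It is illustrative only: the endpoint, payload shape, and stand-in scoring function are hypothetical, not Oracle's implementation.

```python
# Minimal FastAPI model-serving sketch. The /predict route, payload shape,
# and the stand-in scoring function are hypothetical placeholders.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class Features(BaseModel):
    values: list[float]

def score(values: list[float]) -> float:
    # Placeholder: a real service would load a trained model from a registry.
    return sum(values) / max(len(values), 1)

@app.post("/predict")
def predict(features: Features) -> dict:
    return {"score": score(features.values)}
```

Served with any ASGI server (e.g., `uvicorn module:app`), this pattern extends naturally to containerized deployment on Kubernetes.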

Posted 1 week ago

Apply

5.0 - 10.0 years

0 Lacs

Pune, Bengaluru

Hybrid

Naukri logo

Job Summary

We are seeking a highly skilled Hadoop Developer / Lead Data Engineer to join our data engineering team based in Bangalore or Pune. The ideal candidate will have extensive experience with Hadoop ecosystem technologies and cloud-based big data platforms, particularly on Google Cloud Platform (GCP). This role involves designing, developing, and maintaining scalable data ingestion, processing, and transformation frameworks to support enterprise data needs.

Minimum Qualifications

Bachelor's degree in Computer Science, Computer Information Systems, or a related technical field.
5-10 years of experience in software engineering or data engineering, with a strong focus on big data technologies.
Proven experience in implementing software development life cycles (SDLC) in enterprise environments.

Technical Skills & Expertise

Big Data Technologies: Expertise in the Hadoop platform, Hive, and related ecosystem tools. Strong experience with Apache Spark (using SQL, Scala, and/or Java). Experience with real-time data streaming using Kafka.
Programming Languages & Frameworks: Proficient in PySpark and SQL for data processing and transformation. Strong coding skills in Python.
Cloud Technologies (Google Cloud Platform): Experience with BigQuery for data warehousing and analytics. Familiarity with Cloud Composer (Airflow) for workflow orchestration. Hands-on experience with Dataproc for managed Spark and Hadoop clusters.

Responsibilities

Design, develop, and implement scalable data ingestion and transformation pipelines using Hadoop and GCP services (a minimal sketch follows this posting).
Build real-time and batch data processing solutions leveraging Spark, Kafka, and related technologies.
Ensure data quality, governance, and lineage by implementing automated validation and classification frameworks.
Collaborate with cross-functional teams to deploy and operationalize data analytics tools at enterprise scale.
Participate in production support and on-call rotations to maintain system reliability.
Follow established SDLC practices to deliver high-quality, maintainable solutions.

Preferred Qualifications

Experience leading or mentoring data engineering teams.
Familiarity with CI/CD pipelines and DevOps best practices for big data environments.
Strong communication skills with an ability to collaborate across teams.
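As an illustration of the pipeline work described above, here is a minimal, hedged PySpark batch job for Dataproc that aggregates raw events and loads them into BigQuery. All bucket, dataset, and table names are placeholders, and it assumes the spark-bigquery connector is available on the cluster.

```python
# Minimal PySpark batch job for Dataproc: read raw events from GCS,
# aggregate, and load into BigQuery via the spark-bigquery connector.
# Bucket, dataset, and table names below are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("events-daily-agg").getOrCreate()

raw = spark.read.json("gs://example-bucket/raw/events/dt=2024-01-01/")

daily = (
    raw.filter(F.col("event_type").isNotNull())
       .groupBy("user_id", "event_type")
       .agg(F.count("*").alias("event_count"))
)

(daily.write.format("bigquery")
      .option("table", "example_dataset.daily_event_counts")
      .option("temporaryGcsBucket", "example-tmp-bucket")
      .mode("overwrite")
      .save())
```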

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

About the Role:

As a Lead Data Scientist, you will independently execute specialized data tasks, focusing on model development, data interpretation, and driving forward the data science agenda. You will leverage advanced algorithms and statistical techniques to unearth insights and guide business decisions. This position is suited for self-driven professionals who excel at transforming raw data into strategic assets and are ready to contribute significantly to data science projects.

Responsibilities:

Lead the development and deployment of advanced machine learning models (a brief workflow sketch follows this posting).
Perform in-depth data analysis to identify actionable insights.
Develop and maintain complex data processing pipelines.
Collaborate with stakeholders to align data science initiatives with business goals.
Drive feature engineering and selection processes.
Design and implement scalable data solutions for analytics.
Conduct exploratory data analysis to explore new opportunities.
Ensure the robustness and reliability of data science projects.
Provide guidance on best practices for data science workflows.
Stay ahead of trends and continuously improve technical skills and knowledge.

Skills:

Advanced Statistical Methods: Proficient in applying complex statistical techniques.
Machine Learning Expertise: In-depth knowledge of machine learning algorithms and their practical applications.
Python/R/SAS: Advanced skills in Python, with knowledge of R or SAS for data analysis.
Big Data Technologies: Familiarity with big data tools like Spark and Hadoop.
Data Engineering: Proficiency in building and managing data pipelines.
Predictive Modeling: Expertise in developing and fine-tuning predictive models.
Communication: Excellent ability to translate data insights into business strategies.
Project Management: Strong project management skills to oversee data initiatives.

Applicants may be required to appear onsite at a Wolters Kluwer office as part of the recruitment process.
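For context on the model-development responsibilities above, a brief, hedged sketch of a typical scikit-learn workflow, with synthetic data standing in for real business features:

```python
# Minimal scikit-learn workflow sketch: preprocessing + model in one Pipeline,
# evaluated with a train/test split. Synthetic data stands in for real features.
import numpy as np
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = Pipeline([
    ("scale", StandardScaler()),
    ("clf", GradientBoostingClassifier(random_state=0)),
])
model.fit(X_train, y_train)

print("test AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```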

Posted 1 week ago

Apply

5.0 - 10.0 years

10 - 20 Lacs

Pune, Chennai, Bengaluru

Hybrid

Naukri logo

Our client is a global IT service and consulting organization.

Experience: 5+ years
Skill: Apache Spark
Location: Bangalore, Hyderabad, Pune, Chennai, Coimbatore, Gr. Noida

Excellent knowledge of Spark; the professional must have a thorough understanding of the Spark framework, performance tuning, etc.
Excellent knowledge and at least 4+ years of hands-on experience in Scala or PySpark
Excellent knowledge of the Hadoop ecosystem; knowledge of Hive is mandatory
Strong Unix and shell scripting skills
Excellent interpersonal skills and, for experienced candidates, excellent leadership skills
Good knowledge of any of the CSPs (Azure, AWS, or GCP) is mandatory; certifications on Azure will be an additional plus

Posted 1 week ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Linkedin logo

Job Title: Systems Engineer

Location: Pune, India

About This Role

Comscore's Product Operations team is hiring a Systems Engineer to monitor and maintain the health of the processing environment, provide support by troubleshooting applications, and provide process certification on both Linux and Windows environments. This role involves monitoring applications that run in various environments through internal dashboards, identifying issues from those dashboards, performing first-level triage, and escalating as necessary until the issue is resolved. Problem solving will consume a large portion of time in this role, as it requires assisting multiple teams with resolving issues daily. Technical and analytical skills are required. Responsibilities will grow over time as experience is gained, from supporting production processing to improving the efficiency of the processing environment and the applications running in it.

What You’ll Do

Tier 1 monitoring and escalation support for internal systems and environments
Working with big data processing environments (Hadoop, AWS, Greenplum, Windows) to monitor system health with respect to job-processing volume
Executing custom requests and ad-hoc jobs in a timely manner and validating the results
Communicating and collaborating with teams around the globe in multiple countries
Automating manual steps and alerts, and optimizing existing processes for execution efficiency (a small triage sketch follows this posting)

What You’ll Need

Bachelor's degree in computer science or a related field
Understanding of Windows and Linux environments
Ability to write and understand basic SQL queries
Knowledge of a scripting language (preferably Python)
Windows and Linux command-line experience; basic knowledge of XML
Knowledge of JIRA and Git
Ability to follow complex and detailed instructions
Sense of action and assertiveness to resolve issues
Strong data analysis and problem-solving skills
Excellent written and verbal communication skills

Shift Timing

24/7 rotational shifts

Benefits

Medical insurance coverage is provided to our employees and their dependants, 100% covered by Comscore.
Provident Fund is borne by Comscore and is provided over and above the gross salary to employees.
26 annual leave days per annum, divided into 8 casual leave days and 18 privilege leave days.
Comscore also provides a paid “Recharge Week” over the Christmas and New Year period, so that you can start the new year fresh.
In addition, you will be entitled to: 10 public holidays; 10 sick leave days; 5 paternity leave days; 1 birthday leave day.
Flexible work arrangements.
“Summer Hours” are offered from March to May: Comscore offers employees the flexibility to work more hours from Monday to Thursday, and the hours can be offset on Friday from 2:00pm onwards.
Employees are eligible to participate in Comscore’s Sodexo meal scheme and enjoy tax benefits.

About Comscore

At Comscore, we’re pioneering the future of cross-platform media measurement, arming organizations with the insights they need to make decisions with confidence. Central to this aim are our people, who work together to simplify the complex on behalf of our clients and partners. Though our roles and skills are varied, we’re united by our commitment to five underlying values: Integrity, Velocity, Accountability, Teamwork, and Servant Leadership.
If you’re motivated by big challenges and interested in helping some of the largest and most important media properties and brands navigate the future of media, we’d love to hear from you. Comscore (NASDAQ: SCOR) is a trusted partner for planning, transacting and evaluating media across platforms. With a data footprint that combines digital, linear TV, over-the-top and theatrical viewership intelligence with advanced audience insights, Comscore allows media buyers and sellers to quantify their multiscreen behavior and make business decisions with confidence. A proven leader in measuring digital and set-top box audiences and advertising at scale, Comscore is the industry’s emerging, third-party source for reliable and comprehensive cross-platform measurement. To learn more about Comscore, please visit Comscore.com.
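As a flavor of the automation this role involves, here is a toy triage script; the jobs table and the one-hour threshold are invented for illustration, with in-memory SQLite standing in for a real processing database:

```python
# Toy monitoring sketch: flag jobs that have been running too long.
# Uses an in-memory SQLite table as a stand-in for a real processing database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE jobs (name TEXT, status TEXT, minutes_running INT)")
conn.executemany(
    "INSERT INTO jobs VALUES (?, ?, ?)",
    [("daily_ingest", "RUNNING", 95), ("hourly_rollup", "RUNNING", 12)],
)

# First-level triage rule: anything RUNNING for over an hour gets escalated.
stuck = conn.execute(
    "SELECT name, minutes_running FROM jobs "
    "WHERE status = 'RUNNING' AND minutes_running > 60"
).fetchall()

for name, minutes in stuck:
    print(f"ALERT: job {name} has been running for {minutes} minutes")
```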

Posted 1 week ago

Apply

2.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Description

Amazon Music is awash in data! To help make sense of it all, the DISCO (Data, Insights, Science & Optimization) team:

(i) enables the Consumer Product Tech org to make data-driven decisions that improve customer retention, engagement, and experience on Amazon Music. We build and maintain automated self-service data solutions and data science models, and deep-dive difficult questions to produce actionable insights. We also enable measurement, personalization, and experimentation by operating key data programs ranging from attribution pipelines and northstar weblab metrics to causal frameworks.

(ii) delivers exceptional Analytics & Science infrastructure for DISCO teams, fostering a data-driven approach to insights and decision making. As platform builders, we are committed to constructing flexible, reliable, and scalable solutions to empower our customers.

(iii) accelerates and facilitates content analytics and provides independence to generate valuable insights in a fast, agile, and accurate way. This domain provides analytical support for the following topics within Amazon Music: Programming / Label Relations / PR / Stations / Livesports / Originals / Case & CAM.

The DISCO team enables repeatable, easy, in-depth analysis of music customer behaviors. We reduce the cost in time and effort of analysis, data set building, model building, and user segmentation. Our goal is to empower all teams at Amazon Music to make data-driven decisions and effectively measure their results by providing high-quality, highly available data and democratized data access through self-service tools.

If you love the challenges that come with big data, then this role is for you. We collect billions of events a day, manage petabyte-scale data on Redshift and S3, and develop data pipelines using Spark/Scala on EMR, SQL-based ETL, Airflow, and Java services (a minimal orchestration sketch follows this posting).

We are looking for a talented, enthusiastic, and detail-oriented Data Engineer who knows how to take on big data challenges in an agile way. Duties include big data design and analysis, data modeling, and the development, deployment, and operation of big data pipelines. You'll help build Amazon Music's most important data pipelines and data sets, and expand self-service data knowledge and capabilities through an Amazon Music data university.

The DISCO team develops data specifically for a set of key business domains like personalization and marketing, and provides and protects a robust self-service core data experience for all internal customers. We deal in AWS technologies like Redshift, S3, EMR, EC2, DynamoDB, Kinesis Firehose, and Lambda. Your team will manage the data exchange store (Data Lake) and the EMR/Spark processing layer, using Airflow as the orchestrator. You'll build our data university and partner with Product, Marketing, BI, and ML teams to build new behavioural events, pipelines, datasets, models, and reporting to support their initiatives. You'll also continue to develop big data pipelines.

Key job responsibilities

You bring a deep understanding of data and analytical techniques, know how to connect insights to the business, and have practical experience insisting on the highest operational standards in ETL and big data pipelines. With our Amazon Music Unlimited and Prime Music services, and our top music provider spot on the Alexa platform, providing high-quality, highly available data to our internal customers is critical to our customer experiences.

Assist the DISCO team with management of our existing environment, which consists of Redshift and SQL-based pipelines.
The activities around these systems are well defined via standard operating procedures (SOPs) and typically involve:

Approving data access requests and subscribing or adding new data to the environment
SQL data pipeline management (creating or updating existing pipelines)
Performing maintenance tasks on the Redshift cluster
Assisting the team with the management of our next-generation AWS infrastructure; tasks include infrastructure monitoring via CloudWatch alarms, infrastructure maintenance through code changes or enhancements, and troubleshooting/root-cause analysis of infrastructure issues that arise; in some cases this resource may also be asked to submit code changes based on infrastructure issues that arise

About The Team

Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, we are innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all music in shuffle mode, plus top ad-free podcasts, included with their membership; customers can upgrade to Music Unlimited for unlimited on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale.

Basic Qualifications

2+ years of data engineering experience
Experience with data modeling, warehousing, and building ETL pipelines
Experience with SQL
Experience with one or more scripting languages (e.g., Python, KornShell)
Experience in Unix
Experience troubleshooting data and infrastructure issues

Preferred Qualifications

Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
Experience with an ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.
Knowledge of distributed systems as it pertains to data storage and computing
Experience in building or administering reporting/analytics platforms

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ

Job ID: A2838395
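For readers unfamiliar with Airflow, a minimal sketch of a daily orchestrated pipeline like those described above; the DAG id, task bodies, and schedule are placeholders (Airflow 2.x API):

```python
# Minimal Airflow DAG sketch: orchestrate a daily SQL-based ETL step
# followed by a validation step. Connection details and SQL are placeholders.
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

def run_etl(**context):
    # Placeholder: submit a Spark/EMR job or run SQL against Redshift here.
    print("running daily ETL for", context["ds"])

def validate(**context):
    # Placeholder: row-count / freshness checks on the loaded partition.
    print("validating load for", context["ds"])

with DAG(
    dag_id="music_events_daily",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",
    catchup=False,
) as dag:
    etl = PythonOperator(task_id="run_etl", python_callable=run_etl)
    check = PythonOperator(task_id="validate", python_callable=validate)
    etl >> check
```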

Posted 1 week ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site

Linkedin logo

Overview

The Senior Data Science Engineer will leverage advanced data science techniques to solve complex business problems, guide decision-making processes, and mentor junior team members. This role requires a combination of technical expertise in data analysis and machine learning with project management skills.

Responsibilities

Data Analysis and Modeling: Analyze large-scale telecom datasets to extract actionable insights and build predictive models for network optimization and customer retention. Conduct statistical analyses to validate models and ensure their effectiveness.
Machine Learning Development: Design and implement machine learning algorithms for fraud detection, churn prediction, and network failure analysis (a small anomaly-detection sketch follows this posting).
Telecom-Specific Analytics: Apply domain knowledge to improve customer experience by analyzing usage patterns, optimizing services, and predicting customer lifetime value.
ETL Processes: Develop robust pipelines for extracting, transforming, and loading telecom data from diverse sources.
Collaboration: Work closely with data scientists, software engineers, and telecom experts to deploy solutions that enhance operational efficiency.
Data Governance: Ensure data integrity, privacy, security, and compliance with industry standards.

Requirements

Advanced degree in Data Science, Statistics, Computer Science, or a related field.
Extensive experience in data science roles with a strong focus on machine learning and statistical modeling.
Proficiency in programming languages such as Python or R and strong SQL skills.
Familiarity with big data technologies (e.g., Hadoop, Spark) is advantageous.
Expertise in cloud platforms such as AWS or Azure.
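A small, hedged illustration of the fraud/anomaly-detection work mentioned above, using scikit-learn's IsolationForest on synthetic usage features:

```python
# Toy anomaly-detection sketch for fraud-style signals using IsolationForest.
# Synthetic usage features stand in for real telecom records.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
normal = rng.normal(loc=0.0, scale=1.0, size=(990, 3))   # typical usage
outliers = rng.normal(loc=6.0, scale=1.0, size=(10, 3))  # anomalous usage
X = np.vstack([normal, outliers])

detector = IsolationForest(contamination=0.01, random_state=0).fit(X)
flags = detector.predict(X)  # -1 = anomaly, 1 = normal

print("flagged records:", int((flags == -1).sum()))
```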

Posted 1 week ago

Apply

10.0 years

0 Lacs

Kolkata, West Bengal, India

On-site

Linkedin logo

At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect

As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity

We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as part of a growing Data and Analytics team.

Your Key Responsibilities

Drive Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organizations in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data. [10-15 years]
Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc.
Understand current and future state enterprise architecture.
Contribute to various technical streams during implementation of the project.
Provide product- and design-level technical best practices.
Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions.
Define and develop client-specific best practices around data management within a Hadoop or cloud environment.
Recommend design alternatives for data ingestion, processing, and provisioning layers.
Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark.
Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a minimal streaming sketch follows this posting).

Skills And Attributes For Success

Experience as an architect designing highly scalable solutions on Azure, AWS, and GCP.
Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components.
Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms.
Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming.
Hands-on experience with major components like cloud ETLs, Spark, and Databricks.
Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB.
Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions.
Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks.
Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms.
Good knowledge of Apache Kafka and Apache Flume.
Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications
Experience in data security (in transit and at rest)
Strong UNIX operating system concepts and shell scripting knowledge

To qualify for the role, you must have

A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution.
Excellent communication skills (written and verbal, formal and informal).
The ability to multi-task under pressure and work independently with minimal supervision.
A team-player mindset, enjoying work in a cooperative and collaborative team environment.
Adaptability to new technologies and standards.
Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support.
Responsibility for evaluating technical risks and mapping out mitigation strategies.
Working knowledge of at least one cloud platform: AWS, Azure, or GCP.
Excellent business communication, consulting, and quality-process skills.
Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domains.
A minimum of 7 years of hands-on experience in one or more of the above areas.
A minimum of 10 years of industry experience.

Ideally, you’ll also have

Strong project management skills
Client management skills
Solutioning skills

What We Look For

People with technical experience and enthusiasm to learn new things in this fast-moving environment

What Working At EY Offers

At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies — and the work we do with them is as varied as they are. You get to work with inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer:

Support, coaching and feedback from some of the most engaging colleagues around
Opportunities to develop new skills and progress your career
The freedom and flexibility to handle your role in a way that’s right for you

EY | Building a better working world

EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
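To make the real-time ingestion responsibility concrete, a minimal Spark Structured Streaming sketch that consumes a Kafka topic; the broker and topic names are placeholders, and the job assumes the spark-sql-kafka package is on the classpath:

```python
# Minimal Spark Structured Streaming sketch: consume a Kafka topic and
# write parsed events to a console sink. Broker and topic are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

events = (
    spark.readStream.format("kafka")
         .option("kafka.bootstrap.servers", "broker:9092")
         .option("subscribe", "live-events")
         .load()
         .select(F.col("key").cast("string"),
                 F.col("value").cast("string").alias("payload"))
)

query = events.writeStream.format("console").outputMode("append").start()
query.awaitTermination()
```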

Posted 1 week ago

Apply

6.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Job Description

Oracle Customer Success Services

Building on the mindset that "Who knows Oracle better than Oracle?", Oracle Customer Success Services assists customers with their requirements for some of the most cutting-edge applications and solutions, utilizing the strengths of more than two decades of expertise in developing mission-critical solutions for enterprise customers and combining it with cutting-edge technology to give our customers the speed, flexibility, resiliency, and security they need to optimize their investment, minimize risk, and achieve more. The business was established with an entrepreneurial mindset and supports a vibrant, imaginative, and highly varied workplace. We are free of obligations, so we'll need your help to turn it into a premier engineering hub that prioritizes quality.

Why?

Oracle Customer Success Services Engineering is responsible for designing, building, and managing cutting-edge solutions, services, and core platforms to support the managed cloud business, including but not limited to Oracle Cloud Infrastructure (OCI), Oracle Cloud Applications (SaaS), and Oracle Enterprise Applications. This position is with the CSS Architecture Team, and we are searching for the finest and brightest technologists as we embark on the road of cloud-native digital transformation. We operate under a garage culture, rely on cutting-edge technology in our daily work, and provide a highly innovative, creative, and experimental work environment. We prefer to innovate and move quickly, putting a strong emphasis on scalability and robustness. We need your assistance to build a top-tier engineering team that has a significant influence.

What?

As a Principal Data Science & AIML Engineer within the CSS CDO Architecture & Platform team, you’ll lead efforts in designing and building scalable, distributed, resilient services that provide artificial intelligence and machine learning capabilities on OCI and Oracle Cloud Applications for the business. You will be responsible for the design and development of machine learning systems and applications, ensuring they meet the needs of our clients and align with the company's strategic objectives. The ideal candidate will have extensive experience in machine learning algorithms, model creation and evaluation, data engineering and data processing for large-scale distributed systems, and software development methodologies.

We strongly believe in ownership and challenging the status quo. We expect you to bring critical thinking and long-term design impact while building solutions and products, defining system integrations, and addressing cross-cutting concerns. Being part of the architecture function also provides you with the unique ability to enforce new processes and design patterns that will be future-proof while building new services or products. As a thought leader, you will own and lead the complete SDLC, from Architecture Design through Development, Test, Operational Readiness, and Platform SRE.

Responsibilities

As a member of the architecture team, you will be in charge of designing software products, services, and platforms, as well as creating, testing, and managing the systems and applications we create, in line with the architecture patterns and standards. As a core member of the Architecture Chapter, you will be expected to advocate for the adoption of software architecture and design patterns among cross-functional teams both within and outside of engineering roles.
You will also be expected to act as a mentor and advisor to the team(s) within the software and AIML domain. As we push for digital transformation throughout the organization, you will constantly be expected to think creatively and to optimize and harmonize business processes.

Core Responsibilities

Lead the development of machine learning models, their integration with the full-stack software ecosystem, and data engineering, and contribute to the design strategy.
Collaborate with product managers and development teams to identify software requirements and define project scopes.
Develop and maintain technical documentation, including architecture diagrams, design specifications, and system diagrams.
Analyze and recommend new software technologies and platforms to ensure the company stays ahead of the curve.
Work with development teams to ensure software projects are delivered on time, within budget, and to the required quality standards.
Provide guidance and mentorship to junior developers.
Stay up to date with industry trends and developments in software architecture and development practices.

Required Qualifications

Bachelor's or Master's Degree in Computer Science, Machine Learning/AI, or a closely related field.
6+ years of experience in software development, machine learning, data science, and data engineering design.
Proven ability to build and manage enterprise-distributed and/or cloud-native systems.
Broad knowledge of cutting-edge machine learning models and strong domain expertise in both traditional and deep learning, particularly in areas such as Recommendation Engines, NLP & Transformers, Computer Vision, and Generative AI.
Advanced proficiency in Python and frameworks such as FastAPI, Dapr, and Flask, or equivalent.
Deep experience with ML frameworks such as PyTorch, TensorFlow, and Scikit-learn.
Hands-on experience building ML models from scratch, transfer learning, and Retrieval Augmented Generation (RAG) using various techniques (Native, Hybrid, C-RAG, Graph RAG, Agentic RAG, and Multi-Agent RAG); a toy retrieval sketch follows this posting.
Experience building Agentic Systems with SLMs and LLMs using frameworks like LangGraph + LangChain, AutoGen, LlamaIndex, Bedrock, Vertex, Agent Development Kit, Model Context Protocol (MCP), and Haystack, or equivalent.
Experience in Data Engineering using data lakehouse stacks, including ETL/ELT and data processing with Apache Hadoop, Spark, Flink, Beam, and dbt.
Experience with Data Warehouses and Lakes such as Apache Iceberg, Hudi, Delta Lake, and cloud-managed solutions like OCI Data Lakehouse.
Experience in data visualization and analytics with Apache Superset, Apache Zeppelin, Oracle Analytics Cloud, or similar.
Hands-on experience working with various data types and storage formats, including NoSQL, SQL, and Graph databases, and data serialization formats like Parquet and Arrow.
Experience with real-time distributed systems using streaming data with Kafka, NiFi, or Pulsar.
Strong expertise in software design concepts, patterns (e.g., 12-Factor Apps), and tools to create CNCF-compliant software, with hands-on knowledge of containerization technologies like Docker and Kubernetes.
Proven ability to build and deploy software applications on one or more public cloud providers (OCI, AWS, Azure, GCP, or similar).
Demonstrated ability to write full-stack applications using polyglot programming with languages/frameworks like FastAPI, Python, and Golang.
Experience designing API-first systems with application stacks like FARM and MERN, and technologies such as gRPC and REST.
Solid understanding of Design Thinking, Test-Driven Development (TDD), BDD, and the end-to-end SDLC.
Experience in DevOps practices, including Kubernetes, CI/CD, and Blue-Green and Canary deployments.
Experience with Microservice architecture patterns, including API Gateways, Event-Driven & Reactive Architecture, CQRS, and SAGA.
Familiarity with OOP design principles (SOLID, DRY, KISS, Common Closure, and Module Encapsulation).
Proven ability to design software systems using various design patterns (Creational, Structural, and Behavioral).
Strong interpersonal skills and the ability to communicate effectively with business stakeholders.
Demonstrated ability to drive technology adoption in AIML solutions and the CNCF software stack.
Excellent analytical, problem-solving, communication, and leadership skills.

Qualifications

Career Level - IC4

About Us

As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.

We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
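Since this role emphasizes RAG, here is a toy sketch of the retrieval step at the heart of RAG: embed documents and a query, rank by similarity, and hand the top hits to a generator as context. The hashed bag-of-words embedding below is a deliberate stand-in for a real encoder model.

```python
# Toy sketch of the retrieval step behind RAG: embed documents and a query,
# rank by cosine similarity, and pass the top hits to a generator as context.
# The embedding function is a stand-in for a real encoder model.
import numpy as np

def embed(text: str) -> np.ndarray:
    # Placeholder: deterministic hashed bag-of-words in lieu of a real encoder.
    vec = np.zeros(64)
    for token in text.lower().split():
        vec[hash(token) % 64] += 1.0
    return vec / (np.linalg.norm(vec) or 1.0)

docs = ["spark handles batch processing", "kafka streams real-time events",
        "iceberg manages lakehouse tables"]
doc_vecs = np.stack([embed(d) for d in docs])

query = "how do we process streaming data?"
scores = doc_vecs @ embed(query)          # cosine similarity (unit vectors)
top = [docs[i] for i in np.argsort(scores)[::-1][:2]]
print("context for the generator:", top)
```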

Posted 1 week ago

Apply

3.0 years

0 Lacs

Ahmedabad, Gujarat, India

On-site

Linkedin logo

Job Description

Oracle Cloud Infrastructure (OCI) is a strategic growth area for Oracle. It is a comprehensive cloud service offering in the enterprise software industry, spanning Infrastructure as a Service (IaaS), Platform as a Service (PaaS), and Software as a Service (SaaS). OCI is currently building a future-ready Gen2 cloud Data Science service platform. At the core of this platform lies the AI Cloud Service.

What OCI AI Cloud Services are: a set of services on the public cloud that are powered by ML and AI to meet enterprise modernization needs and that work out of the box. These services and models can be easily specialized for specific customers/domains by leveraging existing OCI services.

Key Points: Enables customers to add AI capabilities to their apps and workflows easily via APIs or containers; usable without needing to build AI expertise in-house; and covers key gaps for public clouds and the enterprise in Decision Support, NLU, NLP, Vision, and Conversational AI.

Your Opportunity: As we innovate to provide a single collaborative ML environment for data-science professionals, we will be extremely happy to have you join us and shape the very future of our Machine Learning platform by building an AI Cloud service. We are addressing exciting challenges at the intersection of artificial intelligence and innovative cloud infrastructure. We are building cloud services in Computer Vision for image/video and document analysis, Decision Support (anomaly detection, time-series forecasting, fraud detection, content moderation, risk prevention, predictive analytics), Natural Language Processing (NLP), and Speech that work out of the box for enterprises. Our product vision includes the ability for enterprises to customize the services for their business and train them to specialize in their data by creating micro models that enhance the global AI models.

What You’ll Do

Develop scalable infrastructure, including microservices and a backend, that automates training, deployment, and optimization of ML model inference.
Build a core of Artificial Intelligence and AI services such as Vision, Speech, Language, Decision, and others.
Brainstorm and design various POCs using these AI services for new or existing enterprise problems.
Collaborate with fellow data scientists/SW engineers to build out other parts of the infrastructure, effectively communicating your needs, understanding theirs, and addressing external and internal stakeholder product challenges.
Lead research and development efforts to explore new tools, frameworks, and methodologies to improve backend development processes.
Experiment with ML models in Python/C++ using machine learning libraries (PyTorch, ONNX, TensorRT, Triton, TensorFlow, JAX), etc. (a brief sketch follows this posting).
Leverage cloud technology: Oracle Cloud (OCI), AWS, GCP, Azure, or similar.

Qualifications

Master’s degree or equivalent experience (preferred) in Computer Science, Statistics or Mathematics, artificial intelligence, machine learning, computer vision, operations research, or a related technical field.
3+ years of experience for a PhD, 5+ years for a Master’s, or demonstrated ability designing, implementing, and deploying machine learning models in production environments.
Practical experience in design, implementation, and production deployment of distributed systems using microservices architecture and APIs, using common frameworks like Spring Boot (Java), etc.
Practical experience working in a cloud environment: Oracle Cloud (OCI), AWS, GCP, Azure, and containerization (Docker, Kubernetes).
Working knowledge of current techniques, approaches, and inference optimization strategies in machine learning models.
Experience with performance tuning, scalability, and load-balancing techniques.
Expert in at least one high-level language such as Java/C++ (Java preferred).
Expert in at least one scripting language such as Python, JavaScript, or Shell.
Deep understanding of data structures and algorithms, and excellent problem-solving skills.
Experience with, or willingness to learn and work in, Agile and iterative development and DevOps processes.
Strong drive to learn and master new technologies and techniques. You enjoy a fast-paced work environment.

Additional Preferred Qualifications

Experience with cloud-native framework tools and products is a plus.
Experience in computer vision tasks like image classification, object detection, segmentation, text detection and recognition, information extraction from documents, etc.
An impressive set of GitHub projects or contributions to open-source technologies is a plus.
Hands-on experience with horizontally scalable data stores such as Hadoop and other NoSQL technologies like Cassandra is a plus.

Our vision is to provide an immersive AI experience on Oracle Cloud. Aggressive as it might sound, our growth journey is fueled by highly energetic, technology-savvy engineers like YOU who are looking to grow with us to meet the demands of building a powerful next-generation platform. Are you ready to do something big?

Career Level - IC3

About Us

As a world leader in cloud solutions, Oracle uses tomorrow’s technology to tackle today’s challenges. We’ve partnered with industry-leaders in almost every sector—and continue to thrive after 40+ years of change by operating with integrity.

We know that true innovation starts when everyone is empowered to contribute. That’s why we’re committed to growing an inclusive workforce that promotes opportunities for all.

Oracle careers open the door to global opportunities where work-life balance flourishes. We offer competitive benefits based on parity and consistency and support our people with flexible medical, life insurance, and retirement options. We also encourage employees to give back to their communities through our volunteer programs.

We’re committed to including people with disabilities at all stages of the employment process. If you require accessibility assistance or accommodation for a disability at any point, let us know by emailing accommodation-request_mb@oracle.com or by calling +1 888 404 2494 in the United States.

Oracle is an Equal Employment Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability and protected veterans’ status, or any other characteristic protected by law. Oracle will consider for employment qualified applicants with arrest and conviction records pursuant to applicable law.
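As a taste of the experimentation described above, a minimal PyTorch training loop on synthetic data; the architecture and hyperparameters are arbitrary placeholders:

```python
# Minimal PyTorch sketch: train a tiny classifier for a few steps.
# Synthetic data stands in for a real vision/NLP dataset.
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(128, 16)
y = torch.randint(0, 2, (128,))

for step in range(100):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```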

Posted 1 week ago

Apply

2.0 - 5.5 years

0 Lacs

Bengaluru, Karnataka, India

On-site

Linkedin logo

At PwC, our people in managed services focus on a variety of outsourced solutions and support clients across numerous functions. These individuals help organisations streamline their operations, reduce costs, and improve efficiency by managing key processes and functions on their behalf. They are skilled in project management, technology, and process optimization to deliver high-quality services to clients. Those in managed service management and strategy at PwC focus on transitioning and running services, along with managing delivery teams, programmes, commercials, performance, and delivery risk. Your work will involve continuous improvement and optimization of the managed services process, tools, and services.

Driven by curiosity, you are a reliable, contributing member of a team. In our fast-paced environment, you are expected to adapt to working with a variety of clients and team members, each presenting varying challenges and scope. Every experience is an opportunity to learn and grow. You are expected to take ownership and consistently deliver quality work that drives value for our clients and success as a team. As you navigate through the Firm, you build a brand for yourself, opening doors to more opportunities.

Skills

Examples of the skills, knowledge, and experiences you need to lead and deliver value at this level include but are not limited to:

Apply a learning mindset and take ownership for your own development.
Appreciate diverse perspectives, needs, and feelings of others.
Adopt habits to sustain high performance and develop your potential.
Actively listen, ask questions to check understanding, and clearly express ideas.
Seek, reflect, act on, and give feedback.
Gather information from a range of sources to analyse facts and discern patterns.
Commit to understanding how the business works and building commercial awareness.
Learn and apply professional and technical standards (e.g. refer to specific PwC tax and audit guidance), uphold the Firm's code of conduct and independence requirements.

Role: Associate
Tower: Data, Analytics & Specialist Managed Service
Experience: 2.0 - 5.5 years
Key Skills: AWS
Educational Qualification: BE / B Tech / ME / M Tech / MBA
Work Location: India

Job Description

As an Associate, you will work as part of a team of problem solvers, helping to solve complex business issues from strategy to execution. PwC Professional skills and responsibilities for this management level include but are not limited to:

Use feedback and reflection to develop self-awareness, personal strengths, and address development areas.
Flexibility to work in stretch opportunities/assignments.
Demonstrate critical thinking and the ability to bring order to unstructured problems.
Ticket quality and deliverables review, and status reporting for the project.
Adherence to SLAs; experience in incident management, change management, and problem management.
Seek and embrace opportunities that give exposure to different situations, environments, and perspectives.
Use straightforward communication, in a structured way, when influencing and connecting with others.
Able to read situations and modify behavior to build quality relationships.
Uphold the firm's code of ethics and business conduct.
Demonstrate leadership capabilities by working with clients directly and leading the engagement.
Work in a team environment that includes client interactions, workstream management, and cross-team collaboration.
Good team player; take up cross-competency work and contribute to COE activities.
Escalation/risk management.

Position Requirements

Required Skills: AWS Cloud Engineer

Job description: The candidate is expected to demonstrate extensive knowledge and/or a proven record of success in the following areas:

Minimum 2 years of hands-on experience building advanced data warehousing solutions on leading cloud platforms.
Minimum 1-3 years of Operate/Managed Services/Production Support experience.
Extensive experience in developing scalable, repeatable, and secure data structures and pipelines to ingest, store, collect, standardize, and integrate data for downstream consumption by Business Intelligence systems, analytics modelling, data scientists, etc.
Designing and implementing data pipelines to extract, transform, and load (ETL) data from various sources into data storage systems, such as data warehouses or data lakes.
Experience building efficient ETL/ELT processes using industry-leading tools like AWS, AWS Glue, AWS Lambda, AWS DMS, PySpark, SQL, Python, dbt, Prefect, Snowflake, etc. (a small job-orchestration sketch follows this posting).
Design, implement, and maintain data pipelines for data ingestion, processing, and transformation in AWS.
Work together with data scientists and analysts to understand the needs for data and create effective data workflows.
Implement data validation and cleansing procedures to ensure the quality, integrity, and dependability of the data.
Improve the scalability, efficiency, and cost-effectiveness of data pipelines.
Monitor and troubleshoot data pipelines, resolving issues related to data processing, transformation, or storage.
Implement and maintain data security and privacy measures, including access controls and encryption, to protect sensitive data.
Advanced working SQL knowledge and experience with relational databases and query authoring (SQL), as well as working familiarity with a variety of databases.
Experience building and maintaining Data Governance solutions (data quality, metadata management, lineage, master data management, and data security) using industry-leading tools.
Experience scaling and optimizing schemas and performance-tuning SQL and ETL pipelines in data lake and data warehouse environments.
Hands-on experience with data analytics tools like Informatica, Collibra, Hadoop, Spark, Snowflake, etc.
Experience with ITIL processes like incident management, problem management, knowledge management, release management, and Data DevOps.
Strong communication, problem-solving, quantitative, and analytical abilities.

Nice To Have

AWS certification

Managed Services - Data, Analytics & Insights Managed Service

At PwC we relentlessly focus on working with our clients to bring the power of technology and humans together and create simple, yet powerful solutions. We imagine a day when our clients can simply focus on their business, knowing that they have a trusted partner for their IT needs. Every day we are motivated and passionate about making our clients’ businesses better.

Within our Managed Services platform, PwC delivers integrated services and solutions that are grounded in deep industry experience and powered by the talent that you would expect from the PwC brand. The PwC Managed Services platform delivers scalable solutions that add greater value to our client’s enterprise through technology and human-enabled experiences.
Our team of highly skilled and trained global professionals, combined with the use of the latest advancements in technology and process, allows us to provide effective and efficient outcomes. With PwC’s Managed Services our clients are able to focus on accelerating their priorities, including optimizing operations and accelerating outcomes. PwC brings a consultative-first approach to operations, leveraging our deep industry insights combined with world-class talent and assets to enable transformational journeys that drive sustained client outcomes. Our clients need flexible access to world-class business and technology capabilities that keep pace with today’s dynamic business environment.

Within our global Managed Services platform, we provide Data, Analytics & Insights, where we focus on the evolution of our clients’ Data and Analytics ecosystem. Our focus is to empower our clients to navigate and capture the value of their Data & Analytics portfolio while cost-effectively operating and protecting their solutions. We do this so that our clients can focus on what matters most to their business: accelerating growth that is dynamic, efficient, and cost-effective.

As a member of our Data, Analytics & Insights Managed Service team, we are looking for candidates who thrive working in a high-paced work environment, capable of working on a mix of critical Data, Analytics & Insights offerings and engagements, including help desk support, enhancement and optimization work, as well as strategic roadmap and advisory level work. It will also be key to lend experience and effort in helping win and support customer engagements from not only a technical perspective, but also a relationship perspective.
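A hedged sketch of the kind of AWS-side orchestration this role touches: triggering a Glue ETL job from Python with boto3 and polling its status. The job name is a placeholder, and AWS credentials are assumed to be configured:

```python
# Hedged sketch: trigger an AWS Glue ETL job from Python and poll its status.
# The job name is a placeholder; requires AWS credentials configured for boto3.
import time
import boto3

glue = boto3.client("glue")

run = glue.start_job_run(JobName="example-etl-job")
run_id = run["JobRunId"]

while True:
    state = glue.get_job_run(JobName="example-etl-job", RunId=run_id)
    status = state["JobRun"]["JobRunState"]
    print("job state:", status)
    if status in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)
```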

Posted 1 week ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

Remote

Linkedin logo

Job Description Summary

The Data Scientist will work in teams addressing statistical, machine learning, and data understanding problems in a commercial technology and consultancy development environment. In this role, you will contribute to the development and deployment of modern machine learning, operational research, semantic analysis, and statistical methods for finding structure in large data sets.

Job Description

Site Overview

Established in 2000, the John F. Welch Technology Center (JFWTC) in Bengaluru is GE Aerospace’s multidisciplinary research and engineering center. Pushing the boundaries of innovation every day, engineers and scientists at JFWTC have contributed to hundreds of aviation patents, pioneering breakthroughs in engine technologies, advanced materials, and additive manufacturing.

Role Overview

As a Data Scientist, you will be part of a data science or cross-disciplinary team on commercially-facing development projects, typically involving large, complex data sets. These teams typically include statisticians, computer scientists, software developers, engineers, product managers, and end users, working in concert with partners in GE business units. Potential application areas include remote monitoring and diagnostics across infrastructure and industrial sectors, financial portfolio risk assessment, and operations optimization.

In This Role, You Will

Develop analytics within well-defined projects to address customer needs and opportunities.
Work alongside software developers and software engineers to translate algorithms into commercially viable products and services.
Work in technical teams in the development, deployment, and application of applied analytics, predictive analytics, and prescriptive analytics.
Perform exploratory and targeted data analyses using descriptive statistics and other methods (a small sketch follows this posting).
Work with data engineers on data quality assessment, data cleansing, and data analytics.
Generate reports, annotated code, and other project artifacts to document, archive, and communicate your work and outcomes.
Share and discuss findings with team members.

Required Qualifications

Bachelor's Degree in Computer Science or “STEM” Majors (Science, Technology, Engineering and Math) with basic experience.

Desired Characteristics

Expertise in one or more programming languages and analytic software tools (e.g., Python, R, SAS, SPSS).
Strong understanding of machine learning algorithms, statistical methods, and data processing techniques.
Exceptional ability to analyze large, complex data sets and derive actionable insights.
Proficiency in applying descriptive, predictive, and prescriptive analytics to solve real-world problems.
Demonstrated skill in data cleansing, data quality assessment, and data transformation.
Experience working with big data technologies and tools (e.g., Hadoop, Spark, SQL).
Excellent communication skills, both written and verbal, with the ability to convey complex technical concepts to non-technical stakeholders and collaborate effectively with cross-functional teams.
Demonstrated commitment to continuous learning and staying up-to-date with the latest advancements in data science, machine learning, and related fields. Active participation in the data science community through conferences, publications, or contributions to open-source projects.
Ability to thrive in a fast-paced, dynamic environment and adapt to changing priorities and requirements. Flexibility to work on diverse projects across various domains.
Preferred Qualifications

  • Awareness of feature extraction and real-time analytics methods.
  • Understanding of analytic prototyping, scaling, and solutions integration.
  • Ability to work with large, complex data sets and derive meaningful insights.
  • Familiarity with machine learning techniques and their application in solving real-world problems.
  • Strong problem-solving skills and the ability to work independently and collaboratively in a team environment.
  • Excellent communication skills, with the ability to convey complex technical concepts to non-technical stakeholders.

Domain Knowledge

  • Demonstrated awareness of industry and technology trends in data science.
  • Demonstrated awareness of customer and stakeholder management and business metrics.

Leadership

  • Demonstrated awareness of how to function in a team setting.
  • Demonstrated awareness of critical thinking and problem-solving methods.
  • Demonstrated awareness of presentation skills.

Personal Attributes

  • Demonstrated awareness of how to leverage curiosity and creativity to drive business impact.
  • Humble: respectful, receptive, agile, eager to learn.
  • Transparent: shares critical information, speaks with candor, contributes constructively.
  • Focused: quick learner, strategically prioritizes work, committed.
  • Leadership ability: strong communicator, decision-maker, collaborative.
  • Problem solver: analytical-minded, challenges existing processes, critical thinker.

Whether we are manufacturing components for our engines, driving innovation in fuel and noise reduction, or unlocking new opportunities to grow and deliver more productivity, our GE Aerospace teams are dedicated and making a global impact. Join us and help move the aerospace industry forward.

Additional Information

Relocation Assistance Provided: No

Posted 1 week ago

Apply

Exploring Hadoop Jobs in India

The demand for Hadoop professionals in India has been on the rise in recent years, with many companies leveraging big data technologies to drive business decisions. As a job seeker exploring opportunities in the Hadoop field, it is important to understand the job market, salary expectations, career progression, related skills, and common interview questions.

Top Hiring Locations in India

  1. Bangalore
  2. Mumbai
  3. Pune
  4. Hyderabad
  5. Chennai

These cities are known for their thriving IT industries and have a consistently high demand for Hadoop professionals.

Average Salary Range

The average salary range for Hadoop professionals in India varies by experience level. Entry-level Hadoop developers can expect to earn INR 4-6 lakhs per annum, while experienced professionals with specialized skills can earn upwards of INR 15 lakhs per annum.

Career Path

In the Hadoop field, a typical career path may include roles such as Junior Developer, Senior Developer, Tech Lead, and eventually progressing to roles like Data Architect or Big Data Engineer.

Related Skills

In addition to Hadoop expertise, professionals in this field are often expected to have knowledge of related technologies such as Apache Spark, HBase, Hive, and Pig. Strong programming skills in languages like Java, Python, or Scala are also beneficial.
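
To make this concrete, here is a minimal PySpark sketch of the classic word count over a file in HDFS. It is illustrative only: the application name and the HDFS path are placeholders, and it assumes a working Spark installation that can reach your cluster.

```python
# A minimal PySpark word count -- a sketch, not production code.
# Assumes Spark is installed; replace the placeholder HDFS path
# with a real one from your cluster.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hadoop-skills-demo").getOrCreate()

lines = spark.sparkContext.textFile("hdfs:///user/demo/input.txt")  # placeholder path
counts = (
    lines.flatMap(lambda line: line.split())  # one record per word
         .map(lambda word: (word, 1))         # pair each word with a count of 1
         .reduceByKey(lambda a, b: a + b)     # sum per word; this step shuffles
)

for word, count in counts.take(10):           # bring a small sample to the driver
    print(word, count)

spark.stop()
```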

Interview Questions

  • What is Hadoop and how does it work? (basic)
  • Explain the difference between HDFS and MapReduce. (medium)
  • How do you handle data skew in Hadoop? (medium)
  • What is YARN in Hadoop? (basic)
  • Describe the concept of NameNode and DataNode in HDFS. (medium)
  • What are the different types of join operations in Hive? (medium)
  • Explain the role of the ResourceManager in YARN. (medium)
  • What is the significance of the shuffle phase in MapReduce? (medium)
  • How does speculative execution work in Hadoop? (advanced)
  • What is the purpose of the Secondary NameNode in HDFS? (medium)
  • How do you optimize a MapReduce job in Hadoop? (medium)
  • Explain the concept of data locality in Hadoop. (basic)
  • What are the differences between Hadoop 1 and Hadoop 2? (medium)
  • How do you troubleshoot performance issues in a Hadoop cluster? (advanced)
  • Describe the advantages of using HBase over traditional RDBMS. (medium)
  • What is the role of the JobTracker in Hadoop? (medium)
  • How do you handle unstructured data in Hadoop? (medium)
  • Explain the concept of partitioning in Hive. (medium)
  • What is Apache ZooKeeper and how is it used in Hadoop? (advanced)
  • Describe the process of data serialization and deserialization in Hadoop. (medium)
  • How do you secure a Hadoop cluster? (advanced)
  • What is the CAP theorem and how does it relate to distributed systems like Hadoop? (advanced)
  • How do you monitor the health of a Hadoop cluster? (medium)
  • Explain the differences between Hadoop and traditional relational databases. (medium)
  • How do you handle data ingestion in Hadoop? (medium)
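
A few of these questions (the shuffle phase, MapReduce optimization, data ingestion) are easier to reason about with a concrete job in hand. Below is a hedged sketch of word count written for Hadoop Streaming, which accepts the mapper and reducer as plain scripts reading stdin and writing stdout. The file names are our own, and the exact job submission command depends on your distribution.

```python
#!/usr/bin/env python3
# mapper.py -- a sketch of a Hadoop Streaming mapper. Hadoop feeds
# it input records line by line on stdin; it emits "word<TAB>1"
# intermediate pairs on stdout.
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```python
#!/usr/bin/env python3
# reducer.py -- a sketch of the matching reducer. The shuffle phase
# sorts mapper output by key, so identical words arrive as a
# contiguous run and one pass with a running total is enough.
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, _, count = line.rstrip("\n").partition("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

if current_word is not None:
    print(f"{current_word}\t{current_count}")
```

You can dry-run the pair locally with `cat input.txt | python3 mapper.py | sort | python3 reducer.py`; the local `sort` stands in for the shuffle that several of the questions above probe. On a cluster, you would submit both scripts through the hadoop-streaming jar shipped with your distribution.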

Closing Remarks

As you navigate the Hadoop job market in India, remember to stay updated on the latest trends and technologies in the field. By honing your skills and preparing diligently for interviews, you can position yourself as a strong candidate for lucrative opportunities in the big data industry. Good luck on your job search!
