
3305 Hive Jobs - Page 8

JobPe aggregates job listings for easy access, but applications are submitted directly on the original job portal.

6.0 - 9.0 years

0 Lacs

Kolkata, West Bengal, India

On-site


At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

Position Name: Data Engineer
Position Level: Senior

Position Details
EY’s GDS Assurance Digital team’s mission is to develop, implement and integrate technology solutions that better serve our audit clients and engagement teams. As a member of EY’s core Assurance practice, you’ll develop deep audit-related technical knowledge and outstanding database, data analytics and programming skills. Ever-increasing regulations require audit departments to gather, organize and analyse more data than ever before. Often the data necessary to satisfy these increasingly complex regulations must be collected from a variety of systems and departments throughout an organization, and effectively and efficiently handling that variety and volume of data is often extremely challenging and time-consuming for a company. EY's GDS Assurance Digital team members work side-by-side with the firm's partners, clients and audit technical subject-matter experts to develop and incorporate technology solutions that enhance value, improve efficiencies and equip our clients with disruptive, market-leading tools supporting Assurance. GDS Assurance Digital provides solution architecture, application development, testing and maintenance support to the global Assurance service line, both on a proactive basis and in response to specific requests. EY is currently seeking a Big Data Developer to join the GDS Assurance Digital practice in Bangalore, India, to work on various Microsoft technology-based projects for customers across the globe.

Qualifications and Requirements (including experience, skills and additional qualifications)
A Bachelor's degree (BE/BTech/MCA & MBA) in Computer Science, Engineering, Information Systems Management, Accounting, Finance or a related field, with sound industry experience of 6 to 9 years.
Technical skills requirements:
Experience with SQL and NoSQL databases such as HBase, Cassandra or MongoDB
Good knowledge of Big Data querying tools such as Pig and Hive
ETL implementation with a tool such as Alteryx or Azure Data Factory
Experience with NiFi is good to have
Experience in at least one reporting tool (Power BI, Tableau or Spotfire) is a must

Analytical/decision-making responsibilities:
An ability to quickly understand complex concepts and use technology to support data modeling, analysis, visualization or process automation
Selects appropriately from applicable standards, methods, tools and applications and uses them accordingly
Ability to work within a multi-disciplinary team structure, but also independently
Demonstrates an analytical and systematic approach to problem solving
Communicates fluently, orally and in writing, and can present complex technical information to both technical and non-technical audiences
Able to plan, schedule and monitor work activities to meet time and quality targets
Able to rapidly absorb new technical information and business context, and apply it effectively
Ability to work in a team environment with a strong customer focus and good listening, negotiation and problem-resolution skills

Additional skills requirements:
A Senior is expected to maintain long-term client relationships, network and cultivate business development opportunities
Should have an understanding of, and experience with, software development best practices
Must be a team player

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
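As a rough illustration of the Hive querying this role calls for, here is a minimal PySpark sketch that runs a HiveQL aggregation; the database, table, and column names are hypothetical:

```python
from pyspark.sql import SparkSession

# Start a Spark session with Hive support so HiveQL tables are visible.
spark = (
    SparkSession.builder
    .appName("audit-metrics")
    .enableHiveSupport()
    .getOrCreate()
)

# Aggregate transaction volume per department from a (hypothetical) Hive table.
result = spark.sql("""
    SELECT department,
           COUNT(*)    AS txn_count,
           SUM(amount) AS total_amount
    FROM   audit_db.transactions
    WHERE  txn_date >= '2024-01-01'
    GROUP  BY department
    ORDER  BY total_amount DESC
""")

result.show(20, truncate=False)
```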

Posted 1 day ago

Apply

6.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


🧾 Job Title: Application Developer – Data Engineering
🕒 Experience: 4–6 Years
📅 Notice Period: Immediate to 20 Days

🔍 Job Summary:
We are looking for a highly skilled Data Engineering Application Developer to join our dynamic team. You will be responsible for the design, development, and configuration of data-driven applications that align with key business processes. Your role will also include refining data workflows, optimizing performance, and supporting business goals through scalable and reliable data solutions.

📌 Roles & Responsibilities:
Independently develop and maintain data pipelines and ETL processes.
Become a Subject Matter Expert (SME) in data engineering tools and practices.
Collaborate with cross-functional teams to gather requirements and provide data-driven solutions.
Actively participate in team discussions and contribute to problem-solving efforts.
Create and maintain comprehensive technical documentation, including application specifications and user guides.
Stay updated with industry best practices and continuously improve application and data processing performance.

🛠️ Professional & Technical Skills:
✅ Must-Have Skills:
Proficiency in Data Engineering, PySpark, and Python
Strong knowledge of ETL processes and data modeling
Experience working with cloud platforms like AWS or Azure
Hands-on expertise with SQL or NoSQL databases
Familiarity with other programming languages such as Java

➕ Good-to-Have Skills:
Knowledge of Big Data tools and frameworks (e.g., Hadoop, Hive, Kafka)
Experience with CI/CD tools and DevOps practices
Exposure to containerization tools like Docker or Kubernetes

#DataEngineering #PySpark #PythonDeveloper #ETLDeveloper #BigDataJobs #DataEngineer #BangaloreJobs #PANIndiaJobs #AWS #Azure #SQL #NoSQL #CloudDeveloper #ImmediateJoiners #DataPipeline #Java #Kubernetes #SoftwareJobs #ITJobs #NowHiring #HiringAlert #ApplicationDeveloper #DataJobs #ITCareers #JoinOurTeam #TechJobsIndia #JobOpening #FullTimeJobs
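For a sense of the PySpark ETL work described above, here is a minimal cleanse-transform-load sketch; the paths, column names, and validation rules are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("orders-etl").getOrCreate()

# Extract: read raw CSV data (hypothetical landing path).
raw = spark.read.option("header", True).csv("s3a://raw-bucket/orders/")

# Transform: cleanse nulls, normalize types, derive a column.
clean = (
    raw.dropna(subset=["order_id", "amount"])
       .withColumn("amount", F.col("amount").cast("double"))
       .withColumn("order_date", F.to_date("order_date", "yyyy-MM-dd"))
       .withColumn("is_large_order", F.col("amount") > 10000)
)

# Load: write partitioned Parquet for downstream analytics.
(clean.write
      .mode("overwrite")
      .partitionBy("order_date")
      .parquet("s3a://curated-bucket/orders/"))
```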

Posted 1 day ago

Apply

8.0 years

0 Lacs

Mumbai, Maharashtra, India

On-site


Background
Praan (Praan, Inc.) is an impact-focused deep-tech startup democratizing clean air using breakthrough filterless technology. The company is backed by top-tier VCs and CXOs globally and currently operates between the United States and India. Our team pays extreme attention to detail and loves building technology that's aspirational. Praan's team and culture are positioned to empower people to solve large global problems at an accelerated pace.

Why
Everyone worries about the climate-change doomsday expected in the 2050s. However, there is one doomsday that is already the reality for millions of people around the world today: air pollution takes more than 7 million lives globally every single year, and over 5% of premature child deaths in developing countries occur due to air pollution. Everyone has relied on governments or experts to solve the problem, but most solutions up until today have been either too expensive or too ineffective. Praan is an attempt at making the future cleaner, healthier, and safer for the generations to come.

Job Description
Supervise, monitor, and coordinate all production activities across the HIVE and MKII assembly lines
Ensure adherence to daily, weekly, and monthly production targets while maintaining product quality and minimizing downtime
Implement and sustain Kaizen, 5S, and other continuous improvement initiatives to enhance line efficiency and reduce waste
Oversee daily start-of-day and end-of-day inventory reporting
Ensure line balancing for optimal resource utilization and minimal bottlenecks
Monitor and manage manpower deployment, shift scheduling, absentee management and skill mapping to maintain productivity
Drive quality standards by coordinating closely with the Manufacturing Lead
Track and analyze key production KPIs (OEE, yield, downtime) and initiate corrective actions
Ensure adherence to SOPs, safety protocols, and compliance standards
Support new product introductions (NPIs) and design changes in coordination with R&D/engineering teams
Train and mentor line operators and line leaders, ensuring training, skill development, and adherence to performance standards
Monitor and report on key production metrics, including output, downtime, efficiency, scrap rates, and productivity, ensuring targets are met consistently
Maintain documentation and reports related to production planning, line output, incidents, and improvements

Skill Requirements
Diploma/Bachelor's degree in Mechanical, Production, Electronics, Industrial Engineering, or a related field
4–8 years of hands-on production supervision experience in a high-volume manufacturing environment managing the production of multiple products
Proven expertise in Kaizen, Lean Manufacturing, Line Balancing, and Shop Floor Management
Proven ability to manage large teams, allocate resources effectively, and meet production targets in a fast-paced, dynamic environment
Experience with production planning, manpower management, and problem-solving techniques (such as 5 Whys and Fishbone analysis)
Strong understanding of manufacturing KPIs and process documentation
Excellent leadership, communication, and conflict-resolution skills
Hands-on attitude with a willingness to work on-ground
Experience in automotive, consumer electronics, or similar high-volume industries

Praan is an equal opportunity employer and does not discriminate based on race, religion, caste, gender, disability or any other criteria. We just care about working with great human beings!

Posted 1 day ago

Apply

5.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Teamwork makes the stream work.

Roku is changing how the world watches TV
Roku is the #1 TV streaming platform in the U.S., Canada, and Mexico, and we've set our sights on powering every television in the world. Roku pioneered streaming to the TV. Our mission is to be the TV streaming platform that connects the entire TV ecosystem. We connect consumers to the content they love, enable content publishers to build and monetize large audiences, and provide advertisers unique capabilities to engage consumers. From your first day at Roku, you'll make a valuable - and valued - contribution. We're a fast-growing public company where no one is a bystander. We offer you the opportunity to delight millions of TV streamers around the world while gaining meaningful experience across a variety of disciplines.

About the team
The mission of Roku's Data Engineering team is to develop a world-class big data platform so that internal and external customers can leverage data to grow their businesses. Data Engineering works closely with business partners and Engineering teams to collect metrics on existing and new initiatives that are critical to business success. As a Senior Data Engineer working on Device metrics, you will design data models and develop scalable data pipelines to capture different business metrics across Roku Devices.

About the role
Roku pioneered streaming to the TV. We connect users to the streaming content they love, enable content publishers to build and monetize large audiences, and provide advertisers with unique capabilities to engage consumers. Roku streaming players and Roku TV™ models are available around the world through direct retail sales and licensing arrangements with TV brands and pay-TV operators. With tens of millions of players sold across many countries, thousands of streaming channels and billions of hours watched over the platform, building a scalable, highly available, fault-tolerant big data platform is critical for our success. This role is based in Bangalore, India and requires hybrid working, with 3 days in the office.

What you'll be doing
Build highly scalable, available, fault-tolerant distributed data processing systems (batch and streaming) handling tens of terabytes of data ingested every day and a petabyte-sized data warehouse
Build quality data solutions and refine existing diverse datasets into simplified data models that encourage self-service
Build data pipelines that optimize for data quality and are resilient to poor-quality data sources
Own the data mapping, business logic, transformations and data quality
Perform low-level systems debugging, performance measurement and optimization on large production clusters
Participate in architecture discussions, influence the product roadmap, and take ownership and responsibility over new projects
Maintain and support existing platforms and evolve them to newer technology stacks and architectures

We're excited if you have
Extensive SQL skills
Proficiency in at least one scripting language; Python is required
Experience in big data technologies like HDFS, YARN, MapReduce, Hive, Kafka, Spark, Airflow, Presto, etc.
Proficiency in data modeling, including designing, implementing, and optimizing conceptual, logical, and physical data models to support scalable and efficient data architectures
Experience with AWS, GCP, or Looker is a plus
Ability to collaborate with cross-functional teams such as developers, analysts, and operations to execute deliverables
5+ years of professional experience as a data or software engineer
BS in Computer Science; MS in Computer Science preferred

Benefits
Roku is committed to offering a diverse range of benefits as part of our compensation package to support our employees and their families. Our comprehensive benefits include global access to mental health and financial wellness support and resources. Local benefits include statutory and voluntary benefits which may include healthcare (medical, dental, and vision), life, accident, disability, commuter, and retirement options (401(k)/pension). Our employees can take time off work for vacation and other personal reasons to balance their evolving work and life needs. It's important to note that not every benefit is available in all locations or for every role. For details specific to your location, please consult with your recruiter.

The Roku Culture
Roku is a great place for people who want to work in a fast-paced environment where everyone is focused on the company's success rather than their own. We try to surround ourselves with people who are great at their jobs, who are easy to work with, and who keep their egos in check. We appreciate a sense of humor. We believe a few very talented people can do more, at lower cost, than a larger number of less talented teams. We're independent thinkers with big ideas who act boldly, move fast and accomplish extraordinary things through collaboration and trust. In short, at Roku you'll be part of a company that's changing how the world watches TV.

We have a unique culture that we are proud of. We think of ourselves primarily as problem-solvers, which itself is a two-part idea: we come up with the solution, but the solution isn't real until it is built and delivered to the customer. That penchant for action gives us a pragmatic approach to innovation, one that has served us well since 2002. To learn more about Roku, our global footprint, and how we've grown, visit https://www.weareroku.com/factsheet.

By providing your information, you acknowledge that you have read our Applicant Privacy Notice and authorize Roku to process your data subject to those terms.
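The "resilient to poor-quality data sources" requirement above typically means validating records and quarantining bad ones rather than failing the job. A minimal PySpark sketch of that pattern, with hypothetical paths and validation rules:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("device-metrics-ingest").getOrCreate()

# Hypothetical raw device-event feed.
events = spark.read.json("s3a://raw-bucket/device-events/")

# Validation rule: events need a device id and a sane, non-negative duration.
is_valid = F.col("device_id").isNotNull() & (F.col("watch_seconds") >= 0)

valid = events.filter(is_valid)
invalid = events.filter(~is_valid)

# Quarantine bad records for later inspection instead of failing the pipeline.
invalid.write.mode("append").json("s3a://quarantine-bucket/device-events/")

# Continue processing only the clean records.
valid.write.mode("append").parquet("s3a://curated-bucket/device-events/")
```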

Posted 1 day ago

Apply

7.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


There is an opportunity for a Hadoop Admin in Hyderabad, for which a walk-in interview will be held on 21st Jun 25 between 9:00 AM and 12:30 PM.

Venue: Deccan Park, Plot No.1, Hitech City Main Rd, Software Units Layout, HUDA Techno Enclave, Madhapur, Hyderabad, Telangana 500081

If you are interested, please share the details below to mamidi.p@tcs.com with the subject line "HADOOP ADMIN 21st Jun 25":
Email id:
Contact no:
Total experience:
Preferred location:
Current CTC:
Expected CTC:
Notice period:
Current organization:
Highest qualification that is full time:
Highest qualification university:
Any gap in education or employment:
If yes, how many years and reason for gap:
Are you available for interview on 9th Jan 25 (Yes/No):

We will send you an email by tomorrow night if you are shortlisted.

· 7+ years of working experience in Hadoop, with good exposure to Hive, Impala and Spark.
· Strong working knowledge of distributed systems, YARN, and cluster sizing (nodes, memory).
· Strong working experience in Sqoop, especially handling large volumes by splitting imports into multiple chunks.
· Good work experience in file-to-Hadoop ingestion.
· Basic understanding of SerDe types and storage formats (Parquet, Avro, ORC, etc.).
· Good knowledge of SQL basics: joins, RANK, and scenario-based queries.
· Ability to grasp the 'big picture' of a solution by considering all potential options in the impacted area.
· Aptitude to understand and adapt to newer technologies.
· Experience in managing and leading small development teams in an agile environment.
· The ability to work with teammates in a collaborative manner to achieve a mission.
· Presentation skills to prepare and present to large and small groups on technical and functional topics.
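The Sqoop point above, splitting a large import into chunks, maps onto Sqoop's --split-by/--num-mappers options. As a rough Python analog of the same chunking idea, here is a Spark JDBC read partitioned on a numeric key; the connection details and table are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("chunked-ingest").getOrCreate()

# Read a large table in 16 parallel chunks, split on a numeric primary key.
# This mirrors what `sqoop import --split-by id --num-mappers 16` does.
orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:mysql://db-host:3306/sales")  # hypothetical source
    .option("dbtable", "orders")
    .option("user", "etl_user")
    .option("password", "***")
    .option("partitionColumn", "id")   # split key
    .option("lowerBound", "1")
    .option("upperBound", "100000000")
    .option("numPartitions", "16")     # number of chunks read in parallel
    .load()
)

orders.write.mode("overwrite").parquet("/data/landing/orders/")
```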

Posted 1 day ago

Apply

5.0 years

0 Lacs

Coimbatore, Tamil Nadu, India

On-site


Experience: 5+ years
Notice Period: Immediate to 15 days
Rounds: 3 (virtual)
Mandatory skills: Apache Spark, Hive, Hadoop, Scala, Databricks

Job Description

The Role
Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights.
Constructing infrastructure for efficient ETL processes from various sources and storage systems.
Leading the implementation of algorithms and prototypes to transform raw data into useful information.
Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations.
Creating innovative data validation methods and data analysis tools.
Ensuring compliance with data governance and security policies.
Interpreting data trends and patterns to establish operational alerts.
Developing analytical tools, programs, and reporting mechanisms.
Conducting complex data analysis and presenting results effectively.
Preparing data for prescriptive and predictive modeling.
Continuously exploring opportunities to enhance data quality and reliability.
Applying strong programming and problem-solving skills to develop scalable solutions.

Requirements
Experience with Big Data technologies (Hadoop, Spark, NiFi, Impala).
5+ years of hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines.
High proficiency in Scala/Java and Spark for applied large-scale data processing.
Expertise with big data technologies, including Spark, Data Lake, and Hive.

Posted 1 day ago

Apply

5.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Location: Hyderabad
Budget: 3.5x
Notice: Immediate joiners

Requirements:
• BS degree in computer science, computer engineering or equivalent
• 5-9 years of experience delivering enterprise software solutions
• Familiarity with Spark, Scala, Python, and AWS cloud technologies
• 2+ years of experience across multiple Hadoop/Spark technologies such as Hadoop, MapReduce, HDFS, HBase, Hive, Flume, Sqoop, Kafka, and Scala
• A flair for data, schemas, and data models, and for bringing efficiency to the big-data life cycle
• Experience with Agile development methodologies
• Experience with data ingestion and transformation
• An understanding of secure application development methodologies
• Experience with Airflow and Python is preferred
• Understanding of automated QA needs related to Big Data technology
• Strong object-oriented design and analysis skills
• Excellent written and verbal communication skills

Responsibilities:
• Utilize your software engineering skills, including Spark, Python, and Scala, to analyze disparate, complex systems and collaboratively design new products and services
• Integrate new data sources and tools
• Implement scalable and reliable distributed data replication strategies
• Collaborate with other teams to design, develop, and deploy data tools that support both operations and product use cases
• Perform analysis of large data sets using components from the Hadoop ecosystem
• Own product features from development and testing through to production deployment
• Evaluate big data technologies and prototype solutions to improve our data processing architecture
• Automate different pipelines
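Since Airflow with Python is called out as preferred, here is a minimal sketch of an Airflow 2.x DAG wiring an ingest task ahead of a transform task; the task logic, DAG id, and schedule are hypothetical placeholders:

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator


def ingest():
    # Placeholder: pull data from a source system into a landing zone.
    print("ingesting raw data")


def transform():
    # Placeholder: run the Spark/PySpark transformation job.
    print("transforming landed data")


with DAG(
    dag_id="daily_ingest_pipeline",
    start_date=datetime(2024, 1, 1),
    schedule="@daily",   # Airflow 2.4+ keyword; older versions use schedule_interval
    catchup=False,
) as dag:
    ingest_task = PythonOperator(task_id="ingest", python_callable=ingest)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)

    # Transform runs only after ingestion succeeds.
    ingest_task >> transform_task
```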

Posted 1 day ago

Apply

9.0 - 14.0 years

0 Lacs

Hyderabad, Telangana, India

On-site


Greetings from TCS!!!

TCS is hiring for Big Data Architect
Location: PAN India
Years of Experience: 9-14 years

Job Description:
Experience with Python, Spark, and Hive data pipelines using ETL processes
Apache Hadoop development and implementation
Experience with streaming frameworks such as Kafka
Hands-on experience in Azure/AWS/Google data services
Work with big data technologies (Spark, Hadoop, BigQuery, Databricks) for data preprocessing and feature engineering.
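For the Kafka streaming requirement, a minimal PySpark Structured Streaming sketch that consumes a topic and lands it in a data lake; the broker address, topic, and paths are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

# Note: requires the spark-sql-kafka connector package on the Spark classpath.
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Subscribe to a (hypothetical) Kafka topic of click events.
stream = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker-1:9092")
    .option("subscribe", "click-events")
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast the value payload to string for downstream parsing.
events = stream.select(F.col("value").cast("string").alias("raw_json"))

# Continuously append micro-batches to the lake, with checkpointing for recovery.
query = (
    events.writeStream
    .format("parquet")
    .option("path", "s3a://lake/click-events/")
    .option("checkpointLocation", "s3a://lake/_checkpoints/click-events/")
    .start()
)

query.awaitTermination()
```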

Posted 1 day ago

Apply

0 years

0 Lacs

India

Remote


Years of experience: 8+
Mode of work: Remote

Design, develop, modify, and test software applications for the healthcare industry in an agile environment. Duties include:
Develop, support/maintain, and deploy software to support a variety of business needs
Provide technical leadership in the design, development, testing, deployment and maintenance of software solutions
Design and implement platform and application security for applications
Perform advanced query analysis and performance troubleshooting
Coordinate with senior-level stakeholders to ensure the development of innovative software solutions to complex technical and creative issues
Re-design software applications to improve maintenance cost, testing functionality, platform independence and performance
Manage user stories and project commitments in an agile framework to rapidly deliver value to customers
Deploy and operate software solutions using a DevOps model

Required skills: Azure Delta Lake, ADF, Databricks, PySpark, Oozie, Airflow, Big Data technologies (HBase, Hive), CI/CD (GitHub/Jenkins)

Posted 1 day ago

Apply

10.0 - 15.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Position: Cloudera Data Engineer
Location: Chennai
Notice period: 0-30 days / immediate joiners
Experience: 10 to 15 years

The Cloudera Data Engineer will focus on designing, building, and maintaining scalable data pipelines and platforms within the Cloudera Hadoop ecosystem. Key skills include expertise in data warehousing, ETL processes, and strong programming abilities in languages like Python and SQL. They will also need to be proficient in Cloudera tools, including Spark, Hive, and potentially Airflow for orchestration.

Thank you

Posted 2 days ago

Apply

7.0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


TEKsystems is seeking a Senior AWS + Data Engineer to join our dynamic team. The ideal candidate should have data engineering expertise with Hadoop, Scala/Python, and AWS services. This role involves designing, developing, and maintaining scalable and reliable software solutions.

Job Title: Data Engineer – Spark/Scala (Batch Processing)
Location: Manyata - Hybrid
Experience: 7+ years
Type: Full-Time

Mandatory Skills:
7-10 years' experience in design, architecture or development in Analytics and Data Warehousing.
Experience in building end-to-end solutions on a Big Data platform with Spark or Scala programming.
5 years of solid experience in ETL pipeline building with a Spark or Scala programming framework, with knowledge of developing UNIX shell scripts and Oracle SQL/PL-SQL.
Experience in Big Data platform ETL development on the AWS cloud platform.
Proficiency in AWS cloud services, specifically EC2, S3, Lambda, Athena, Kinesis, Redshift, Glue, EMR, DynamoDB, IAM, Secrets Manager, Step Functions, SQS, SNS, and CloudWatch.
Excellent skills in Python-based framework development are mandatory.
Should have experience with Oracle SQL database programming, SQL performance tuning, and relational model analysis.
Extensive experience with Teradata data warehouses and Cloudera Hadoop.
Proficiency across Enterprise Analytics/BI/DW/ETL technologies such as Teradata Control Framework, Tableau, OBIEE, SAS, Apache Spark, and Hive.
Analytics & BI architecture appreciation and broad experience across all technology disciplines.
Experience in working within a Data Delivery Life Cycle framework and Agile methodology.
Extensive experience in large enterprise environments handling large volumes of datasets with high SLAs.
Good knowledge of developing UNIX scripts, Oracle SQL/PL-SQL, and Autosys JIL scripts.
Well versed in AI-powered engineering tools like Cline and GitHub Copilot.

Please send resumes to nvaseemuddin@teksystems.com or kebhat@teksystems.com
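As a small illustration of the Spark-on-AWS batch work above, here is a sketch that aggregates sales data and writes it to S3 in a Hive-style partitioned Parquet layout that Athena or a Glue crawler can pick up; the bucket names and columns are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("daily-batch-aggregate").getOrCreate()

# Hypothetical curated input written by an upstream ingestion job.
sales = spark.read.parquet("s3a://curated-bucket/sales/")

# Daily revenue per region, the kind of batch aggregate a downstream
# Athena/BI layer would query.
daily = (
    sales.groupBy("sale_date", "region")
         .agg(F.sum("amount").alias("revenue"),
              F.countDistinct("order_id").alias("orders"))
)

# Hive-style partitioning (sale_date=YYYY-MM-DD/) keeps Athena scans cheap.
(daily.write
      .mode("overwrite")
      .partitionBy("sale_date")
      .parquet("s3a://analytics-bucket/daily_revenue/"))
```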

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site


Current scope and span of work:

Summary: We need a data engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.

Required Skills:
Python
Processing of large quantities of text documents
Extraction of text from Office and PDF documents
Input JSON to an API, output JSON to an API
NiFi (or a similar technology compatible with current EMIT practices)
Basic understanding of AI/ML concepts
Database/search engine/SOLR skills
SQL - build queries to analyze, create and update databases
Understands the basics of hybrid search
Experience working with terabytes (TB) of data
Basic OpenML/Python/Azure knowledge
Scripting knowledge/experience in an Azure environment to automate
Cloud systems experience related to search and databases

Platforms:
Databricks
Snowflake
ESRI ArcGIS / SDE
New GenAI app being developed

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
4. Make data available for end users from several hundred LAN and SharePoint areas
5. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
6. Work closely with the Ingestion Lead & Vendor on issues related to data ingestion

Technical skills demonstrated:
1. SOLR - backend database
2. NiFi - data movement
3. PySpark - data processing
4. Hive & Oozie - job monitoring
5. Querying - SQL, HQL and SOLR querying
6. SQL
7. Python

Behavioral skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a Lead and implement
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality-control checking of work
5. Experience working with a globally distributed team in different time zones
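For the SOLR querying noted above, a minimal Python sketch against Solr's standard HTTP select endpoint; the host, core name, and fields are hypothetical:

```python
import requests

SOLR_URL = "http://solr-host:8983/solr/documents/select"  # hypothetical core

# Keyword query with a filter and a field list, returned as JSON.
params = {
    "q": 'body_text:"data ingestion"',  # main query
    "fq": "doc_type:pdf",               # filter query narrows the result set
    "fl": "id,title,score",             # fields to return
    "rows": 10,
    "wt": "json",
}

resp = requests.get(SOLR_URL, params=params, timeout=30)
resp.raise_for_status()

for doc in resp.json()["response"]["docs"]:
    print(doc["id"], doc.get("title"))
```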

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


We are intending to hire a Data Engineer to handle day-to-day activities involving data ingestion from multiple source locations, help identify data sources, troubleshoot issues, and engage with a third-party vendor to meet stakeholders' needs.

Work Location: Chennai, Hyderabad, or Pune (WFO). Shift hours: 2.00pm to 11.00pm IST. Immediate joiners required.

Required Skills:
Python
Processing of large quantities of text documents
Extraction of text from Office and PDF documents
Input JSON to an API, output JSON to an API
NiFi (or a similar technology compatible with current EMIT practices)
Basic understanding of AI/ML concepts
Database/search engine/SOLR skills
SQL - build queries to analyze, create and update databases
Understands the basics of hybrid search
Experience working with terabytes (TB) of data
Basic OpenML/Python/Azure knowledge
Scripting knowledge/experience in an Azure environment to automate
Cloud systems experience related to search and databases

Platforms:
Databricks
Snowflake
ESRI ArcGIS / SDE
New GenAI app being developed

Scope of work:
1. Ingest TB of data from multiple sources identified by the Ingestion Lead
2. Optimize data pipelines to improve data processing, speed, and data availability
4. Make data available for end users from several hundred LAN and SharePoint areas
5. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
6. Work closely with the Ingestion Lead & Vendor on issues related to data ingestion

Technical skills demonstrated:
1. SOLR - backend database
2. NiFi - data movement
3. PySpark - data processing
4. Hive & Oozie - job monitoring
5. Querying - SQL, HQL and SOLR querying
6. SQL
7. Python

Behavioral skills demonstrated:
1. Excellent communication skills
2. Ability to receive direction from a Lead and implement
3. Prior experience working in an Agile setup, preferred
4. Experience troubleshooting technical issues and quality-control checking of work
5. Experience working with a globally distributed team in different time zones

Posted 2 days ago

Apply

170.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Job Description

Job Summary
Asset Transfer - Wealth Management Ops

Key Responsibilities

Strategy
Support and develop the Assets Transfer Team in the PvB Securities Operations Hub (Singapore/Hong Kong/Jersey) to facilitate effective processing of transactions in a controlled environment.

Business
Good knowledge in dealing with Settlements and Assets Transfer in global markets and the products offered in the SCB Private Banking world. Responsible for ensuring a high quality of service and support to Platinum and PvB clients; primary contact person for Front Office and SCB staff for advice, enquiries and complaints relating to Assets Transfer matters.

Processes
Ensure handling in accordance with the bank operational menu, desk instructions, compliance, and regulatory requirements. Make decisions and demonstrate problem-solving skills based on the internal Procedure Manual / Operational Policy Guidelines issued by the Bank. Continuously improve productivity.

Governance
Maintain strong stakeholder engagement with Front Ends (CSM/RMs/TLs), WM Country and Business, COO / Operations, T&I, Risk & Compliance and Group Internal Audit to ensure alignment across stakeholder groups to support the tribe deliverables. Escalate appropriately to ensure key stakeholders like the Country Head, Hive Lead, Hive Tech Lead and Chief Product Owner are updated and able to intervene as required.

Regulatory & Business Conduct
Display exemplary conduct and live by the Group's Values and Code of Conduct.

Skills and Experience
Action Oriented
Collaborates
Customer Focus
Courage
Nimble Learning

Degree, Advanced/Higher/Graduate Diploma in Finance/Accountancy/Banking or equivalent
Relevant product knowledge of Managed Investments, Fixed Income and Structured Products
Conceptual understanding of the Asset Transfer processing workflow
System awareness around T24, TLM, Bloomberg, Refinitiv, eBBS
Experience dealing with global banks, custodians, fund houses, TAs, and Front End CSMs/TLs
Ability to work under pressure and deliver high-quality output under tight timelines
Flexibility to work in shifts or during public holidays

Qualifications
Diploma or Degree

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together We
Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What we offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
Flexible working options based around home and office locations, with flexible working patterns.
Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental-health first-aiders and all sorts of self-help toolkits.
A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies - everyone feels respected and can realise their full potential.

Recruitment Assessments
Some of our roles use assessments to help us understand how suitable you are for the role you've applied to. If you are invited to take an assessment, this is great news. It means your application has progressed to an important stage of our recruitment process.

Visit our careers website www.sc.com/careers

Profile Description: Standard Chartered Bank
Interested candidates, please email your profile to snehalsunil.shinde@sc.com

Posted 2 days ago

Apply

12.0 years

0 Lacs

Pune, Maharashtra, India

On-site


The Solution Architect - Big Data is a strategic professional who stays abreast of technology developments within their own field and contributes to directional strategy by considering their application in their own job and business. Recognized as a technical authority for an area within the business. Requires basic commercial awareness. Developed communication and diplomacy skills are required in order to guide, influence and convince others, in particular colleagues in other areas and occasional external customers. Significant impact on the area through complex deliverables. Provides advice and counsel related to the technology or operations. Work impacts an entire area, which eventually affects the overall performance and effectiveness of the sub-function/job family.

Responsibilities:
Executes the architectural vision for all IT systems through major, complex IT architecture projects; ensures that architecture conforms to enterprise blueprints.
Develops technology road maps, while keeping up to date with emerging technologies, and recommends business directions based on these technologies.
Provides technical leadership and is responsible for developing components of, or the overall, systems design.
Translates complex business problems into sound technical solutions.
Applies hardware engineering and software design theories and principles in researching, designing, and developing product hardware and software interfaces.
Provides integrated systems planning and recommends innovative technologies that will enhance the current system.
Recommends appropriate desktop, computer platform, and communication links required to support IT goals and strategy.
Exhibits good knowledge of how their own specialism contributes to the business and a good understanding of competitors' products and services.
Acts as an advisor or mentor to junior team members.
Requires sophisticated analytical thought to resolve issues in a variety of complex situations.
Impacts the architecture function by influencing decisions through advice, counsel or facilitating services.
Guides, influences and persuades others with developed communication and diplomacy skills.
Performs other job duties and functions as assigned.
Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency.

Qualifications:
12+ years' experience in Big Data and/or Public Cloud
8 years' experience working on Big Data technologies: Hadoop, HDFS, Hive, Spark, Impala, etc.
Technical expertise in the financial services industry and/or regulatory environments
Excellent knowledge of, and experience in, solutioning cloud-native solutions
Experience with migrating on-prem applications to cloud architectures, or developing cloud-native applications, for any of the following: AWS, Azure, GCP, OpenShift
Ability to work across technology stacks and perform R&D on new technologies
Proficiency in one or more programming languages like Java, Python etc.
Consistently demonstrates clear and concise written and verbal communication
Management and prioritization skills
Ability to develop working relationships
Ability to manage multiple activities and changing priorities
Ability to work under pressure and to meet tight deadlines
Self-starter with the ability to take the initiative and master new tasks quickly
Methodical, with attention to detail

Preference: Experience architecting Gen AI-based technology solutions is preferred

Education: Bachelor's/University degree or equivalent experience, potentially a Master's degree

------------------------------------------------------
Job Family Group: Technology
------------------------------------------------------
Job Family: Architecture
------------------------------------------------------
Time Type: Full time
------------------------------------------------------

Citi is an equal opportunity and affirmative action employer. Qualified applicants will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, or status as a protected veteran. Citigroup Inc. and its subsidiaries ("Citi") invite all qualified interested applicants to apply for career opportunities. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity, review Accessibility at Citi. View the "EEO is the Law" poster. View the EEO is the Law Supplement. View the EEO Policy Statement. View the Pay Transparency Posting.

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site


Data Engineer Intern - Xiaomi India
Location: Bangalore, India
Duration: 6-month internship
Eligibility: Recent graduates (B.Tech/M.Tech in CS, IT or other related fields)

Xiaomi is one of the world's leading technology companies, with a strong presence in India across smartphones, smart devices, and internet services. At Xiaomi India, data is at the core of all strategic decisions. We're looking for passionate Data Engineer Interns to work on high-impact projects involving large-scale data systems, data modeling, and pipeline engineering to support business intelligence, analytics, and AI use cases.

Key Responsibilities
Assist in building scalable data pipelines using Python and SQL.
Support data modeling activities for analytics and reporting use cases.
Perform data cleansing, transformation, and validation using PySpark.
Collaborate with data engineers and analysts to ensure high data quality and availability.
Work on Hadoop ecosystem tools to process large datasets.
Contribute to data documentation and maintain version-controlled scripts.

Technical Skills Required
Strong proficiency in Python for data processing and scripting.
Good knowledge of SQL - writing complex queries, joins, and aggregations.
Understanding of data modeling concepts - star/snowflake schemas, fact/dimension tables.
Familiarity with the Big Data / Hadoop ecosystem - HDFS, Hive, Spark.
Basic exposure to PySpark is a strong plus.
Experience with tools like Jupyter Notebook, VS Code, or any modern IDE.
Exposure to cloud platforms (AWS/Azure/GCP/Databricks) is a bonus.

Soft Skills
Eagerness to learn and work in a fast-paced, data-driven environment.
Strong analytical thinking and attention to detail.
Good communication and collaboration skills.
Self-starter with the ability to work independently and in teams.
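To make the star-schema point concrete, here is a minimal PySpark sketch joining a fact table to a dimension table and aggregating a measure, the standard fact/dimension query pattern; the table and column names are hypothetical:

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("star-schema-demo").enableHiveSupport().getOrCreate()

# Fact table: one row per sale, with foreign keys to dimensions.
fact_sales = spark.table("warehouse.fact_sales")    # hypothetical
# Dimension table: one row per product, with descriptive attributes.
dim_product = spark.table("warehouse.dim_product")  # hypothetical

# Classic star-schema query: join fact to dimension on the surrogate key,
# then aggregate a measure by a dimension attribute.
revenue_by_category = (
    fact_sales.join(dim_product, on="product_key", how="inner")
              .groupBy("category")
              .agg(F.sum("sale_amount").alias("total_revenue"))
              .orderBy(F.desc("total_revenue"))
)

revenue_by_category.show()
```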

Posted 2 days ago

Apply

10.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


This job is with Standard Chartered Bank, an inclusive employer and a member of myGwork - the largest global platform for the LGBTQ+ business community. Please do not contact the recruiter directly.

Job Summary
The Chapter Lead - Backend Development is a hands-on developer role focusing on back-end development, accountable for people management and capability development of the Chapter's members. Responsibilities in detail are:

Responsibilities
Oversee the execution of functional standards and best practices and provide technical assistance to the members of the Chapter.
Responsible for the quality of the code repository where applicable.
Maintain exemplary coding standards within the team, contributing to code base development and code repository management.
Perform code reviews to guarantee quality and promote a culture of technical excellence in Java development.
Function as a technical leader and active coder, setting and enforcing domain-specific best practices and technology standards.
Allocate technical resources and personal coding time effectively, balancing leadership with hands-on development tasks.
Maintain a dual focus on leadership and hands-on development, committing code while steering the chapter's technical direction.
Oversee Java backend development standards within the chapter across squads, ensuring uniform excellence and adherence to best coding practices.
Harmonize Java development methodologies across the squad, guiding the integration of innovative practices that align with the bank's engineering strategies.
Advocate for the adoption of cutting-edge Java technologies and frameworks, driving the evolution of backend practices to meet future challenges.

Strategy
Act as a conduit for the wider domain strategy, for example technical standards.
Prioritise and make available capacity for technical debt.
This role is about capability building; it is not to own applications or delivery.
Actively shape and drive the bank-wide engineering strategy and programmes to uplift standards and steer the technological direction towards excellence.
Act as a custodian for Java backend expertise, providing strategic leadership to enhance skill sets and ensure the delivery of high-performance banking solutions.

Business
Experienced practitioner making hands-on contributions to squad delivery for their craft (e.g. Engineering).
Responsible for balancing skills and capabilities across teams (squads) and hives in partnership with the Chief Product Owner & Hive Leadership, and in alignment with the fixed capacity model.
Responsible for evolving the craft towards improving automation, simplification and innovative use of the latest market trends.
Collaborate with product owners and other tech leads to ensure applications meet functional requirements and strategic objectives.

Processes
Promote a feedback-rich environment, utilizing internal and external insights to continuously improve chapter operations.
Adopt and embed the Change Delivery Standards throughout the lifecycle of the product/service.
Ensure roles, job descriptions and expectations are clearly set, and provide periodic feedback to the entire team.
Follow the chapter operating model to ensure a system exists to continue building the capability and performance of the chapter.
Chapter Lead duties may vary based upon the specific chapter domain being led.

People & Talent
Accountable for people management and capability development of the Chapter's members.
Review metrics on capabilities and performance across the area, maintain an improvement backlog for the Chapter and drive its continual improvement.
Focus on the development of people and capabilities as the highest priority.

Risk Management
Responsible for effective capacity risk management across the Chapter with regards to attrition and leave plans.
Ensure the chapter follows the standards with respect to risk management as applicable to the chapter domain.
Adhere to common practices to mitigate risk in the respective domain.
Design and uphold a robust risk management plan, with contingencies for succession and role continuity, especially in critical positions.

Governance
Ensure all artefacts and assurance deliverables meet the required standards and policies (e.g., SCB Governance Standards, ESDLC etc.).

Regulatory & Business Conduct
Ensure a comprehensive understanding of and adherence to local banking laws, anti-money-laundering regulations, and other compliance mandates.
Conduct business activities with a commitment to legal and regulatory compliance, fostering an environment of trust and respect.

Key stakeholders
Chapter Area Lead
Sub-domain Tech Lead
Domain Architect
Business Leads / Product Owners

Other Responsibilities
Champion the company's broader mission and values, integrating them into daily operations and team ethos.
Undertake additional responsibilities as necessary, ensuring they contribute to the organisation's strategic aims and adhere to Group and other relevant policies.

Skills and Experience
Hands-on Java development
Leadership in system architecture
Database proficiency
CI/CD
Container platforms - Kubernetes / OCP / Podman

Qualifications
Bachelor's or Master's degree in Computer Science, Computer Engineering, or a related field, with preference given to advanced degrees.
10 years of professional Java development experience, including a proven record in backend system architecture and API design.
At least 5 years in a leadership role managing diverse development teams and spearheading complex Java projects.
Proficiency in a range of Java frameworks such as Spring, Spring Boot, and Hibernate, and an understanding of Apache Struts.
Proficient in Java, with solid expertise in core concepts like object-oriented programming, data structures, and complex algorithms.
Knowledgeable in web technologies, able to work with HTTP, RESTful APIs, JSON, and XML.
Expert knowledge of relational databases such as Oracle, MySQL and PostgreSQL; experience with NoSQL databases like MongoDB and Cassandra is a plus.
Familiarity with DevOps tools and practices, including CI/CD pipeline deployment, containerisation technologies like Docker and Kubernetes, and cloud platforms such as AWS, Azure, or GCP.
Solid grasp of front-end technologies (HTML, CSS, JavaScript) for seamless integration with backend systems.
Strong version control skills using tools like Git / Bitbucket, with a commitment to maintaining high standards of code quality through reviews and automated tests.
Exceptional communication and team-building skills, with the capacity to mentor developers, facilitate technical skill growth, and align team efforts with strategic objectives.
Strong problem-solving skills and attention to detail.
Excellent communication and collaboration skills.
Ability to work effectively in a fast-paced, dynamic environment.

About Standard Chartered
We're an international bank, nimble enough to act, big enough for impact. For more than 170 years, we've worked to make a positive difference for our clients, communities, and each other. We question the status quo, love a challenge and enjoy finding new opportunities to grow and do better than before. If you're looking for a career with purpose and you want to work for a bank making a difference, we want to hear from you. You can count on us to celebrate your unique talents and we can't wait to see the talents you can bring us. Our purpose, to drive commerce and prosperity through our unique diversity, together with our brand promise, to be here for good, are achieved by how we each live our valued behaviours. When you work with us, you'll see how we value difference and advocate inclusion.

Together We
Do the right thing and are assertive, challenge one another, and live with integrity, while putting the client at the heart of what we do
Never settle, continuously striving to improve and innovate, keeping things simple and learning from doing well, and not so well
Are better together, we can be ourselves, be inclusive, see more good in others, and work collectively to build for the long term

What We Offer
In line with our Fair Pay Charter, we offer a competitive salary and benefits to support your mental, physical, financial and social wellbeing.
Core bank funding for retirement savings, medical and life insurance, with flexible and voluntary benefits available in some locations.
Time-off including annual leave, parental/maternity (20 weeks), sabbatical (12 months maximum) and volunteering leave (3 days), along with minimum global standards for annual and public holidays, which combine to 30 days minimum.
Flexible working options based around home and office locations, with flexible working patterns.
Proactive wellbeing support through Unmind, a market-leading digital wellbeing platform, development courses for resilience and other human skills, a global Employee Assistance Programme, sick leave, mental-health first-aiders and all sorts of self-help toolkits.
A continuous learning culture to support your growth, with opportunities to reskill and upskill and access to physical, virtual and digital learning.
Being part of an inclusive and values-driven organisation, one that embraces and celebrates our unique diversity across our teams, business functions and geographies - everyone feels respected and can realise their full potential.

Posted 2 days ago

Apply

0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Introduction
A career in IBM Consulting is rooted in long-term relationships and close collaboration with clients across the globe. You'll work with visionaries across multiple industries to improve the hybrid cloud and AI journey for the most innovative and valuable companies in the world. Your ability to accelerate impact and make meaningful change for your clients is enabled by our strategic partner ecosystem and our robust technology platforms across the IBM portfolio.

Your Role And Responsibilities
The Developer leads cloud application development and deployment. A developer's responsibility is to lead the execution of a project by working with a senior-level resource on assigned development/deployment activities, and to design, build, and maintain cloud environments focusing on uptime, access, control, and network security using automation and configuration management tools.

Preferred Education
Master's Degree

Required Technical And Professional Expertise
Strong proficiency in Java, the Spring Framework, Spring Boot, and RESTful APIs; excellent understanding of OOP and design patterns.
Strong knowledge of ORM tools like Hibernate or JPA and Java-based microservices frameworks; hands-on experience with Spring Boot microservices.
Strong knowledge of microservice logging, monitoring, debugging and testing; in-depth knowledge of relational databases (e.g., MySQL).
Experience with container platforms such as Docker and Kubernetes; experience with messaging platforms such as Kafka or IBM MQ; good understanding of Test-Driven Development.
Familiarity with Ant, Maven or another build automation framework; good knowledge of basic UNIX commands.

Preferred Technical And Professional Experience
Experience in concurrent design and multi-threading.
Primary skills: Core Java, Spring Boot, Java2/EE, Microservices; Hadoop ecosystem (HBase, Hive, MapReduce, HDFS, Pig, Sqoop etc.); Spark.
Good to have: Python.

Posted 2 days ago

Apply

3.0 years

0 Lacs

Chennai, Tamil Nadu, India

On-site


Description
As a Research Analyst, you'll collaborate with experts to develop cutting-edge ML solutions for business needs. You'll drive product pilots, demonstrating innovative thinking and customer focus. You'll build scalable solutions, write high-quality code, and develop state-of-the-art ML models. You'll coordinate between science and software teams, optimizing solutions. The role requires thriving in ambiguous, fast-paced environments and working independently with ML models.

Key job responsibilities
Collaborate with seasoned Applied Scientists and propose best-in-class ML solutions for business requirements
Dive deep to drive product pilots, demonstrating the Think Big and Customer Obsession leadership principles to steer the product roadmap
Build scalable solutions in partnership with Applied Scientists by developing technical intuition to write high-quality code and develop state-of-the-art ML models utilizing the most recent research breakthroughs in academia and industry
Coordinate design efforts between Science and Software teams to deliver optimized solutions
Thrive in ambiguous, uncertain and fast-moving ML use-case development; be familiar with ML models and work independently
Mentor Junior Research Analysts (RAs) and contribute to RA hiring

About The Team
The Retail Business Services Technology (RBS Tech) team develops the systems and science to accelerate Amazon's flywheel. The team drives three core themes: 1) Find and fix all customer and selling-partner experience (CX and SPX) defects using technology, 2) Generate comprehensive insights for brand growth opportunities, and 3) Completely automate Stores tasks.

Our vision for MLOE is to achieve ML operational excellence across Amazon through continuous innovation, scalable infrastructure, and a data-driven approach to optimize value, efficiency, and reliability. We focus on key areas for enhancing machine learning operations: a) Model evaluation: expanding the LLM-based audit platform to support multilingual and multimodal auditing, and developing an LLM-powered testing framework for conversational systems to automate the validation of conversational flows, ensuring scalable, accurate, and efficient end-to-end testing. b) Guardrails: building common guardrail APIs that teams can integrate to detect and prevent egregious errors, knowledge-grounding issues, PII breaches, and biases. c) Deployment framework: supporting LLM deployments and seamlessly integrating them with our release management processes.

Basic Qualifications
Bachelor's degree in quantitative or STEM disciplines (Science, Technology, Engineering, Mathematics)
3+ years of relevant work experience solving real-world business problems using machine learning, deep learning, data mining and statistical algorithms
Strong hands-on programming skills in Python, SQL, and Hadoop/Hive; additional knowledge of Spark, Scala, R, or Java desired but not mandatory
Strong analytical thinking
Ability to creatively solve business problems, innovating new approaches where required, and to articulate ideas to a wide range of audiences using strong data, written and verbal communication skills
Ability to collaborate effectively across multiple teams and stakeholders, including development teams, product management and operations

Preferred Qualifications
Master's degree with specialization in ML, NLP or Computer Vision preferred
3+ years of relevant work experience in related fields (project management, customer advocacy, product ownership, engineering, business analysis); diverse experience will be favored, e.g. a mix of experience across different roles
In-depth understanding of machine learning concepts, including developing models and tuning hyper-parameters, as well as deploying models and building ML services
Technical expertise and experience in data science, ML and statistics

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 15 SEZ
Job ID: A2911939

Posted 2 days ago

Apply

8.0 years

0 Lacs

Bengaluru

On-site

Overview: Working at Atlassian
Atlassians can choose where they work – whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities: Team: Core Engineering Reliability Team
  • Collaborate with engineering and TPM leaders, developers, and process engineers to create data solutions that extract actionable insights from incident and post-incident management data, supporting objectives of incident prevention and reducing detection, mitigation, and communication times.
  • Work with diverse stakeholders to understand their needs and design data models, acquisition processes, and applications that meet those requirements.
  • Add new sources, implement business rules, and generate metrics to empower product analysts and data scientists.
  • Serve as the data domain expert, mastering the details of our incident management infrastructure.
  • Take full ownership of problems, from ambiguous requirements through rapid iterations.
  • Enhance data quality by leveraging and refining internal tools and frameworks to automatically detect issues.
  • Cultivate strong relationships between teams that produce data and those that build insights.

Qualifications: Minimum Qualifications / Your background:
  • BS in Computer Science or equivalent experience, with 8+ years as a Senior Data Engineer or in a similar role
  • 10+ years of progressive experience building scalable datasets and reliable data engineering practices
  • Proficiency in Python, SQL, and data platforms like Databricks
  • Proficiency in relational databases and query authoring (SQL)
  • Demonstrable expertise designing data models for optimal storage and retrieval to meet product and business requirements
  • Experience building and scaling experimentation practices, statistical methods, and tools in a large-scale organization
  • Excellence in building scalable data pipelines using Spark (SparkSQL) with the Airflow scheduler/executor framework or similar scheduling tools
  • Expert experience working with AWS data services or similar Apache projects (Spark, Flink, Hive, and Kafka)
  • Understanding of data engineering tools, frameworks, and standards to improve the productivity and quality of output for data engineers across the team
  • Well versed in modern software development practices (Agile, TDD, CI/CD)

Desirable Qualifications
  • Demonstrated ability to design and operate data infrastructure that delivers high reliability for our customers
  • Familiarity working with datasets such as monitoring, observability, and performance data

Benefits & Perks
Atlassian offers a wide range of perks and benefits designed to support you, your family, and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits.

About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet, and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together. We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.

To provide you the best experience, we can support with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them. To learn more about our culture and hiring process, visit go.atlassian.com/crh.

Posted 2 days ago

Apply

2.0 years

3 - 5 Lacs

Bengaluru

On-site

Wipro Limited (NYSE: WIT, BSE: 507685, NSE: WIPRO) is a leading technology services and consulting company focused on building innovative solutions that address clients' most complex digital transformation needs. Leveraging our holistic portfolio of capabilities in consulting, design, engineering, and operations, we help clients realize their boldest ambitions and build future-ready, sustainable businesses. With over 230,000 employees and business partners across 65 countries, we deliver on the promise of helping our customers, colleagues, and communities thrive in an ever-changing world. For additional information, visit us at www.wipro.com.

Job Description
  • Research, design, develop, and modify computer vision and machine learning algorithms and models, leveraging experience with technologies such as Caffe, Torch, or TensorFlow.
  • Shape product strategy for highly contextualized applied ML/AI solutions by engaging with customers, solution teams, discovery workshops, and prototyping initiatives.
  • Help build a high-impact ML/AI team by supporting recruitment, training, and development of team members.
  • Serve as an evangelist by engaging in the broader ML/AI community through research, speaking/teaching, formal collaborations, and/or other channels.

Knowledge & Abilities:
  • Designing integrations of, and tuning, machine learning and computer vision algorithms
  • Researching and prototyping techniques and algorithms for object detection and recognition
  • Convolutional neural networks (CNNs) for image classification and object detection
  • Familiarity with embedded vision processing systems
  • Open-source tools and platforms
  • Statistical modeling, data extraction, and analysis
  • Constructing, training, evaluating, and tuning neural networks

Mandatory Skills:
  • One or more of the following: Java, C++, Python
  • Deep learning frameworks such as Caffe, Torch, or TensorFlow, and an image/video vision library such as OpenCV, Clarifai, Google Cloud Vision, etc.
  • Supervised and unsupervised learning
  • Feature learning, text mining, and prediction models (e.g., deep learning, collaborative filtering, SVM, and random forest) on a big data computation platform (Hadoop, Spark, Hive, and Tableau)
  • One or more of the following: Tableau, Hadoop, Spark, HBase, Kafka

Experience:
  • 2-5 years of work or educational experience in Machine Learning or Artificial Intelligence
  • Creation and application of machine learning algorithms to a variety of real-world problems with large datasets
  • Building scalable machine learning systems and data-driven products, working with cross-functional teams
  • Working with cloud services like AWS, Microsoft, IBM, and Google Cloud
  • Working with one or more of the following: natural language processing, text understanding, classification, pattern recognition, recommendation systems, targeting systems, ranking systems, or similar

Nice to Have:
  • Contribution to research communities and/or efforts, including publishing papers at conferences such as NIPS, ICML, ACL, CVPR, etc.

Education: BA/BS (advanced degree preferable) in Computer Science, Engineering, or a related technical field, or equivalent practical experience.

Wipro is an Equal Employment Opportunity employer and makes all employment and employment-related decisions without regard to a person's race, sex, national origin, ancestry, disability, sexual orientation, or any other status protected by applicable law.

Reinvent your world. We are building a modern Wipro. We are an end-to-end digital transformation partner with the boldest ambitions. To realize them, we need people inspired by reinvention. Of yourself, your career, and your skills. We want to see the constant evolution of our business and our industry. It has always been in our DNA - as the world around us changes, so do we. Join a business powered by purpose and a place that empowers you to design your own reinvention. Come to Wipro. Realize your ambitions. Applications from people with disabilities are explicitly welcome.

Posted 2 days ago

Apply

0 years

0 Lacs

Tamil Nadu, India

On-site

Company Description
Greetings of the day. We are writing to introduce ourselves as ANNAI LATEX PRIVATE LIMITED, Tirunelveli, Tamil Nadu, a new leader in the manufacture of latex surgical gloves. Our plant was erected by KENDEK, a leading Malaysian company, and we use formers from CERAMTEC of Germany, a leader in former manufacturing.

At Annai Latex Private Limited we offer a wide range of products:
• Sterile Latex Surgical Gloves – Pre-Powdered
• Sterile Latex Surgical Gloves – Powder Free
• Latex Examination Gloves – Pre-Powdered
• Latex Examination Gloves – Powder Free
• Nitrile Examination Gloves – Powder Free

We are committed to providing our customers with high-quality medical gloves and to achieving total customer satisfaction by adopting continuous-improvement methods and delivering products of excellent and consistent quality. We aim to ensure complete reliability and on-time delivery with customer service excellence across our full range of products.

Our Strengths
• Fully automated dipping line with online polymer coating for powder-free gloves
• Robotic stripping facility in production
• In-house facilities for printing, packing, and sterilization

Our Brands
• Smart Hands – Sterile Latex Surgical Gloves
• Handy Hands – Nitrile Examination Gloves
• Glove Hive – Latex Examination Gloves | Pre-Powdered | Powder Free

Role Description
This is a full-time, on-site role located in Tamil Nadu, India for an Assistant Purchasing Manager at Annai Latex Private Limited. The Assistant Purchasing Manager will be responsible for managing and overseeing all aspects of the procurement process. This includes identifying purchasing needs, conducting market research, negotiating with suppliers, and ensuring that goods and services are delivered on time. Additional responsibilities include maintaining supplier relationships, managing purchase orders, and ensuring compliance with company policies and budgetary constraints.

Qualifications
• Experience in the relevant industry in procurement, supplier management, and negotiation
• Understanding of inventory management and logistics coordination
• Analytical and problem-solving skills to assess market conditions and supplier performance
• Strong verbal and written communication skills
• Ability to work independently and as part of a team
• Knowledge of ERP systems and procurement software is a plus
• Bachelor's degree in Business Administration, Supply Chain Management, or a related field

Send your resume to hr@annailatex.com.

Posted 2 days ago

Apply

0 years

0 Lacs

Pune, Maharashtra, India

On-site

Required Skills:
  • Python
  • Processing of large quantities of text documents
  • Extraction of text from Office and PDF documents
  • Input JSON to an API, output JSON from an API
  • NiFi (or a similar compatible technology)
  • Basic understanding of AI/ML concepts
  • Database/search engine/SOLR skills
  • SQL – build queries to analyze, create, and update databases
  • Understanding of the basics of hybrid search
  • Experience working with terabytes (TB) of data
  • Basic OpenML/Python/Azure knowledge
  • Scripting knowledge/experience in an Azure environment to automate
  • Cloud systems experience related to search and databases

Platforms: Databricks, Snowflake, ESRI ArcGIS/SDE, new GenAI app being developed

Scope of work:
  1. Ingest TB of data from multiple sources identified by the Ingestion Lead
  2. Optimize data pipelines to improve data processing speed and data availability
  3. Make data available for end users from several hundred LAN and SharePoint areas
  4. Monitor data pipelines daily and fix issues related to scripts, platforms, and ingestion
  5. Work closely with the Ingestion Lead and vendor on issues related to data ingestion

Technical skills demonstrated:
  1. SOLR – backend database
  2. NiFi – data movement
  3. PySpark – data processing
  4. Hive & Oozie – job monitoring
  5. Querying – SQL, HQL, and SOLR querying
  6. SQL
  7. Python

Posted 2 days ago

Apply

0 years

0 Lacs

Bengaluru, Karnataka, India

On-site

It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business.

  • Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open-source tools and data processing frameworks.
  • Hands-on experience with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, the Hadoop platform, Hive, Presto, Druid, and Airflow.
  • Deep understanding of BigQuery architecture, best practices, and performance optimization.
  • Proficiency in LookML for building data models and metrics.
  • Experience with Dataproc for running Hadoop/Spark jobs on GCP, including configuring and optimizing Dataproc clusters.
  • Offer system support as part of a support rotation with other team members.
  • Operationalize open-source data-analytic tools for enterprise use.
  • Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification.
  • Understand and follow the company development lifecycle to develop, deploy, and deliver solutions.

Minimum Qualifications
  • Bachelor's degree in Computer Science, CIS, or a related field
  • Experience on projects involving the implementation of software development life cycles (SDLC)

GCP DATA ENGINEER

If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page, or create an account to set up email alerts as new job postings become available that meet your interest!

Posted 2 days ago

Apply

5.0 - 8.0 years

0 Lacs

Pune, Maharashtra, India

On-site

We are seeking a highly experienced Senior Data Software Engineer to join our dynamic team and tackle challenging projects that will enhance your skills and career. As a Senior Engineer, your contributions will be critical in designing and implementing Data solutions across a variety of projects. The ideal candidate will possess deep experience in Big Data and associated technologies, with a strong emphasis on Apache Spark, Python, Azure, and AWS.

Responsibilities
  • Develop and execute end-to-end Data solutions to meet complex business needs
  • Work collaboratively with interdisciplinary teams to comprehend project needs and deliver superior software solutions
  • Apply your expertise in Apache Spark, Python, Azure, and AWS to create scalable and efficient data processing systems
  • Maintain and enhance the performance, security, and scalability of Data applications
  • Keep abreast of industry trends and technological advancements to foster continuous improvement in our development practices

Requirements
  • 5-8 years of direct experience in Data and related technologies
  • Advanced knowledge and hands-on experience with Apache Spark
  • High-level proficiency with Hadoop and Hive
  • Proficiency in Python
  • Prior experience with AWS and Azure native Cloud data services

Technologies: Hadoop, Hive

Posted 2 days ago

Apply

Exploring Hive Jobs in India

Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
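
As a concrete illustration, here is a minimal HiveQL sketch (table, column, and path names are hypothetical) of what day-to-day Hive work looks like: defining a table over files already sitting in distributed storage, then querying it with SQL-like syntax that Hive compiles into distributed jobs.

    -- Define an external table over raw tab-separated files in HDFS
    CREATE EXTERNAL TABLE IF NOT EXISTS page_views (
        user_id   STRING,
        url       STRING,
        view_time TIMESTAMP
    )
    ROW FORMAT DELIMITED FIELDS TERMINATED BY '\t'
    STORED AS TEXTFILE
    LOCATION '/data/raw/page_views';

    -- Query it like a SQL table; Hive runs this as a distributed job
    SELECT url, COUNT(*) AS views
    FROM page_views
    GROUP BY url
    ORDER BY views DESC
    LIMIT 10;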

Top Hiring Locations in India

  1. Bangalore
  2. Hyderabad
  3. Pune
  4. Mumbai
  5. Delhi

These cities are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.

Average Salary Range

The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.

Career Path

Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.

Related Skills

Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
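
Because ETL with Hive comes up so often alongside these skills, here is a hedged sketch (all table and column names are hypothetical) of a typical load step: moving cleaned rows from a staging table into a date-partitioned, columnar (ORC) table using dynamic partitioning.

    -- Curated, date-partitioned table stored in columnar ORC format
    CREATE TABLE IF NOT EXISTS sales_curated (
        order_id STRING,
        amount   DOUBLE
    )
    PARTITIONED BY (order_date STRING)
    STORED AS ORC;

    -- Allow Hive to derive partition values from the data itself
    SET hive.exec.dynamic.partition.mode = nonstrict;

    INSERT OVERWRITE TABLE sales_curated PARTITION (order_date)
    SELECT order_id,
           CAST(amount AS DOUBLE) AS amount,
           to_date(order_ts) AS order_date  -- partition column goes last
    FROM sales_staging
    WHERE amount IS NOT NULL;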

Interview Questions

  • What is Hive and how does it differ from traditional databases? (basic)
  • Explain the difference between HiveQL and SQL. (medium)
  • How do you optimize Hive queries for better performance? (advanced)
  • What are the different types of tables supported in Hive? (basic)
  • Can you explain the concept of partitioning in Hive tables? (medium)
  • What is the significance of metastore in Hive? (basic)
  • How does Hive handle schema evolution? (advanced)
  • Explain the use of SerDe in Hive. (medium)
  • What are the various file formats supported by Hive? (basic)
  • How do you troubleshoot performance issues in Hive queries? (advanced)
  • Describe the process of joining tables in Hive. (medium)
  • What is dynamic partitioning in Hive and when is it used? (advanced)
  • How can you schedule jobs in Hive? (medium)
  • Discuss the differences between bucketing and partitioning in Hive; see the sketch after this list. (advanced)
  • How do you handle null values in Hive? (basic)
  • Explain the role of the Hive execution engine in query processing. (medium)
  • Can you give an example of a complex Hive query you have written? (advanced)
  • What is the purpose of the Hive metastore? (basic)
  • How does Hive support ACID transactions? (medium)
  • Discuss the advantages and disadvantages of using Hive for data processing. (advanced)
  • How do you secure data in Hive? (medium)
  • What are the limitations of Hive? (basic)
  • Explain the concept of bucketing in Hive and when it is used. (medium)
  • How do you handle schema evolution in Hive? (advanced)
  • Discuss the role of Hive in the Hadoop ecosystem. (basic)
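
To illustrate the partitioning-versus-bucketing question above, here is a minimal HiveQL sketch (table and column names are hypothetical). Partitioning splits data into directories by column value so queries can prune entire directories; bucketing hashes rows into a fixed number of files, which helps with sampling and bucketed map-side joins.

    -- Partitioned table: each log_date value becomes its own directory,
    -- so a filter like WHERE log_date = '2024-01-01' reads only that directory
    CREATE TABLE logs_partitioned (
        message STRING
    )
    PARTITIONED BY (log_date STRING);

    -- Bucketed table: rows are hashed on user_id into 32 files,
    -- enabling efficient sampling and bucketed map-side joins
    CREATE TABLE users_bucketed (
        user_id BIGINT,
        name    STRING
    )
    CLUSTERED BY (user_id) INTO 32 BUCKETS;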

Closing Remark

As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click

Download Now

Download the Mobile App

Instantly access job listings, apply easily, and track applications.

Featured Companies