12.0 - 17.0 years
3 - 7 Lacs
Kolkata
Work from Office
Project Role : Data Management Practitioner Project Role Description : Maintain the quality and compliance of an organization's data assets. Design and implement data strategies, ensuring data integrity and enforcing governance policies. Establish protocols to handle data, safeguard sensitive information, and optimize data usage within the organization. Design and advise on data quality rules and set up effective data compliance policies. Must have skills : Data Architecture Principles Good to have skills : NA Minimum 12 year(s) of experience is required Educational Qualification : Any graduate Summary : As a Data Management Practitioner, you will be responsible for maintaining the quality and compliance of an organization's data assets. Your role involves designing and implementing data strategies, ensuring data integrity, enforcing governance policies, and optimizing data usage within the organization. Roles & Responsibilities: - Expected to be an SME - Collaborate with and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Design and advise on data quality rules - Set up effective data compliance policies - Ensure data integrity and enforce governance policies - Optimize data usage within the organization Professional & Technical Skills: - Must To Have Skills: Proficiency in Data Architecture Principles - Strong understanding of data management best practices - Experience in designing and implementing data strategies - Knowledge of data governance and compliance policies - Ability to optimize data usage for organizational benefit Additional Information: - The candidate should have a minimum of 12 years of experience in Data Architecture Principles - This position is based at our Kolkata office - A graduate degree in any discipline is required Qualification : Any graduate
Posted 1 week ago
15.0 - 20.0 years
5 - 9 Lacs
Navi Mumbai
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Collibra Data Quality & Observability Good to have skills : Databricks Unified Data Analytics Platform Minimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As an Application Developer, you will design, build, and configure applications to meet business process and application requirements. A typical day involves collaborating with various teams to understand their needs, developing solutions that align with business objectives, and ensuring that applications are functioning optimally. You will engage in problem-solving activities, participate in team meetings, and contribute to the overall success of projects by leveraging your expertise in application development. Roles & Responsibilities: - Expected to be an SME. - Collaborate with and manage the team to perform. - Responsible for team decisions. - Engage with multiple teams and contribute to key decisions. - Provide solutions to problems for their immediate team and across multiple teams. - Mentor junior professionals to enhance their skills and knowledge. - Continuously assess and improve application performance and user experience. Professional & Technical Skills: - Must To Have Skills: Proficiency in Collibra Data Quality & Observability. - Good To Have Skills: Experience with Databricks Unified Data Analytics Platform. - Strong understanding of data governance principles and practices. - Experience in application development lifecycle and methodologies. - Proficient in troubleshooting and resolving application issues. - Ability to work collaboratively in a team-oriented environment. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Collibra Data Quality & Observability. - This position is based in Mumbai. - 15 years of full-time education is required. Qualification : 15 years full time education
Posted 1 week ago
7.0 - 12.0 years
0 - 2 Lacs
Bengaluru
Work from Office
Technical Manager Java & Big Data | Telecom Domain | Immediate Joiners Preferred Company: Subex Ltd. Location: Bangalore (Hybrid Work Model) Experience: 8 to 12 Years Employment Type: Full-Time Industry: Telecom / Software Development Notice Period: Immediate to 30 Days Preferred Job Summary: Subex is looking for a Technical Manager (Java & Big Data) to join our dynamic R&D team in Bangalore. If you are an experienced Java professional with strong leadership abilities and a passion for cutting-edge telecom solutions, this is your opportunity to lead impactful projects and drive innovation. Key Responsibilities: Lead and mentor a team of developers working on Java-based telecom solutions. Architect, design, and develop scalable and high-performance systems using Java and Spring Boot. Manage end-to-end Big Data integration projects involving tools like Kafka, Spark, Hive, etc. Collaborate with cross-functional teams including product managers, QA, DevOps, and frontend engineers. Implement and maintain CI/CD pipelines using Jenkins, Helm, ArgoCD. Ensure timely project delivery, efficient resource allocation, and risk mitigation. Promote Agile methodologies and foster a culture of continuous improvement. Stay updated on the latest cloud and big data trends to ensure technology alignment. Required Skills: Core Technologies: Java, Spring Boot Big Data Ecosystem: Kafka, Hive, Apache Spark (or NiFi preferred) Database: PostgreSQL Frontend: Angular (basic exposure acceptable) CI/CD Tools: Jenkins, Helm, ArgoCD Cloud Platforms: Azure or GCP Agile Methodologies: Strong understanding and hands-on experience Domain Expertise: Telecom domain experience is a strong plus Leadership: Minimum 3 years in a people management role with excellent team leadership and communication skills Candidate Profile: Total experience of 8+ years in software development. Proven 3+ years in leading technical teams or managing engineering resources. Passion for solving complex problems and working in a fast-paced, dynamic environment. Strong interpersonal and communication skills, both verbal and written. Why Join Us? Work with cutting-edge technologies in the telecom domain. Be part of an innovation-driven, inclusive, and collaborative environment. Opportunity to lead key digital transformation projects. Flexible hybrid work model and career advancement opportunities. Interested? Let’s Connect! Send your updated resume to: Rahul K – rahul.k@subex.com Apply now and be part of the future of telecom innovation! Key Skills: Java, Spring Boot, Big Data, Apache Kafka, Apache Spark, Hive, PostgreSQL, Angular, Jenkins, Helm, ArgoCD, Azure, GCP, CI/CD, Agile, Scrum, Technical Leadership, Telecom Domain, Software Development, Team Management
Posted 1 week ago
7.0 - 12.0 years
9 - 13 Lacs
Pune
Work from Office
Project Role : Data Platform Engineer Project Role Description : Assists with the data platform blueprint and design, encompassing the relevant data platform components. Collaborates with the Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : NA Minimum 7.5 year(s) of experience is required Educational Qualification : Engineering graduate, preferably Computer Science, with 15 years of full time education Summary : As a Data Platform Engineer, you will be responsible for assisting with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. Your typical day will involve collaborating with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. Roles & Responsibilities: - Assist with the blueprint and design of the data platform components using Databricks Unified Data Analytics Platform. - Collaborate with Integration Architects and Data Architects to ensure cohesive integration between systems and data models. - Develop and maintain data pipelines and ETL processes using Databricks Unified Data Analytics Platform. - Design and implement data security and access controls for the data platform. - Troubleshoot and resolve issues related to the data platform and data pipelines. Professional & Technical Skills: - Must To Have Skills: Experience with Databricks Unified Data Analytics Platform. - Must To Have Skills: Strong understanding of data modeling and database design principles. - Good To Have Skills: Experience with cloud-based data platforms such as AWS or Azure. - Good To Have Skills: Experience with data security and access controls. - Good To Have Skills: Experience with data visualization tools such as Tableau or Power BI. Additional Information: - The candidate should have a minimum of 7.5 years of experience in Databricks Unified Data Analytics Platform. - The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering impactful data-driven solutions. - This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. - Return to office (RTO) is mandatory for 2-3 days a week, and the role requires working in one of two shifts (Shift A: 10:00am to 8:00pm IST; Shift B: 12:30pm to 10:30pm IST). Qualification : Engineering graduate, preferably Computer Science, with 15 years of full time education
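As an illustration of the pipeline and ETL work this role describes, below is a minimal PySpark sketch of a batch curation step, assuming a Databricks-style environment with Delta Lake available; the paths, columns, and table layout are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

# Minimal batch ETL sketch: read raw orders, standardise columns, write a curated Delta table.
# Assumes Delta Lake is available on the cluster; source/target paths are placeholders.
spark = SparkSession.builder.appName("orders-curation").getOrCreate()

raw = spark.read.json("/mnt/raw/orders/")  # hypothetical raw landing zone

curated = (
    raw.withColumn("order_ts", F.to_timestamp("order_ts"))
       .withColumn("order_date", F.to_date("order_ts"))
       .withColumn("amount", F.col("amount").cast("double"))
       .dropDuplicates(["order_id"])
       .filter(F.col("amount") > 0)
)

(curated.write
        .format("delta")
        .mode("overwrite")
        .partitionBy("order_date")
        .save("/mnt/curated/orders/"))  # hypothetical curated zone
```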
Posted 1 week ago
8.0 - 11.0 years
15 - 19 Lacs
Kolkata
Work from Office
Project Role : Technology Architect Project Role Description : Review and integrate all application requirements, including functional, security, integration, performance, quality and operations requirements. Review and integrate the technical architecture requirements. Provide input into final decisions regarding hardware, network products, system software and security. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Cloud Data Architecture Minimum 5 year(s) of experience is required Educational Qualification : BE or MCA Summary : As a Technology Architect, you will be responsible for reviewing and integrating all application requirements, including functional, security, integration, performance, quality, and operations requirements. Your typical day will involve reviewing and integrating technical architecture requirements, providing input into final decisions regarding hardware, network products, system software, and security, and utilizing Databricks Unified Data Analytics Platform to deliver impactful data-driven solutions. Roles & Responsibilities: 6 or more years of experience in implementing data ingestion pipelines from multiple sources and creating end-to-end data pipelines on the Databricks platform. 2 or more years of experience using Python, PySpark or Scala. Experience in Databricks on cloud. Experience in any of AWS, Azure or GCP; ETL, data engineering, data cleansing and insertion into a data warehouse. Must have skills like Databricks, Cloud Data Architecture, Python Programming Language, Data Engineering. Professional Attributes: Excellent writing, communication and presentation skills. Eagerness to learn and develop self on an ongoing basis. Excellent client facing and interpersonal skills. Qualification : BE or MCA
Posted 1 week ago
5.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : PySpark Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : Engineering graduate, preferably Computer Science, with 15 years of full time education Summary : As an Application Lead, you will be responsible for leading the effort to design, build, and configure applications, acting as the primary point of contact. Your typical day will involve working with PySpark and collaborating with cross-functional teams to deliver high-quality solutions. Roles & Responsibilities: - Lead the design, development, and deployment of PySpark-based applications, ensuring high-quality solutions are delivered on time and within budget. - Collaborate with cross-functional teams, including business analysts, data scientists, and software developers, to ensure that applications meet business requirements and are scalable and maintainable. - Act as the primary point of contact for all application-related issues, providing technical guidance and support to team members and stakeholders. - Ensure that applications are designed and developed in accordance with industry best practices, including coding standards, testing methodologies, and deployment processes. - Stay up-to-date with the latest trends and technologies in PySpark and related fields, and apply this knowledge to improve the quality and efficiency of application development. Professional & Technical Skills: - Must To Have Skills: Strong experience in PySpark. - Good To Have Skills: Experience with other big data technologies such as Hadoop, Hive, and Spark. - Solid understanding of software development principles, including object-oriented programming, design patterns, and agile methodologies. - Experience with database technologies such as SQL and NoSQL. - Experience with cloud platforms such as AWS or Azure. - Strong problem-solving and analytical skills, with the ability to troubleshoot complex issues and provide effective solutions. Additional Information: - The candidate should have a minimum of 5 years of experience in PySpark. - The ideal candidate will possess a strong educational background in computer science or a related field, along with a proven track record of delivering high-quality software solutions. - This position is based at our Bangalore, Hyderabad, Chennai and Pune offices. - Return to office (RTO) is mandatory for 2-3 days a week, and the role requires working in one of two shifts (Shift A: 10:00am to 8:00pm IST; Shift B: 12:30pm to 10:30pm IST). Qualification : Engineering graduate, preferably Computer Science, with 15 years of full time education
Posted 1 week ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Apache Hadoop Good to have skills : NA Minimum 3 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. Your typical day will involve collaborating with team members to develop innovative solutions and enhance application functionality. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Develop and implement software solutions to meet business requirements. - Collaborate with cross-functional teams to enhance application functionality. - Conduct code reviews and provide technical guidance to team members. - Troubleshoot and debug applications to ensure optimal performance. - Stay updated on industry trends and technologies to drive continuous improvement. Professional & Technical Skills: - Must To Have Skills: Proficiency in Apache Hadoop. - Strong understanding of distributed computing principles. - Experience with data processing frameworks like MapReduce and Spark. - Hands-on experience in designing and implementing scalable data pipelines. - Knowledge of Hadoop ecosystem components such as HDFS, YARN, and Hive. Additional Information: - The candidate should have a minimum of 3 years of experience in Apache Hadoop. - This position is based at our Hyderabad office. - 15 years of full-time education is required. Qualification : 15 years full time education
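For context on the Hadoop ecosystem skills listed above, here is a small, hedged PySpark example of querying a Hive table and writing the result back to HDFS; the database, table, and path are hypothetical, and Hive support is assumed to be configured on the cluster.

```python
from pyspark.sql import SparkSession

# Sketch: aggregate a Hive table with Spark SQL and persist the result to HDFS as Parquet.
# Assumes the cluster has a Hive metastore configured; names below are placeholders.
spark = (SparkSession.builder
         .appName("daily-clickstream-rollup")
         .enableHiveSupport()
         .getOrCreate())

daily_counts = spark.sql("""
    SELECT event_date, page, COUNT(*) AS views
    FROM analytics.clickstream          -- hypothetical Hive table
    GROUP BY event_date, page
""")

daily_counts.write.mode("overwrite").parquet("hdfs:///data/rollups/daily_page_views")
```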
Posted 1 week ago
3.0 - 8.0 years
5 - 9 Lacs
Hyderabad
Work from Office
Project Role : Application Developer Project Role Description : Design, build and configure applications to meet business process and application requirements. Must have skills : Databricks Unified Data Analytics Platform Good to have skills : Python (Programming Language) Minimum 3 year(s) of experience is required Educational Qualification : Any Graduation Summary : As an Application Developer, you will be responsible for designing, building, and configuring applications to meet business process and application requirements. You will play a crucial role in the development and implementation of software solutions. Roles & Responsibilities: - Expected to perform independently and become an SME. - Active participation/contribution in team discussions is required. - Contribute to providing solutions to work-related problems. - Develop and implement software solutions based on business requirements. - Collaborate with team members to ensure successful project delivery. - Conduct code reviews and provide feedback to improve code quality. - Stay updated with the latest technologies and trends in software development. - Assist in troubleshooting and resolving technical issues. Professional & Technical Skills: - Must To Have Skills: Proficiency in Databricks Unified Data Analytics Platform. - Good To Have Skills: Experience with Python (Programming Language). - Strong understanding of data analytics and machine learning concepts. - Experience in building and optimizing data pipelines. - Knowledge of cloud platforms like AWS or Azure. - Familiarity with Agile development methodologies. Additional Information: - The candidate should have a minimum of 3 years of experience in Databricks Unified Data Analytics Platform. - This position is based at our Hyderabad office. - Graduation in any discipline is required. Qualification : Any Graduation
Posted 1 week ago
5.0 - 10.0 years
10 - 14 Lacs
Hyderabad
Work from Office
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : PySpark Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : 15 years full time education Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. Your day will involve overseeing the application development process and ensuring successful project delivery. Roles & Responsibilities: - Expected to be an SME - Collaborate with and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the application development process - Ensure successful project delivery - Mentor and guide team members Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark - Strong understanding of big data processing - Experience with data pipelines and ETL processes - Hands-on experience in building scalable applications - Knowledge of cloud platforms and services Additional Information: - The candidate should have a minimum of 5 years of experience in PySpark - This position is based at our Hyderabad office - 15 years of full-time education is required Qualification : 15 years full time education
Posted 1 week ago
5.0 - 10.0 years
10 - 14 Lacs
Pune
Work from Office
Project Role : Application Lead Project Role Description : Lead the effort to design, build and configure applications, acting as the primary point of contact. Must have skills : PySpark Good to have skills : NA Minimum 5 year(s) of experience is required Educational Qualification : An Engineering graduate, preferably Computer Science, with 15 years of full time education Summary : As an Application Lead, you will lead the effort to design, build, and configure applications, acting as the primary point of contact. You will oversee the application development process and ensure successful project delivery. Roles & Responsibilities: - Expected to be an SME - Collaborate with and manage the team to perform - Responsible for team decisions - Engage with multiple teams and contribute to key decisions - Provide solutions to problems for their immediate team and across multiple teams - Lead the application development process - Ensure timely project delivery - Provide technical guidance and support to the team Professional & Technical Skills: - Must To Have Skills: Proficiency in PySpark - Strong understanding of big data processing - Experience with cloud platforms like AWS or Azure - Hands-on experience in designing and implementing scalable applications - Knowledge of data modeling and database management Additional Information: - The candidate should have a minimum of 5 years of experience in PySpark - This position is based at our Pune office - An Engineering graduate, preferably Computer Science, with 15 years of full time education is required Qualification : An Engineering graduate, preferably Computer Science, with 15 years of full time education
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
_VOIS Intro About _VOIS: _VOIS (Vodafone Intelligent Solutions) is a strategic arm of Vodafone Group Plc, creating value and enhancing quality and efficiency across 28 countries, and operating from 7 locations: Albania, Egypt, Hungary, India, Romania, Spain and the UK. Over 29,000 highly skilled individuals are dedicated to being Vodafone Group’s partner of choice for talent, technology, and transformation. We deliver the best services across IT, Business Intelligence Services, Customer Operations, Business Operations, HR, Finance, Supply Chain, HR Operations, and many more. Established in 2006, _VOIS has evolved into a global, multi-functional organisation, a Centre of Excellence for Intelligent Solutions focused on adding value and delivering business outcomes for Vodafone. _VOIS Centre Intro About _VOIS India: In 2009, _VOIS started operating in India and now has established global delivery centres in Pune, Bangalore and Ahmedabad. With more than 14,500 employees, _VOIS India supports global markets and group functions of Vodafone, and delivers best-in-class customer experience through multi-functional services in the areas of Information Technology, Networks, Business Intelligence and Analytics, Digital Business Solutions (Robotics & AI), Commercial Operations (Consumer & Business), Intelligent Operations, Finance Operations, Supply Chain Operations and HR Operations and more. Job Role Related Content (Role specific) Key Responsibilities: Design and Build Data Pipelines: Develop scalable data pipelines using AWS services like AWS Glue, Amazon Redshift, and S3. Create efficient ETL processes for data extraction, transformation, and loading into data warehouses and lakes. Build and manage applications using Python, SQL, Databricks, and various AWS technologies. Utilize QuickSight to create insightful data visualizations and dashboards. Quickly develop innovative Proof-of-Concept (POC) solutions to address emerging needs. Provide support and manage the ongoing operation of data services. Automate repetitive tasks and build reusable frameworks to improve efficiency. Work with teams to design and develop data products that support marketing and other business functions. Ensure data services are reliable, maintainable, and seamlessly integrated with existing systems. Required Skills And Experience: Bachelor’s degree in Computer Science, Engineering, or a related field. Technical Skills: Proficiency in Python with Pandas, PySpark. Hands-on experience with AWS services including S3, Glue, Lambda, API Gateway, and SQS. Knowledge of data processing tools like Spark, Hive, Kafka, and Airflow. Experience with batch job scheduling and managing data dependencies. Experience with QuickSight or similar tools. Familiarity with DevOps automation tools like GitLab, Bitbucket, Jenkins, and Maven. An understanding of Delta would be an added advantage. _VOIS Equal Opportunity Employer Commitment India: _VOIS is proud to be an Equal Employment Opportunity Employer. We celebrate differences and we welcome and value diverse people and insights. We believe that being authentically human and inclusive powers our employees’ growth and enables them to create a positive impact on themselves and society. We do not discriminate based on age, colour, gender (including pregnancy, childbirth, or related medical conditions), gender identity, gender expression, national origin, race, religion, sexual orientation, status as an individual with a disability, or other applicable legally protected characteristics.
As a result of living and breathing our commitment, our employees have helped us get certified as a Great Place to Work in India for four years running. We have also been highlighted among the Top 5 Best Workplaces for Diversity, Equity, and Inclusion, Top 10 Best Workplaces for Women, Top 25 Best Workplaces in IT & IT-BPM and 14th Overall Best Workplaces in India by the Great Place to Work Institute in 2023. These achievements position us among a select group of trustworthy and high-performing companies which put their employees at the heart of everything they do. By joining us, you are part of our commitment. We look forward to welcoming you into our family, which represents a variety of cultures, backgrounds, perspectives, and skills! Apply now, and we’ll be in touch!
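To make the pipeline responsibilities above concrete, the following is a minimal AWS Glue PySpark job skeleton, assuming a Glue Data Catalog source and an S3 target; the database, table, and bucket names are hypothetical.

```python
import sys
from awsglue.utils import getResolvedOptions
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from pyspark.context import SparkContext

# Standard Glue job bootstrap; JOB_NAME is supplied by the Glue runtime.
args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a catalogued source table (hypothetical database/table names),
# drop rows missing key fields, and land the result in S3 as Parquet.
source = glue_context.create_dynamic_frame.from_catalog(
    database="marketing_raw", table_name="campaign_events"
)
clean_df = source.toDF().dropna(subset=["event_id", "event_ts"])

glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(clean_df, glue_context, "clean_events"),
    connection_type="s3",
    connection_options={"path": "s3://example-bucket/curated/campaign_events/"},
    format="parquet",
)
job.commit()
```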
Posted 1 week ago
5.0 - 10.0 years
25 - 35 Lacs
Hyderabad, Pune, Bengaluru
Work from Office
Job Description - Data Engineer/Lead Required Minimum Qualifications: Bachelor's degree in Computer Science, CIS, or related field 5-10 years of IT experience in software engineering or related field Experience on project(s) involving the implementation of software development life cycles (SDLC) Primary Skills: PySpark, SQL, GCP Ecosystem (BigQuery, Cloud Composer, Dataproc) Design and develop data-ingestion frameworks, real-time processing solutions, and data processing and transformation frameworks leveraging open source tools and data processing frameworks. Hands-on experience with technologies such as Kafka, Apache Spark (SQL, Scala, Java), Python, Hadoop Platform, Hive, Airflow Experience in GCP Cloud Composer, BigQuery, Dataproc Offer system support as part of a support rotation with other team members. Operationalize open-source data-analytic tools for enterprise use. Ensure data governance policies are followed by implementing or validating data lineage, quality checks, and data classification. Understand and follow the company development lifecycle to develop, deploy and deliver.
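As a hedged illustration of the primary skills named here, the sketch below shows a PySpark job of the kind that might run on Dataproc and load results into BigQuery; it assumes the spark-bigquery connector is available on the cluster, and the project, dataset, and bucket names are placeholders.

```python
from pyspark.sql import SparkSession, functions as F

# Sketch of a Dataproc-style PySpark job writing to BigQuery.
# Assumes the spark-bigquery connector jar is on the cluster; all names are hypothetical.
spark = SparkSession.builder.appName("gcs-to-bigquery").getOrCreate()

events = spark.read.parquet("gs://example-datalake/events/")  # hypothetical GCS path

daily = (events
         .withColumn("event_date", F.to_date("event_ts"))
         .groupBy("event_date", "event_type")
         .count())

(daily.write
      .format("bigquery")
      .option("table", "example-project.analytics.daily_event_counts")
      .option("temporaryGcsBucket", "example-dataproc-temp")
      .mode("overwrite")
      .save())
```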
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
The Digital Solutions and Innovation (DSI) team within the Citi Internal Audit Innovation function is looking for a Business Analytics Analyst (Officer) to join the Internal Audit Analytics Team. The Analytics Team works with members of Internal Audit to identify opportunities, design, develop and implement analytics in support of the performance of audit activities, along with automation activities to promote efficiencies and expand coverage. The candidate must be proficient in the development and use of analytics technology and tools to provide analytical insight and automated solutions to enhance audit efficiency and effectiveness, and have functional knowledge of banking processes and related risks and controls. Key Responsibilities: Participating in the innovative use of audit analytics through direct participation in all phases of audits Supporting the definition of data needs, designing, and executing audit analytics during audits in accordance with the audit methodology and professional standards. Supports execution of automated routines to help focus audit testing. Executes innovation solutions and pre-defined analytics in accordance with standard A&A procedures. Assisting audit teams in performing moderately complex audits related to a specific area of the bank: Consumer Banking, Investment Banking, Risk, Finance, Compliance, and/or Technology. Provide support to other members of the Analytics and Automation team, and the wider Digital Solutions and Innovation team. Strong verbal and written communication skills to clearly articulate analytics requirements and results. Develop professional relationships with audit teams to assist in the definition of analytics and automation opportunities. Develop effective working relationships with technology and business teams of the area being audited, to facilitate understanding of processes and sourcing of data. Promoting continuous improvement in all aspects of audit automation activities (e.g., technical environment, software, operating procedures). Key Qualifications And Competencies: At least 3 years of business / audit analyst experience in providing analytical techniques and automated solutions to business needs. Work experience in a global environment and in a large company. Excellent technical, programming and database skills Excellent analytical ability to understand business processes and related risks and controls and develop innovative audit analytics based upon audit needs. Strong interpersonal and multicultural skills for interfacing with all levels of internal and external audit and management. Self-driven, problem-solving approach. Understanding of procedures and adherence to them to maintain the quality and security of processes. Detail-oriented approach, consistently performing diligent self-reviews of work product, and attention to data completeness and accuracy. Data literate, with the ability to understand and effectively communicate what data means to technical and non-technical stakeholders. Proficiency in one or more of the following technical skills is required: SQL Python Hadoop ecosystem (Hive, Sqoop, PySpark, etc.) Alteryx Proficiency in at least one of the following data visualization tools is a plus: Tableau MicroStrategy Cognos Experience in the following areas would be a plus: Business Intelligence including use of statistics, data modelling, data mining and predictive analytics. Application of data science tools and techniques to advance the insights obtained through the interrogation of data.
Working with unstructured data such as PDF files. Banking Businesses (i.e., Institutional Clients Group, Consumer, Corporate Functions) or areas of expertise (i.e., Anti-Money Laundering, Regulatory Reporting) Big Data analysis, including the use of dedicated big data tools like HUE and Hive Project Management / Solution Development Life Cycle Exposure to process mining software such as Celonis What we offer: A chance to develop in a highly innovative environment where you can use the newest technologies in a top-quality organizational culture. Professional development in a truly global environment Inclusive and friendly corporate culture where gender diversity and equality is widely recognized A supportive workplace for professionals returning to the office from childcare leave An enjoyable and challenging learning path, which leads to a deep understanding of Citi’s products and services. Yearly discretionary bonus and competitive social benefits (private medical care, multisport, life insurance, award-winning pension scheme, holiday allowance, flexible working schedule and others) This job description provides a high-level review of the types of work performed. Other job-related duties may be assigned as required. ------------------------------------------------------ Job Family Group: Decision Management ------------------------------------------------------ Job Family: Business Analysis ------------------------------------------------------ Time Type: ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
End-to-end development and delivery of MIS reports and dashboards supporting credit card and lending portfolio acquisition, early engagement, existing customer management, rewards, retention and attrition. Partner with business stakeholders to understand requirements and deliver actionable insights through automated reporting solutions. Maintain and optimize existing SAS-based reporting processes while leading the migration to Python/PySpark on Big Data platforms. Design and build interactive dashboards in Tableau for senior leadership and regulatory reporting. Build and implement an automated audit framework to ensure data accuracy, completeness and consistency across the entire reporting life cycle. Collaborate with Data Engineering and IT teams to leverage data lakes and enterprise data platforms. Mentor junior analysts and contribute to knowledge sharing across teams. Support ad-hoc analysis and audits with quick turnaround and attention to data integrity. Qualifications: Experience in MIS reporting, data analytics, or BI in Banking/Financial Services, with a strong focus on Credit Cards. Proficiency in SAS for data extraction, manipulation, and automation. Advanced skills in Python and PySpark, particularly in big data environments (e.g., Hadoop, Hive, Databricks). Expertise in Tableau for dashboard design and data storytelling. ------------------------------------------------------ Job Family Group: Management Development Programs ------------------------------------------------------ Job Family: Undergraduate ------------------------------------------------------ Time Type: ------------------------------------------------------ Citi is an equal opportunity employer, and qualified candidates will receive consideration without regard to their race, color, religion, sex, sexual orientation, gender identity, national origin, disability, status as a protected veteran, or any other characteristic protected by law. If you are a person with a disability and need a reasonable accommodation to use our search tools and/or apply for a career opportunity review Accessibility at Citi. View Citi’s EEO Policy Statement and the Know Your Rights poster.
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Company Description Blend is a premier AI services provider, committed to co-creating meaningful impact for its clients through the power of data science, AI, technology, and people. With a mission to fuel bold visions, Blend tackles significant challenges by seamlessly aligning human expertise with artificial intelligence. The company is dedicated to unlocking value and fostering innovation for its clients by harnessing world-class people and data-driven strategy. We believe that the power of people and AI can have a meaningful impact on your world, creating more fulfilling work and projects for our people and clients. For more information, visit www.blend360.com. Job Description We are looking for a skilled Test Engineer with experience in automated testing, rollback testing, and continuous integration environments. You will be responsible for ensuring the quality and reliability of our software products through automated testing strategies and robust test frameworks. Design and execute end-to-end test strategies for data pipelines, ETL/ELT jobs, and database systems. Validate data quality, completeness, transformation logic, and integrity across distributed data systems (e.g., Hadoop, Spark, Hive). Develop Python-based automated test scripts to validate data flows, schema validations, and business rules. Perform complex SQL queries to verify large datasets across staging and production environments. Identify data issues and work closely with data engineers to resolve discrepancies. Contribute to test data management, environment setup, and regression testing processes. Work collaboratively with data engineers, business analysts, and QA leads to ensure accurate and timely data delivery. Participate in sprint planning, reviews, and defect triaging as part of the Agile process. Qualifications 4+ years of experience in Data Testing, Big Data Testing, and/or Database Testing. Strong programming skills in Python for automation and scripting. Expertise in SQL for writing complex queries and validating large datasets. Experience with Big Data technologies such as Hadoop, Hive, Spark, HDFS, Kafka (any combination is acceptable). Hands-on experience with ETL/ELT testing and validating data transformations and pipelines. Exposure to cloud data platforms like AWS (Glue, S3, Redshift), Azure (Data Lake, Synapse), or GCP is a plus. Familiarity with test management and defect tracking tools like JIRA, TestRail, or Zephyr. Experience with CI/CD pipelines and version control (e.g., Git) is an advantage.
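By way of example, a minimal pytest-style data validation of the kind this role describes might look like the sketch below; the table names and checks are hypothetical, and PySpark with a Hive metastore is assumed as the execution engine.

```python
import pytest
from pyspark.sql import SparkSession, functions as F

# Minimal data-quality checks: row-count reconciliation between a staging and a target
# table, plus a null check on a key column. Table names are placeholders; Hive support
# is assumed to be configured on the cluster.
spark = SparkSession.builder.appName("dq-checks").enableHiveSupport().getOrCreate()


def test_row_counts_match():
    staging = spark.table("staging.orders").count()
    target = spark.table("warehouse.orders").count()
    assert staging == target, f"Row count mismatch: staging={staging}, target={target}"


def test_no_null_order_ids():
    nulls = (spark.table("warehouse.orders")
             .filter(F.col("order_id").isNull())
             .count())
    assert nulls == 0, f"{nulls} rows have a NULL order_id"
```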
Posted 1 week ago
7.0 - 12.0 years
9 - 14 Lacs
Pune, Hinjewadi
Work from Office
Job Summary Synechron is seeking an experienced and technically proficient Senior PySpark Data Engineer to join our data engineering team. In this role, you will be responsible for developing, optimizing, and maintaining large-scale data processing solutions using PySpark. Your expertise will support our organization's efforts to leverage big data for actionable insights, enabling data-driven decision-making and strategic initiatives. Software Requirements Required Skills: Proficiency in PySpark Familiarity with Hadoop ecosystem components (e.g., HDFS, Hive, Spark SQL) Experience with Linux/Unix operating systems Data processing tools like Apache Kafka or similar streaming platforms Preferred Skills: Experience with cloud-based big data platforms (e.g., AWS EMR, Azure HDInsight) Knowledge of Python (beyond PySpark), Java or Scala relevant to big data applications Familiarity with data orchestration tools (e.g., Apache Airflow, Luigi) Overall Responsibilities Design, develop, and optimize scalable data processing pipelines using PySpark. Collaborate with data engineers, data scientists, and business analysts to understand data requirements and deliver solutions. Implement data transformations, aggregations, and extraction processes to support analytics and reporting. Manage large datasets in distributed storage systems, ensuring data integrity, security, and performance. Troubleshoot and resolve performance issues within big data workflows. Document data processes, architectures, and best practices to promote consistency and knowledge sharing. Support data migration and integration efforts across varied platforms. Strategic Objectives: Enable efficient and reliable data processing to meet organizational analytics and reporting needs. Maintain high standards of data security, compliance, and operational durability. Drive continuous improvement in data workflows and infrastructure. Performance Outcomes & Expectations: Efficient processing of large-scale data workloads with minimum downtime. Clear, maintainable, and well-documented code. Active participation in team reviews, knowledge transfer, and innovation initiatives. Technical Skills (By Category) Programming Languages: Required: PySpark (essential); Python (needed for scripting and automation) Preferred: Java, Scala Databases/Data Management: Required: Experience with distributed data storage (HDFS, S3, or similar) and data warehousing solutions (Hive, Snowflake) Preferred: Experience with NoSQL databases (Cassandra, HBase) Cloud Technologies: Required: Familiarity with deploying and managing big data solutions on cloud platforms such as AWS (EMR), Azure, or GCP Preferred: Cloud certifications Frameworks and Libraries: Required: Spark SQL, Spark MLlib (basic familiarity) Preferred: Integration with streaming platforms (e.g., Kafka), data validation tools Development Tools and Methodologies: Required: Version control systems (e.g., Git), Agile/Scrum methodologies Preferred: CI/CD pipelines, containerization (Docker, Kubernetes) Security Protocols: Optional: Basic understanding of data security practices and compliance standards relevant to big data management Experience Requirements Minimum of 7+ years of experience in big data environments with hands-on PySpark development. Proven ability to design and implement large-scale data pipelines. Experience working with cloud and on-premises big data architectures. Preference for candidates with domain-specific experience in finance, banking, or related sectors.
Candidates with substantial related experience and strong technical skills in big data, even from different domains, are encouraged to apply. Day-to-Day Activities Develop, test, and deploy PySpark data processing jobs to meet project specifications. Collaborate in multi-disciplinary teams during sprint planning, stand-ups, and code reviews. Optimize existing data pipelines for performance and scalability. Monitor data workflows, troubleshoot issues, and implement fixes. Engage with stakeholders to gather new data requirements, ensuring solutions are aligned with business needs. Contribute to documentation, standards, and best practices for data engineering processes. Support the onboarding of new data sources, including integration and validation. Decision-Making Authority & Responsibilities: Identify performance bottlenecks and propose effective solutions. Decide on appropriate data processing approaches based on project requirements. Escalate issues that impact project timelines or data integrity. Qualifications Bachelor's degree in Computer Science, Information Technology, or related field. Equivalent experience considered. Relevant certifications are preferred: Cloudera, Databricks, AWS Certified Data Analytics, or similar. Commitment to ongoing professional development in data engineering and big data technologies. Demonstrated ability to adapt to evolving data tools and frameworks. Professional Competencies Strong analytical and problem-solving skills, with the ability to model complex data workflows. Excellent communication skills to articulate technical solutions to non-technical stakeholders. Effective teamwork and collaboration in a multidisciplinary environment. Adaptability to new technologies and emerging trends in big data. Ability to prioritize tasks effectively and manage time in fast-paced projects. Innovation mindset, actively seeking ways to improve data infrastructure and processes.
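To ground the PySpark and streaming requirements above, here is a hedged Structured Streaming sketch that consumes a Kafka topic and appends the parsed events to Parquet; the broker address, topic, schema, and paths are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F
from pyspark.sql.types import StructType, StructField, StringType, DoubleType

# Sketch: consume JSON events from Kafka with Spark Structured Streaming and append to Parquet.
# Assumes the Spark-Kafka integration package is available; all names are placeholders.
spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

schema = StructType([
    StructField("trade_id", StringType()),
    StructField("symbol", StringType()),
    StructField("price", DoubleType()),
])

raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")
       .option("subscribe", "trades")
       .load())

parsed = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("t"))
             .select("t.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/streams/trades")
         .option("checkpointLocation", "/data/checkpoints/trades")
         .outputMode("append")
         .start())
query.awaitTermination()
```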
Posted 1 week ago
5.0 - 10.0 years
0 Lacs
Pune, Bengaluru
Hybrid
Job Summary We are seeking a highly skilled Hadoop Developer / Lead Data Engineer to join our data engineering team based in Bangalore or Pune. The ideal candidate will have extensive experience with Hadoop ecosystem technologies and cloud-based big data platforms, particularly on Google Cloud Platform (GCP). This role involves designing, developing, and maintaining scalable data ingestion, processing, and transformation frameworks to support enterprise data needs. Minimum Qualifications Bachelor's degree in Computer Science, Computer Information Systems, or related technical field. 5-10 years of experience in software engineering or data engineering, with a strong focus on big data technologies. Proven experience in implementing software development life cycles (SDLC) in enterprise environments. Technical Skills & Expertise Big Data Technologies: Expertise in the Hadoop platform, Hive, and related ecosystem tools. Strong experience with Apache Spark (using SQL, Scala, and/or Java). Experience with real-time data streaming using Kafka. Programming Languages & Frameworks: Proficient in PySpark and SQL for data processing and transformation. Strong coding skills in Python. Cloud Technologies (Google Cloud Platform): Experience with BigQuery for data warehousing and analytics. Familiarity with Cloud Composer (Airflow) for workflow orchestration. Hands-on experience with Dataproc for managed Spark and Hadoop clusters. Responsibilities Design, develop, and implement scalable data ingestion and transformation pipelines using Hadoop and GCP services. Build real-time and batch data processing solutions leveraging Spark, Kafka, and related technologies. Ensure data quality, governance, and lineage by implementing automated validation and classification frameworks. Collaborate with cross-functional teams to deploy and operationalize data analytics tools at enterprise scale. Participate in production support and on-call rotations to maintain system reliability. Follow established SDLC practices to deliver high-quality, maintainable solutions. Preferred Qualifications Experience leading or mentoring data engineering teams. Familiarity with CI/CD pipelines and DevOps best practices for big data environments. Strong communication skills with an ability to collaborate across teams.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Our Purpose Mastercard powers economies and empowers people in 200+ countries and territories worldwide. Together with our customers, we’re helping build a sustainable economy where everyone can prosper. We support a wide range of digital payments choices, making transactions secure, simple, smart and accessible. Our technology and innovation, partnerships and networks combine to deliver a unique set of products and services that help people, businesses and governments realize their greatest potential. Title And Summary Senior Specialist, Product Management-2 Who is Mastercard? Mastercard is a global technology company in the payments industry. Our mission is to connect and power an inclusive, digital economy that benefits everyone, everywhere by making transactions safe, simple, smart, and accessible. Using secure data and networks, partnerships and passion, our innovations and solutions help individuals, financial institutions, governments, and businesses realize their greatest potential. Our decency quotient, or DQ, drives our culture and everything we do inside and outside of our company. With connections across more than 210 countries and territories, we are building a sustainable world that unlocks priceless possibilities for all. Overview The Senior Specialist of Product Management will report to the Vice President of Account Level Management (ALM) in Global Consumer Products & Processing responsible for client management and management of analytical solutions related to ALM. This Senior Specialist will partner with our internal stakeholders in Regional Teams and our Product Management Team to manage on-going strategic relationships with key clients through our Global ALM suite of solutions. This candidate should have the ability to collaborate across a diverse group of internal stakeholders & regional partners, effectively manage multiple priorities and demands, and possess a deep understanding of transaction processing & the credit card industry. Role This Senior Specialist will lead development and execution of analytical solutions across multiple customers. The role will require strong partnership skills as this Senior Specialist will be partnering with our regional lead in the US to ensure accurate execution of customer contract terms and partnering with customers to set-up testing and validation for the solutions leveraged. Quarterly monitoring and reporting on solution validity will be required as a measure of success. The Senior Specialist in this role will manage the relationship with the client on solution deployment and any impacts, while also identifying opportunities to scale the solutions improving customer penetration in partnership with the ALM Product Lead. This role will require the ability to collaborate across a diverse group of internal global stakeholders & regional partners, effectively manage multiple priorities and demands, and possess a deep understanding of transaction processing & the payments card industry as it continues to evolve into a digital footprint. The role will require availability during other key regional time zones. This candidate should be intellectually curious, energetic, a self-starter and able to operate with a sense of urgency. In addition, the role requires an individual who can demonstrate discipline in prioritizing efforts, and the ability to be comfortable managing through ambiguity. 
Candidate needs to have strong communication skills with an ability to refine and adjust communication to gain support & sponsorship from Executive Management, and experience in driving execution and alignment with Regional Teams, who may not share the same sense of prioritization or urgency. All About You A Bachelor’s Degree in business, finance, marketing, product management, or related field, or equivalent work experience (required) Knowledge / Experience Master’s Degree or equivalent work experience (preferred) Knowledge of Mastercard’s product and services suite (desirable) Proficient in Python or R, Hive, Tableau, MSBI & applications, and VBA Experience with statistical modelling and predictive analytical techniques (preferred) Experience in overseeing multiple projects and initiatives concurrently Understanding of competitive offerings and industry trends Experience in working collaboratively in a cross-functional role operating with a sense of urgency to drive results Ability to influence and motivate others to achieve objectives Ability to think big and bold, innovate with intention, and deliver scalable solutions Ability to digest complex ideas and organize them into executable tasks Strong work ethic and a master of time management, organization, detail orientation, task initiation, planning and prioritization Self-starter and motivated to work independently with a proven track record of delivering success while operating within a team environment Corporate Security Responsibility All activities involving access to Mastercard assets, information, and networks come with an inherent risk to the organization and, therefore, it is expected that every person working for, or on behalf of, Mastercard is responsible for information security and must: Abide by Mastercard’s security policies and practices; Ensure the confidentiality and integrity of the information being accessed; Report any suspected information security violation or breach, and Complete all periodic mandatory security trainings in accordance with Mastercard’s guidelines. R-243117
Posted 1 week ago
2.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description Amazon Music is awash in data! To help make sense of it all, the DISCO (Data, Insights, Science & Optimization) team: (i) enables the Consumer Product Tech org to make data-driven decisions that improve customer retention, engagement and experience on Amazon Music. We build and maintain automated self-service data solutions, data science models and deep-dive into difficult questions that provide actionable insights. We also enable measurement, personalization and experimentation by operating key data programs ranging from attribution pipelines and northstar weblabs metrics to causal frameworks. (ii) delivers exceptional Analytics & Science infrastructure for DISCO teams, fostering a data-driven approach to insights and decision making. As platform builders, we are committed to constructing flexible, reliable, and scalable solutions to empower our customers. (iii) accelerates and facilitates content analytics and provides independence to generate valuable insights in a fast, agile, and accurate way. This domain provides analytical support for the below topics within Amazon Music: Programming / Label Relations / PR / Stations / Livesports / Originals / Case & CAM. The DISCO team enables repeatable, easy, in-depth analysis of music customer behaviors. We reduce the cost in time and effort of analysis, data set building, model building, and user segmentation. Our goal is to empower all teams at Amazon Music to make data-driven decisions and effectively measure their results by providing high quality, high availability data, and democratized data access through self-service tools. If you love the challenges that come with big data, then this role is for you. We collect billions of events a day, manage petabyte scale data on Redshift and S3, and develop data pipelines using Spark/Scala EMR, SQL based ETL, Airflow and Java services. We are looking for a talented, enthusiastic, and detail-oriented Data Engineer who knows how to take on big data challenges in an agile way. Duties include big data design and analysis, data modeling, and development, deployment, and operations of big data pipelines. You'll help build Amazon Music's most important data pipelines and data sets, and expand self-service data knowledge and capabilities through an Amazon Music data university. The DISCO team develops data specifically for a set of key business domains like personalization and marketing and provides and protects a robust self-service core data experience for all internal customers. We deal in AWS technologies like Redshift, S3, EMR, EC2, DynamoDB, Kinesis Firehose, and Lambda. Your team will manage the data exchange store (Data Lake) and EMR/Spark processing layer using Airflow as orchestrator. You'll build our data university and partner with Product, Marketing, BI, and ML teams to build new behavioural events, pipelines, datasets, models, and reporting to support their initiatives. You'll also continue to develop big data pipelines. Key job responsibilities Deep understanding of data, analytical techniques, and how to connect insights to the business, with practical experience insisting on the highest standards for operations in ETL and big data pipelines. With our Amazon Music Unlimited and Prime Music services, and our top music provider spot on the Alexa platform, providing high quality, high availability data to our internal customers is critical to our customer experiences. Assist the DISCO team with management of our existing environment that consists of Redshift and SQL based pipelines.
The activities around these systems will be well defined via standard operating procedures (SOPs) and typically involve approving data access requests, subscribing or adding new data to the environment, SQL data pipeline management (creating or updating existing pipelines), and performing maintenance tasks on the Redshift cluster. Assist the team with the management of our next-generation AWS infrastructure. Tasks include infrastructure monitoring via CloudWatch alarms, infrastructure maintenance through code changes or enhancements, and troubleshooting/root cause analysis of infrastructure issues that arise; in some cases this resource may also be asked to submit code changes based on infrastructure issues that arise. About The Team Amazon Music is an immersive audio entertainment service that deepens connections between fans, artists, and creators. From personalized music playlists to exclusive podcasts, concert livestreams to artist merch, we are innovating at some of the most exciting intersections of music and culture. We offer experiences that serve all listeners with our different tiers of service: Prime members get access to all music in shuffle mode, and top ad-free podcasts, included with their membership; customers can upgrade to Music Unlimited for unlimited on-demand access to 100 million songs, including millions in HD, Ultra HD, and spatial audio; and anyone can listen for free by downloading the Amazon Music app or via Alexa-enabled devices. Join us for the opportunity to influence how Amazon Music engages fans, artists, and creators on a global scale. Basic Qualifications 2+ years of data engineering experience Experience with data modeling, warehousing and building ETL pipelines Experience with SQL Experience with one or more scripting languages (e.g., Python, KornShell) Experience in Unix Experience in troubleshooting issues related to data and infrastructure. Preferred Qualifications Experience with big data technologies such as: Hadoop, Hive, Spark, EMR Experience with any ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc. Knowledge of distributed systems as it pertains to data storage and computing Experience in building or administering reporting/analytics platforms Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner. Company - ADCI MAA 15 SEZ Job ID: A2838395
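As a simple illustration of the orchestration work mentioned above, the sketch below defines a small Airflow DAG with a placeholder load step; it assumes Airflow 2.x, and the DAG id, schedule, and load logic are hypothetical stand-ins for a Redshift or EMR/Spark step.

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python import PythonOperator

# Minimal Airflow 2.x DAG sketch: one placeholder task standing in for a Redshift/Spark load step.
# The DAG id, schedule, and load logic are hypothetical.

def load_daily_listens(**context):
    # In a real pipeline this would run SQL against Redshift or submit an EMR/Spark step.
    print(f"Loading listens for {context['ds']}")

with DAG(
    dag_id="music_daily_listens",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    load = PythonOperator(
        task_id="load_daily_listens",
        python_callable=load_daily_listens,
    )
```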
Posted 1 week ago
10.0 years
0 Lacs
Kolkata, West Bengal, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Have proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organization in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future-state enterprise architecture. Contribute to various technical streams during project implementation. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing, and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies (a short streaming-ingestion sketch follows this posting).

Skills And Attributes For Success
Experience architecting highly scalable solutions on Azure, AWS, and GCP. Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [in motion, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communication skills (written and verbal, formal and informal). The ability to multi-task under pressure and work independently with minimal supervision. A team-player attitude and enjoyment of working in a cooperative and collaborative team environment. Adaptability to new technologies and standards. Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsibility for the evaluation of technical risks and mapping out mitigation strategies. Experience in data security [in motion, at rest]. Experience in performance benchmarking enterprise applications. Working knowledge of at least one of the cloud platforms: AWS, Azure, or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domain. Minimum 7 years of hands-on experience in one or more of the above areas. Minimum 10 years of industry experience.

Ideally, you’ll also have
Strong project management skills. Client management skills. Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; and the freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
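The responsibilities above include ingesting real-time data from Kafka with Spark Streaming. A minimal PySpark Structured Streaming sketch of that pattern is shown below; the broker address, topic name, event schema, and output paths are hypothetical placeholders rather than details from the posting, and the job assumes the spark-sql-kafka connector package is on the cluster.

```python
# Minimal sketch: consume JSON events from Kafka with Structured Streaming and
# land them as partitioned Parquet. All names and paths are hypothetical.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, from_json, to_date
from pyspark.sql.types import StringType, StructField, StructType, TimestampType

spark = SparkSession.builder.appName("kafka_ingest_sketch").getOrCreate()

# Hypothetical event schema for the JSON payload on the topic.
event_schema = StructType([
    StructField("event_id", StringType()),
    StructField("event_type", StringType()),
    StructField("event_ts", TimestampType()),
])

raw = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "broker1:9092")   # hypothetical broker
    .option("subscribe", "customer-events")              # hypothetical topic
    .option("startingOffsets", "latest")
    .load()
)

# Kafka delivers bytes; cast to string, parse JSON, and derive a partition column.
events = (
    raw.select(from_json(col("value").cast("string"), event_schema).alias("e"))
    .select("e.*")
    .withColumn("event_date", to_date(col("event_ts")))
)

query = (
    events.writeStream.format("parquet")
    .option("path", "s3://example-bucket/events/")                     # hypothetical sink
    .option("checkpointLocation", "s3://example-bucket/checkpoints/events/")
    .partitionBy("event_date")
    .trigger(processingTime="1 minute")
    .start()
)
query.awaitTermination()
```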
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Position Summary
We are seeking an Apache Hadoop - Subject Matter Expert (SME) who will be responsible for designing, optimizing, and scaling Spark-based data processing systems. This role requires hands-on experience with Spark architecture and core functionalities, focusing on building resilient, high-performance distributed data systems. You will collaborate with engineering teams to deliver high-throughput Spark applications and solve complex data challenges in real-time processing, big data analytics, and streaming. If you’re passionate about working in fast-paced, dynamic environments and want to be part of the cutting edge of data solutions, this role is for you.

We’re Looking For Someone Who Can
Design and optimize distributed Spark-based applications, ensuring low-latency, high-throughput performance for big data workloads. Troubleshooting: provide expert-level troubleshooting for any data or performance issues related to Spark jobs and clusters. Data Processing Expertise: work extensively with large-scale data pipelines using Spark's core components (Spark SQL, DataFrames, RDDs, Datasets, and Structured Streaming). Performance Tuning: conduct deep-dive performance analysis, debugging, and optimization of Spark jobs to reduce processing time and resource consumption. Cluster Management: collaborate with DevOps and infrastructure teams to manage Spark clusters on platforms like Hadoop/YARN, Kubernetes, or cloud platforms (AWS EMR, GCP Dataproc, etc.). Real-time Data: design and implement real-time data processing solutions using Apache Spark Streaming or Structured Streaming.

What Makes You The Right Fit For This Position
Expert in Apache Spark: in-depth knowledge of Spark architecture, execution models, and components (Spark Core, Spark SQL, Spark Streaming, etc.). Data Engineering Practices: solid understanding of ETL pipelines, data partitioning, shuffling, and serialization techniques to optimize Spark jobs. Big Data Ecosystem: knowledge of related big data technologies such as Hadoop, Hive, Kafka, HDFS, and YARN. Performance Tuning and Debugging: demonstrated ability to tune Spark jobs, optimize query execution, and troubleshoot performance bottlenecks. Experience with Cloud Platforms: hands-on experience running Spark clusters on cloud platforms such as AWS, Azure, or GCP. Containerization & Orchestration: experience with containerized Spark environments using Docker and Kubernetes is a plus.

Good To Have
Certification in Apache Spark or related big data technologies. Experience working with Acceldata's data observability platform or similar tools for monitoring Spark jobs. Demonstrated experience with scripting languages like Bash, PowerShell, and Python. Familiarity with concepts related to application, server, and network security management. Possession of certifications from leading Cloud providers (AWS, Azure, GCP), and expertise in Kubernetes, would be significant advantages.
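The tuning duties described above often come down to a few recurring levers. The hypothetical PySpark sketch below shows two of them: broadcasting a small dimension table so the large side of a join is not shuffled, and controlling shuffle and output partitioning. All table names, paths, and configuration values are invented for illustration and would need tuning per workload.

```python
# Sketch of two common Spark tuning levers: broadcast joins and partition control.
# Table names, paths, and config values are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = (
    SparkSession.builder.appName("tuning_sketch")
    # Cap shuffle partitions for a modest job; tune per data volume.
    .config("spark.sql.shuffle.partitions", "200")
    # Let adaptive query execution coalesce small shuffle partitions at runtime.
    .config("spark.sql.adaptive.enabled", "true")
    .getOrCreate()
)

facts = spark.read.parquet("s3://example-bucket/facts/")   # large fact table
dims = spark.read.parquet("s3://example-bucket/dims/")     # small dimension table

# Broadcast the small side so the join happens map-side, avoiding a shuffle
# of the large fact table.
joined = facts.join(broadcast(dims), on="dim_id", how="left")

# Repartition before writing to control the number and size of output files.
(
    joined.repartition(64, "event_date")
    .write.mode("overwrite")
    .partitionBy("event_date")
    .parquet("s3://example-bucket/enriched/")
)
```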
Posted 1 week ago
6.0 - 10.0 years
8 - 18 Lacs
Bengaluru
Work from Office
Job Overview:
The primary purpose of this role is to translate business requirements and functional specifications into logical program designs and to deliver dashboards, schemas, data pipelines, and software solutions. This includes developing, configuring, or modifying data components within various complex business and/or enterprise application solutions in various computing environments. You will partner closely with multiple business partners, Product Owners, Data Strategy, Data Platform, Data Science, and Machine Learning (MLOps) teams to drive innovative data products for end users. Additionally, you will help shape the overall solution and data products, and develop scalable solutions through best-in-class engineering practices.

Key Responsibilities:
• Data Pipeline Development: Designing, building, and maintaining robust data pipelines to move data from various sources (e.g., databases, external APIs, logs) to centralized data systems, such as data lakes or warehouses.
• Data Integration: Integrating data from multiple sources and ensuring it's processed in a consistent, usable format. This involves transforming, cleaning, and validating data to meet the needs of products, analysts, and data scientists.
• Database Management: Creating, managing, and optimizing databases for storing large amounts of structured and unstructured data. Ensuring high availability, scalability, and security of data storage solutions.
• Performance Optimization: Identifying and resolving issues related to the speed and efficiency of data systems. This could include optimizing queries and storage systems and improving overall system architecture.
• Automation: Automating routine tasks, such as data extraction, transformation, and loading (ETL), to ensure smooth data flows with minimal manual intervention.
• Collaboration with Data Teams: Working closely with product managers, UX/UI designers, and other stakeholders to understand data requirements and ensure data is in the right format for analysis and modeling.
• Data Governance and Quality: Ensuring data integrity and compliance with data governance policies, including data quality standards, privacy regulations (e.g., GDPR), and security protocols.
• Monitoring and Troubleshooting: Continuously monitoring data pipelines and databases for any disruptions or errors, and troubleshooting any issues that arise to ensure continuous data flow.
• Tool and Technology Management: Staying up to date with emerging data tools, technologies, and best practices in order to improve data systems and infrastructure.
• Documentation and Reporting: Documenting data systems, pipeline processes, and data architectures, providing clear instructions for the team to follow, and ensuring that the architecture is understandable for stakeholders.

Required Skills & Experience:
• Knowledge of databases: relational DBs such as Postgres, as well as NoSQL and streaming systems such as MongoDB and Kafka.
• Knowledge of big data systems such as Hadoop, Hive/Pig, Trino, etc.
• Experience with SQL-like query languages (SQL, MQL, HQL, etc.).
• Experience in building data pipelines (a small illustrative ETL sketch follows this posting).
• Experience with software lifecycle tools for CI/CD and version control systems such as Git.
• Familiarity with Agile methodologies is a plus.
General:
- Strong problem-solving skills and the ability to troubleshoot complex software issues.
- Familiarity with version control systems, particularly Git.
- Experience with Agile methodologies (e.g., Scrum, Kanban).
- Excellent communication skills, both verbal and written, with the ability to collaborate in a team environment.
- Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.

Preferred Qualifications:
• Experience working in GCP and familiarity with Kubernetes, BigQuery, GCS, and Airflow.
• Problem-Solving: Strong analytical and problem-solving skills.
• Communication: Excellent verbal and written communication skills, with the ability to convey technical concepts to non-technical stakeholders.
• Team Player: Ability to work collaboratively in a team-oriented environment.
• Adaptability: Flexibility to adapt to changing business needs and priorities.
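As referenced above, the pipeline-building and data-quality responsibilities can be illustrated with a very small batch ETL sketch: extract rows from Postgres, apply simple validation, and load the result to partitioned Parquet. The connection string, table, and column names are hypothetical, and the sketch assumes pandas with a pyarrow-backed Parquet writer.

```python
# Minimal batch ETL sketch: Postgres -> basic data-quality checks -> Parquet.
# Connection string, table, and column names are hypothetical placeholders.
import pandas as pd
from sqlalchemy import create_engine


def extract(conn_uri: str) -> pd.DataFrame:
    # Pull a small, explicit column list rather than SELECT *.
    engine = create_engine(conn_uri)
    query = "SELECT order_id, customer_id, amount, created_at FROM orders"
    return pd.read_sql(query, engine)


def transform(df: pd.DataFrame) -> pd.DataFrame:
    # Basic data-quality rules: drop rows missing keys, reject negative amounts.
    df = df.dropna(subset=["order_id", "customer_id"])
    df = df[df["amount"] >= 0].copy()
    # Derive a partition column from the timestamp.
    df["order_date"] = pd.to_datetime(df["created_at"]).dt.date
    return df


def load(df: pd.DataFrame, out_path: str) -> None:
    # Partition output by order_date so downstream queries can prune files.
    df.to_parquet(out_path, partition_cols=["order_date"], index=False)


if __name__ == "__main__":
    frame = extract("postgresql+psycopg2://user:pass@localhost:5432/shop")  # hypothetical
    load(transform(frame), "warehouse/orders/")
```

In a production setting the same extract/transform/load steps would typically be wrapped as tasks in an orchestrator such as Airflow rather than run as a standalone script.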
Posted 1 week ago
0 years
0 Lacs
India
On-site
Ready to be pushed beyond what you think you’re capable of?

At Coinbase, our mission is to increase economic freedom in the world. It’s a massive, ambitious opportunity that demands the best of us, every day, as we build the emerging onchain platform, and with it, the future global financial system.

To achieve our mission, we’re seeking a very specific candidate. We want someone who is passionate about our mission and who believes in the power of crypto and blockchain technology to update the financial system. We want someone who is eager to leave their mark on the world, who relishes the pressure and privilege of working with high-caliber colleagues, and who actively seeks feedback to keep leveling up. We want someone who will run towards, not away from, solving the company’s hardest problems. Our work culture is intense and isn’t for everyone. But if you want to build the future alongside others who excel in their disciplines and expect the same from you, there’s no better place to be.

The mission of the Platform Product Group engineers is to build a trusted, scalable and compliant platform to operate with speed, efficiency and quality. Our teams build and maintain the platforms critical to the existence of Coinbase. There are many teams that make up this group, including Product Foundations (i.e., Identity, Payment, Risk, Proofing & Regulatory, Finhub), Machine Learning, Customer Experience, and Infrastructure.

As a machine learning engineer, you will play a pivotal role in constructing essential infrastructure for the open financial system. This involves harnessing diverse and extensive data sources, including the blockchain, to grant millions of individuals access to cryptocurrency while simultaneously identifying and thwarting malicious entities. Your impact extends beyond safeguarding Coinbase, as you'll have the opportunity to employ machine learning to enhance the overall user experience. This includes imbuing intelligence into recommendations, risk assessment, chatbots, and various other aspects, making our product not only secure but also exceptionally user-friendly.

What you’ll be doing (i.e., job duties):
Investigate and harness cutting-edge machine learning methodologies, including deep learning, large language models (LLMs), and graph neural networks, to address diverse challenges throughout the company. These challenges encompass areas such as fraud detection, feed ranking, recommendation systems, targeting, chatbots, and blockchain mining (a small illustrative model sketch follows this posting). Develop and deploy robust, low-maintenance applied machine learning solutions in a production environment. Create onboarding codelabs, tools, and infrastructure to democratize access to machine learning resources across Coinbase, fostering a culture of widespread ML utilization.

What we look for in you (i.e., job requirements):
5+ years of industry experience as a machine learning and software engineer. Experience building backend systems at scale with a focus on data processing/machine learning/analytics. Experience with at least one ML model family: LLMs, GNNs, deep learning, logistic regression, gradient boosting trees, etc. Working knowledge in one or more of the following: data mining, information retrieval, advanced statistics, natural language processing, or computer vision. Exhibit our core cultural values: add positive energy, communicate clearly, be curious, and be a builder.

Nice to haves:
BS, MS, or PhD degree in Computer Science, Machine Learning, Data Mining, Statistics, or a related technical field.
Knowledge of Apache Airflow, Spark, Flink, Kafka/Kinesis, Snowflake, Hadoop, Hive. Experience with Python. Experience with model interpretability and responsible AI. Experience with data analysis and visualization.

Job #: GPML05IN
*Answers to crypto-related questions may be used to evaluate your onchain experience.

Please be advised that each candidate may submit a maximum of four applications within any 30-day period. We encourage you to carefully evaluate how your skills and interests align with Coinbase's roles before applying.

Commitment to Equal Opportunity
Coinbase is committed to diversity in its workforce and is proud to be an Equal Opportunity Employer. All qualified applicants will receive consideration for employment without regard to race, color, religion, creed, gender, national origin, age, disability, veteran status, sex, gender expression or identity, sexual orientation or any other basis protected by applicable law. Coinbase will also consider for employment qualified applicants with criminal histories in a manner consistent with applicable federal, state and local law. For US applicants, you may view the Know Your Rights notice here. Additionally, Coinbase participates in the E-Verify program in certain locations, as required by law. Coinbase is also committed to providing reasonable accommodations to individuals with disabilities. If you need a reasonable accommodation because of a disability for any part of the employment process, please contact us at accommodations[at]coinbase.com to let us know the nature of your request and your contact information. For quick access to screen reading technology compatible with this site, click here to download a free compatible screen reader (a free step-by-step tutorial can be found here).

Global Data Privacy Notice for Job Candidates and Applicants
Depending on your location, the General Data Protection Regulation (GDPR) and California Consumer Privacy Act (CCPA) may regulate the way we manage the data of job applicants. Our full notice outlining how data will be processed as part of the application procedure for applicable locations is available here. By submitting your application, you are agreeing to our use and processing of your data as required. For US applicants only, by submitting your application you are agreeing to arbitration of disputes as outlined here.
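As referenced in the posting above, fraud detection with gradient-boosted trees is one of the model families mentioned. The sketch below is a self-contained illustration of that technique using scikit-learn; the features and labels are synthetic stand-ins, not Coinbase data or code, and a real system would source features from production stores and evaluate far more carefully.

```python
# Illustrative sketch of a gradient-boosted fraud classifier on synthetic data.
# Feature names, distributions, and labels are invented placeholders.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000

# Synthetic features: transaction amount, account age (days), recent velocity.
X = np.column_stack([
    rng.lognormal(mean=3.0, sigma=1.0, size=n),   # amount
    rng.integers(1, 3650, size=n),                # account age in days
    rng.poisson(2.0, size=n),                     # transactions in the last hour
])

# Synthetic, imbalanced labels loosely tied to large amounts and high velocity.
risky = (X[:, 0] > np.percentile(X[:, 0], 95)) & (X[:, 2] >= 5)
y = (risky | (rng.random(n) < 0.01)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

model = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.05)
model.fit(X_train, y_train)

scores = model.predict_proba(X_test)[:, 1]
# Average precision is more informative than accuracy under heavy class imbalance.
print("test average precision:", average_precision_score(y_test, scores))
```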
Posted 1 week ago
10.0 years
0 Lacs
Kanayannur, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Have proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organization in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future-state enterprise architecture. Contribute to various technical streams during project implementation. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing, and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark (a short batch-ingestion sketch follows this posting). Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success
Experience architecting highly scalable solutions on Azure, AWS, and GCP. Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [in motion, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communication skills (written and verbal, formal and informal). The ability to multi-task under pressure and work independently with minimal supervision. A team-player attitude and enjoyment of working in a cooperative and collaborative team environment. Adaptability to new technologies and standards. Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsibility for the evaluation of technical risks and mapping out mitigation strategies. Experience in data security [in motion, at rest]. Experience in performance benchmarking enterprise applications. Working knowledge of at least one of the cloud platforms: AWS, Azure, or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domain. Minimum 7 years of hands-on experience in one or more of the above areas. Minimum 10 years of industry experience.

Ideally, you’ll also have
Strong project management skills. Client management skills. Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; and the freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
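The batch-mode ingestion responsibility above (Hive/Spark) can be sketched as a small PySpark job that lands raw CSV files as a partitioned Hive table. The database, table, column names, and paths are hypothetical, and the job assumes a cluster with a configured Hive metastore.

```python
# Batch-ingestion sketch: raw CSV in object storage -> partitioned Hive table.
# Database, table, column names, and paths are hypothetical placeholders.
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

spark = (
    SparkSession.builder.appName("batch_ingest_sketch")
    .enableHiveSupport()   # requires a configured Hive metastore
    .getOrCreate()
)

raw = (
    spark.read.option("header", "true")
    .csv("s3://example-landing/transactions/2024-06-01/")   # hypothetical landing path
)

# Light cleaning: derive a partition column and de-duplicate on the business key.
cleaned = (
    raw.withColumn("txn_date", to_date(col("txn_ts")))
    .dropDuplicates(["txn_id"])
)

(
    cleaned.write.mode("append")
    .partitionBy("txn_date")
    .saveAsTable("analytics.transactions")   # hypothetical Hive database.table
)
```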
Posted 1 week ago
10.0 years
0 Lacs
Trivandrum, Kerala, India
On-site
At EY, you’ll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we’re counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all.

EY GDS – Data and Analytics (D&A) – Cloud Architect
As part of our EY-GDS D&A (Data and Analytics) team, we help our clients solve complex business challenges with the help of data and technology. We dive deep into data to extract the greatest value and discover opportunities in key businesses and functions like Banking, Insurance, Manufacturing and Auto, Healthcare, Retail, Supply Chain, and Finance.

The opportunity
We’re looking for Managers (GTM + Cloud/Big Data Architects) with strong technology and data understanding and proven capability in delivery and pre-sales. This is a fantastic opportunity to be part of a leading firm as well as a growing Data and Analytics team.

Your Key Responsibilities
Have proven experience in driving Analytics GTM/pre-sales by collaborating with senior stakeholders in the client and partner organization in BCM, WAM, and Insurance. Activities will include pipeline building, RFP responses, creating new solutions and offerings, conducting workshops, and managing in-flight projects focused on cloud and big data. Work with clients to convert business problems/challenges into technical solutions, considering security, performance, scalability, etc. [10-15 years]. Understand current and future-state enterprise architecture. Contribute to various technical streams during project implementation. Provide product- and design-level technical best practices. Interact with senior client technology leaders, understand their business goals, and create, architect, propose, develop, and deliver technology solutions. Define and develop client-specific best practices around data management within a Hadoop or cloud environment. Recommend design alternatives for data ingestion, processing, and provisioning layers. Design and develop data ingestion programs to process large data sets in batch mode using Hive, Pig, Sqoop, and Spark. Develop data ingestion programs to ingest real-time data from live sources using Apache Kafka, Spark Streaming, and related technologies.

Skills And Attributes For Success
Experience architecting highly scalable solutions on Azure, AWS, and GCP. Strong understanding of and familiarity with Azure/AWS/GCP and Big Data ecosystem components. Strong understanding of underlying Azure/AWS/GCP architectural concepts and distributed computing paradigms. Hands-on programming experience in Apache Spark using Python/Scala and Spark Streaming. Hands-on experience with major components like cloud ETLs, Spark, and Databricks. Experience working with NoSQL in at least one of the data stores: HBase, Cassandra, MongoDB. Knowledge of Spark and Kafka integration, with multiple Spark jobs consuming messages from multiple Kafka partitions. Solid understanding of ETL methodologies in a multi-tiered stack, integrating with Big Data systems like Cloudera and Databricks. Strong understanding of underlying Hadoop architectural concepts and distributed computing paradigms. Good knowledge of Apache Kafka and Apache Flume. Experience in enterprise-grade solution implementations.
Experience in performance benchmarking enterprise applications. Experience in data security [in motion, at rest]. Strong UNIX operating system concepts and shell scripting knowledge.

To qualify for the role, you must have
A flexible and proactive/self-motivated working style with strong personal ownership of problem resolution. Excellent communication skills (written and verbal, formal and informal). The ability to multi-task under pressure and work independently with minimal supervision. A team-player attitude and enjoyment of working in a cooperative and collaborative team environment. Adaptability to new technologies and standards. Participation in all aspects of the Big Data solution delivery life cycle, including analysis, design, development, testing, production deployment, and support. Responsibility for the evaluation of technical risks and mapping out mitigation strategies. Experience in data security [in motion, at rest]. Experience in performance benchmarking enterprise applications. Working knowledge of at least one of the cloud platforms: AWS, Azure, or GCP. Excellent business communication, consulting, and quality process skills. Excellence in leading solution architecture, design, build, and execution for leading clients in the Banking, Wealth and Asset Management, or Insurance domain. Minimum 7 years of hands-on experience in one or more of the above areas. Minimum 10 years of industry experience.

Ideally, you’ll also have
Strong project management skills. Client management skills. Solutioning skills.

What We Look For
People with technical experience and enthusiasm to learn new things in this fast-moving environment.

What Working At EY Offers
At EY, we’re dedicated to helping our clients, from start-ups to Fortune 500 companies, and the work we do with them is as varied as they are. You get to work on inspiring and meaningful projects. Our focus is education and coaching alongside practical experience to ensure your personal development. We value our employees, and you will be able to control your own development with an individual progression plan. You will quickly grow into a responsible role with challenging and stimulating assignments. Moreover, you will be part of an interdisciplinary environment that emphasizes high quality and knowledge exchange. Plus, we offer: support, coaching and feedback from some of the most engaging colleagues around; opportunities to develop new skills and progress your career; and the freedom and flexibility to handle your role in a way that’s right for you.

EY | Building a better working world
EY exists to build a better working world, helping to create long-term value for clients, people and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform and operate. Working across assurance, consulting, law, strategy, tax and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 week ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
India's leading tech hubs are known for their thriving technology industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
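To make the Hive skill set above concrete, a typical HiveQL aggregation might look like the sketch below, run here through PySpark's Hive support (it could equally be executed from the Hive CLI or beeline). The database, table, and column names are hypothetical placeholders.

```python
# Tiny sketch of the kind of HiveQL query a Hive-focused role involves,
# executed via PySpark's Hive integration. All names are hypothetical.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("hive_query_sketch")
    .enableHiveSupport()   # requires a configured Hive metastore
    .getOrCreate()
)

monthly_revenue = spark.sql("""
    SELECT customer_region,
           date_format(order_ts, 'yyyy-MM') AS order_month,
           SUM(order_amount)                AS revenue
    FROM   sales.orders                     -- hypothetical Hive table
    WHERE  order_ts >= '2024-01-01'
    GROUP  BY customer_region, date_format(order_ts, 'yyyy-MM')
    ORDER  BY order_month, revenue DESC
""")

monthly_revenue.show(20, truncate=False)
```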
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!
Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.