
3301 Big Data Jobs - Page 11

JobPe aggregates listings for easy access, but you apply directly on the original job portal.

3.0 - 6.0 years

7 - 11 Lacs

Bengaluru

Work from Office

Advanced programming skills in Python, Scala, and Go: strong expertise in developing and maintaining microservices in Go (or other similar languages), with the ability to lead and mentor others in this area. Extensive exposure to developing Big Data applications, Data Engineering, ETL, and Data Analytics.

Cloud expertise: in-depth knowledge of IBM Cloud or similar cloud platforms, with a proven track record of deploying and managing cloud-native applications.

Leadership and collaboration: ability to lead cross-functional teams, work closely with product owners, and drive platform enhancements while mentoring junior team members.

Security and compliance: strong understanding of security best practices and compliance standards, with experience ensuring that platforms meet or exceed these requirements.

Analytical and problem-solving skills: excellent problem-solving abilities with a proven track record of resolving complex issues in a multi-tenant environment.

Required education: Bachelor's degree.

Required technical and professional expertise:
- 4-7 years' experience, primarily with Apache Spark, Kafka, and SQL, preferably in Data Engineering projects with a strong TDD approach.
- Advanced programming skills in languages such as Python, Java, and Scala, with proficiency in SQL.
- Extensive exposure to developing Big Data applications, Data Engineering, ETL tools, and Data Analytics.
- Exposure to Data Modelling, Data Quality, and Data Governance.
- Extensive experience creating and maintaining data pipelines: workflows that move data from various sources into data warehouses or data lakes (see the sketch below).
- Cloud expertise: in-depth knowledge of IBM Cloud or similar platforms, with a proven track record of developing, deploying, and managing cloud-native applications.
- Good to have: front-end development experience with React, Carbon, and Node for managing and improving user-facing portals.
- Leadership and collaboration: ability to lead cross-functional teams, work closely with product owners, and drive platform enhancements while mentoring junior team members.
- Security and compliance: strong understanding of security best practices and compliance standards, with experience ensuring that platforms meet or exceed these requirements.
- Analytical and problem-solving skills: excellent problem-solving abilities with a proven track record of resolving complex issues in a multi-tenant environment.

Preferred technical and professional experience:
- Hands-on experience with data analysis and querying using SQL, and considerable exposure to ETL processes.
- Expertise in developing cloud applications with high-volume data processing.
- Experience building scalable microservices components using various API development frameworks.
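As a rough, non-authoritative illustration of the pipeline work this posting describes, here is a minimal PySpark sketch that moves raw source files into a partitioned data-lake table. All paths, bucket names, and column names are hypothetical.

```python
# Minimal PySpark batch-ingestion sketch: reads raw JSON events, applies a
# basic quality filter, and writes partitioned Parquet to a data lake.
# Paths and column names are illustrative placeholders, not from the posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("events-ingest").getOrCreate()

raw = spark.read.json("s3a://raw-bucket/events/")          # source system export
clean = (
    raw.filter(F.col("event_id").isNotNull())              # drop malformed rows
       .withColumn("event_date", F.to_date("event_ts"))    # derive partition key
)
clean.write.mode("append").partitionBy("event_date").parquet(
    "s3a://lake-bucket/events/"                            # data-lake landing zone
)
```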

Posted 1 week ago

Apply

4.0 - 6.0 years

20 - 30 Lacs

Gurugram

Work from Office

Key Skills: Spark, Scala, Flink, Big Data, Structured Streaming, Data Architecture, Data Modeling, NoSQL, AWS, Azure, GCP, JVM Tuning, Performance Optimization.

Roles & Responsibilities:
- Design and build robust data architectures for large-scale data processing.
- Develop and maintain data models and database designs.
- Work on stream-processing engines such as Spark Structured Streaming and Flink (see the sketch below).
- Perform analytical processing on Big Data using Spark.
- Administer, configure, monitor, and tune the performance of Spark workloads and distributed JVM-based systems.
- Lead and support cloud deployments across AWS, Azure, or Google Cloud Platform.
- Manage and deploy Big Data technologies such as business data lakes and NoSQL databases.

Experience Requirements:
- Extensive experience working with large data sets and Big Data technologies.
- 4-6 years of hands-on experience with the Spark/Big Data tech stack.
- At least 4 years of experience in Scala.
- At least 2 years of experience in cloud deployment (AWS, Azure, or GCP).
- At least 2 successfully completed product deployments involving Big Data technologies.

Education: B.Tech + M.Tech (dual degree) or B.Tech.
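For a sense of the stream-processing work mentioned in the responsibilities, here is a minimal Spark Structured Streaming sketch that aggregates events from Kafka. The broker address and topic are placeholders, and the job assumes the spark-sql-kafka connector package is on the classpath.

```python
# Hypothetical Structured Streaming sketch: consume a Kafka topic and maintain
# a running count per event, printed to the console sink for demonstration.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("stream-counts").getOrCreate()

events = (
    spark.readStream.format("kafka")                       # requires spark-sql-kafka
         .option("kafka.bootstrap.servers", "broker:9092") # placeholder broker
         .option("subscribe", "clickstream")               # placeholder topic
         .load()
)
counts = (
    events.select(F.col("value").cast("string").alias("event"))
          .groupBy("event")
          .count()
)
query = (
    counts.writeStream.outputMode("complete")
          .format("console")                               # stdout sink for demo
          .start()
)
query.awaitTermination()
```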

Posted 1 week ago

Apply

6.0 - 10.0 years

4 - 8 Lacs

Hyderabad

Work from Office

We are looking for a skilled Senior Oracle Data Engineer to join our team at Apps Associates (I) Pvt. Ltd, with 6-10 years of experience in the IT Services & Consulting industry.

Roles and Responsibilities:
- Design, develop, and implement data engineering solutions using Oracle technologies.
- Collaborate with cross-functional teams to identify and prioritize project requirements.
- Develop and maintain large-scale data pipelines and architectures.
- Ensure data quality, integrity, and security through data validation and testing procedures.
- Optimize data processing workflows for improved performance and efficiency.
- Troubleshoot and resolve complex technical issues related to data engineering projects.

Job Requirements:
- Strong knowledge of Oracle data engineering concepts and technologies.
- Experience with data modeling, design, and development.
- Proficiency in programming languages such as Java or Python.
- Excellent problem-solving skills and attention to detail.
- Ability to work collaboratively in a team environment.
- Strong communication and interpersonal skills.

Posted 1 week ago

Apply

6.0 - 10.0 years

5 - 8 Lacs

Greater Noida

Work from Office

Job Description:
• Experience implementing Snowflake utilities, SnowSQL, Snowpipe, and Big Data modeling techniques using Python.
• Expertise in deploying Snowflake features such as data sharing, events, and lake-house patterns.
• Proficiency in RDBMS, complex SQL, PL/SQL, performance tuning, and troubleshooting.
• Provide resolution to an extensive range of complicated data-pipeline-related problems.
• Experience with data migration from RDBMS to the Snowflake cloud data warehouse (see the sketch below).
• Experience with data security and data access controls and design.
• Build processes supporting data transformation, data structures, metadata, dependency, and workload management.
• Experience in Snowflake modeling: roles, schemas, databases.
• Extensive hands-on expertise with creation of stored procedures and advanced SQL.
• Collaborate with data engineers, analysts, and stakeholders to understand data requirements and translate them into DBT models.
• Develop and enforce best practices for version control, testing, and documentation of DBT models.
• Build and manage data quality checks and validation processes within the DBT pipelines.
• Ability to optimize SQL queries for performance and efficiency.
• Good to have: experience with Azure services such as ADF and Databricks, and data pipeline building.
• Excellent analytical and problem-solving skills.
• Working experience with an Agile methodology.
• Knowledge of DevOps processes (including CI/CD) and Power BI.
• Excellent communication skills.
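As a small, hedged example of the RDBMS-to-Snowflake loading this posting mentions, the following sketch uses snowflake-connector-python to run a COPY INTO from an external stage. The account, stage, and table names are invented for illustration.

```python
# Hedged sketch: load staged files into a Snowflake table from Python.
# All identifiers (account, warehouse, stage, table) are placeholders.
import snowflake.connector

conn = snowflake.connector.connect(
    user="ETL_USER", password="...", account="my_account",
    warehouse="LOAD_WH", database="ANALYTICS", schema="RAW",
)
cur = conn.cursor()
try:
    # COPY INTO pulls files from an external stage into a raw table
    cur.execute("""
        COPY INTO RAW.EVENTS
        FROM @RAW.EVENTS_STAGE
        FILE_FORMAT = (TYPE = 'JSON')
    """)
    print(cur.fetchall())  # per-file load results returned by Snowflake
finally:
    cur.close()
    conn.close()
```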

Posted 1 week ago

Apply

10.0 - 15.0 years

8 - 14 Lacs

Bengaluru

Work from Office

The Role
Will be the prime owner for building the technical relationship with the GSI globally. Will be responsible for building and deepening technical relationships across large accounts, verticals, and business units within the GSI. Help create new markets and opportunities for Hitachi's solutions and technologies.

Responsibilities
- Have a clear understanding of the GSI's go-to-market (GTM) plans and key initiatives.
- Identify and build relationships with the key stakeholders within the GSI who own the GTM and key initiatives.
- Build relevance and create preference for Hitachi's technologies and solutions within the GSI's GTM and initiatives.
- Use business acumen to translate technology and solutions into actual use cases and benefits for the business or end customer.
- Evaluate the need for a particular solution or use case and quantify the market potential of the solution.
- Identify expertise within Hitachi, the GSI, and third-party solution providers, and coordinate and build a team of experts to build solutions relevant to the market.
- Be the coordinator and chair of the solutions council to help build and co-create market-relevant solutions with the GSI.
- Work with the sales teams to help design the GTM and roll out solutions globally.
- Provide inputs to Hitachi's product and engineering teams on the need for product improvements and solution requirements, based on feedback from the markets or the GSI.
- Provide technical expertise and support to the HDS GSI sales team, including detailed product, solutions, and services knowledge.
- Work with other stakeholders of the business globally to define the roles and responsibilities of each stakeholder and execute the technical plan.
- Work closely with the GAM and business leaders to overachieve revenue and margin targets.
- Identify skill/technology gaps within the GSI and help bridge them through training. Ensure that GSIs are updated and fully competent on Hitachi technology and solutions, and that Hitachi is aware of and competent on the GSI solutions and competitor solutions offered to the GSIs.
- Build detailed solution architectures, quantify the resource requirements for unique solution builds, plan build timelines, and justify budgets for co-creation activities.

What You'll Bring
- Minimum of 10+ years of technical experience, preferably with/for SIs or large OEMs with a breadth of offerings across infrastructure, applications, digital, and IoT.
- Deep knowledge of key infrastructure technologies (storage, compute, network, virtualization) and cloud offerings and services (AWS, Azure, Oracle).
- Should have independently designed and delivered innovative cloud solutions with elements of analytics and big data.
- Ability to keep track of cloud offerings in the markets and design competitive differentiators for Hitachi solutions.
- Experience building and selling SaaS offerings.
- Experience articulating the value of automation; ability to identify opportunities and design the high-level architecture for automation projects.
- A clear understanding of digital transformation use cases for each vertical, the ability to identify gaps in the GSI solutions/offerings, and a proven ability to help bridge those gaps by working with engineering or third-party providers.
- Must be a continuous learner of new technologies and solutions in the Big Data and IoT space, with deep IT industry knowledge.
- Experience building and managing solutions and enablement programs, including all program lifecycle aspects (such as opportunity information gathering, building a business case, high- and low-level architecture definition, resource plans, etc.).
- An accomplished presenter of sales and technical material to both small and large groups, with excellent written and oral communication skills and the ability to effectively present ideas, properly describe problems, and propose solutions.
- Must be a self-starter with excellent time management skills, able to function independently to determine proper methods and procedures, and may supervise other personnel as part of the role.
- Demonstrated self-confidence and self-reliance in conducting demonstrations and training, and experience working in business-to-business situations.

Competencies
- Strong knowledge of strategic outsourcing (SO) and systems integration (SI) business strategies, as well as pursuit and delivery models, highly desired.
- Working knowledge of IaaS, SaaS, PaaS, and cloud computing highly desired.
- Application and solutions experience specific to SAP, HANA, and Oracle is required. Platform experience is needed with VMware, Hyper-V, Windows, and Linux. Experience integrating server and storage architectures is a must.
- Ability to decipher the OT ecosystem and use proven Hitachi technologies to bridge the IT/OT gap.
- Must truly be a believer and practitioner of automation, cost efficiency, and innovation.
- Must be self-driven and a team player; sharing knowledge and working with colleagues in a transparent, open work environment is an absolute must.
- Must have experience building technical solutions, working with system integrators, and resolving customer IT problems, with the ability to work well in a team setting.
- A comprehensive understanding of information technology is required, with emphasis on applications, compute, networking platforms, storage fabric, and storage platforms.
- Critical thinking, reasoning, and problem solving are an essential part of this position.
- Requires the ability to interact with various customers spread across industries and geographies, including executives, scientists, engineers, analysts, system administrators, architects, and other computer and business professionals.
- A broad background and general knowledge of technical, administrative, and financial areas, and a basic understanding of related terms and business processes, are a plus.

Posted 1 week ago

Apply

6.0 - 11.0 years

15 - 25 Lacs

Bengaluru

Work from Office

Hiring a Data Engineer in Bangalore with 6+ years' experience in the skills below.
Must Have:
- Big Data technologies: Hadoop, MapReduce, Spark, Kafka, Flink (see the sketch below)
- Programming languages: Java/Scala/Python
- Cloud: Azure, AWS, Google Cloud
- Docker/Kubernetes
Required Candidate Profile:
- Strong communication skills
- Experience with relational SQL/NoSQL databases: Postgres and Cassandra
- Experience with the ELK stack
- Immediate joiners preferred
- Must be ready to work from office
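A minimal sketch of the Kafka skills listed above, using the kafka-python client to produce and consume JSON events; the broker and topic names are placeholders.

```python
# Hypothetical produce/consume round trip with kafka-python.
import json
from kafka import KafkaProducer, KafkaConsumer

producer = KafkaProducer(
    bootstrap_servers="broker:9092",                        # placeholder broker
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("events", {"user": 42, "action": "click"})    # placeholder topic
producer.flush()

consumer = KafkaConsumer(
    "events",
    bootstrap_servers="broker:9092",
    auto_offset_reset="earliest",                           # read from the start
    value_deserializer=lambda b: json.loads(b.decode("utf-8")),
)
for msg in consumer:
    print(msg.value)
    break                                                   # one message for demo
```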

Posted 1 week ago

Apply

4.0 - 9.0 years

6 - 10 Lacs

Hyderabad, Bengaluru, Secunderabad

Work from Office

We are looking for a Senior Software Engineer to join our IMS team in Bangalore. This is an amazing opportunity to work on Big Data technologies involved in content ingestion. The team consists of 10-12 engineers reporting to the Sr Manager. We have a great skill set in Spark, Java, Scala, Hive, SQL, XSLT, AWS EMR, S3, etc., and we would love to speak with you if you have skills in the same.

About you (experience, education, skills, and accomplishments):
- Work experience: minimum 4 years' experience in Big Data projects involving content ingestion, curation, and transformation.
- Technical skills: Spark, Python/Java, Scala, AWS EMR, S3, SQS, Hive, XSLT.
- Education: bachelor's degree in computer science, mechanical engineering, or a related field, or at least 4 years of equivalent relevant experience.

It would be great if you also had:
- Experience in analyzing and optimizing performance
- Exposure to any automation test frameworks
- Databricks
- Java and Python programming

What will you be doing in this role?
- Take an active role in planning, estimation, design, development, and testing of large-scale, enterprise-wide initiatives to build or enhance a platform or custom applications used for the acquisition, transformation, entity extraction, and mining of content on behalf of business units across Clarivate Analytics.
- Troubleshoot and address production issues within the given SLA.
- Coordinate with global representatives and teams.

Posted 1 week ago

Apply

14.0 - 19.0 years

7 - 12 Lacs

Noida

Work from Office

We are looking for a Senior Manager - ML Ops to join our Technology team at Clarivate. You will get the opportunity to work in a cross-cultural work environment while working on the latest web technologies with an emphasis on user-centered design.

About You (skills & experience required):
- Bachelor's or master's degree in computer science, engineering, or a related field.
- Overall 14+ years of experience spanning DevOps, machine learning operations, and the data engineering domain.
- Proven experience in managing and leading technical teams.
- Strong understanding of MLOps practices, tools, and frameworks.
- Proficiency in data pipelines, data cleaning, and feature engineering is essential for preparing data for model training.
- Knowledge of programming languages (Python, R) and version control systems (Git) is necessary for building and maintaining MLOps pipelines.
- Experience with MLOps-specific tools and platforms (e.g., Kubeflow, MLflow, Airflow) can streamline MLOps workflows (see the sketch below).
- DevOps principles, including CI/CD pipelines, infrastructure as code (IaC), and monitoring, are helpful for automating ML workflows.
- Familiarity with cloud platforms (AWS, GCP, Azure) and their associated services (e.g., compute, storage, ML platforms) is essential for deploying and scaling ML models.
- Familiarity with container orchestration tools like Kubernetes can help manage and scale ML workloads efficiently.

It would be great if you also had:
- Experience with big data technologies (Hadoop, Spark).
- Knowledge of data governance and security practices.
- Familiarity with DevOps practices and tools.

What will you be doing in this role?
- Data science model deployment & monitoring: oversee the deployment of machine learning models into production environments; ensure continuous monitoring and performance tuning of deployed models; implement robust CI/CD pipelines for model updates and rollbacks.
- Collaboration: work with cross-functional teams to understand business requirements and translate them into technical solutions; communicate project status, risks, and opportunities to stakeholders; provide technical guidance and support to team members.
- Infrastructure & automation: design and manage scalable infrastructure for model training and deployment; automate repetitive tasks to improve efficiency and reduce errors; ensure the infrastructure meets security and compliance standards.
- Innovation & improvement: stay updated with the latest trends and technologies in MLOps; identify opportunities for process improvements and implement them; drive innovation within the team to enhance MLOps capabilities.
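For illustration of the MLflow-style experiment tracking referenced above, here is a minimal, hedged sketch that logs a parameter, a metric, and a model artifact. The experiment name and model choice are arbitrary examples, not Clarivate's actual setup.

```python
# Minimal MLflow tracking sketch: train a toy model, record the run.
import mlflow
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("demo-experiment")        # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=100)
    model.fit(X_tr, y_tr)
    acc = accuracy_score(y_te, model.predict(X_te))
    mlflow.log_param("n_estimators", 100)       # hyperparameter for comparison
    mlflow.log_metric("accuracy", acc)          # evaluation result
    mlflow.sklearn.log_model(model, "model")    # versioned artifact for deployment
```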

Posted 1 week ago

Apply

5.0 - 9.0 years

10 - 15 Lacs

Hyderabad, Ahmedabad, Gurugram

Work from Office

Impact: You will enable S&P Global to showcase our proprietary data, combine it with curated alternative data, enrich it with value-added services, and deliver it through clients' preferred channels. This will help clients make better investment and business decisions with confidence.

What You Can Expect: An unmatched experience in handling large volumes of data, analytics, visualization, and services over cloud technologies, along with insight into the product development lifecycle to convert ideas into revenue-generating streams.

Responsibilities: We are looking for a self-motivated, enthusiastic, and passionate software engineer to develop technology solutions for the S&P Global product. The ideal candidate thrives in a highly technical role and will design and develop software using cutting-edge technologies, including data pipelines, large-scale data processing, machine learning, and multi-cloud environments. Development is already underway, so the candidate is expected to get up to speed quickly and start contributing. You will:
- Actively participate in all scrum ceremonies and effectively follow agile best practices.
- Play a key role in the development team to build high-quality, high-performance, scalable code.
- Produce technical design documents and conduct technical walkthroughs.
- Document and demonstrate solutions using technical design documents, diagrams, and stubbed code.
- Collaborate effectively with technical and non-technical stakeholders.
- Respond to and resolve production issues.

What We Are Looking For:
- A minimum of 5 to 9 years of significant experience in application development.
- Proficiency with software development lifecycle (SDLC) methodologies, including agile and test-driven development.
- Experience working with high-volume data and computationally intensive systems.
- Experience optimizing memory management and performance tuning in programming.
- Proficiency in development environments, including integrated development environments (IDEs), version control systems, continuous integration, unit testing tools, and defect management tools.
- Domain knowledge in the financial industry and capital markets is a plus.
- Excellent communication skills, with strong verbal and writing proficiencies.
- Ability to mentor teams, innovate, experiment, and present business ideas to key stakeholders.

Required Technical Skills:
- Strong skills in developing solutions involving relational database technologies.
- Experience building data pipelines.
- Familiarity with cloud-managed services and platforms.
- Ability to develop custom solutions for data processing and integration.
- Experience developing scalable and performant data APIs.
- Proficiency in writing infrastructure as code for development environments.
- Experience providing analytical capabilities using business intelligence tools.
- Capability to manage data delivery at scale to geographically distributed clients.

Desirable Technical Skills:
- Familiarity with front-end technologies and API development.
- Understanding of microservices architecture and cloud technologies.
- Experience with big data and analytics solutions.
- Knowledge of relational and NoSQL databases.
- Familiarity with data orchestration and processing frameworks.

Posted 1 week ago

Apply

3.0 - 8.0 years

15 - 19 Lacs

Mumbai

Work from Office

Project description
As an Engineer within the Public Markets Technology Department, you will play a pivotal role in developing and enhancing best-in-class applications that support global investment and co-investment strategies. This role involves close collaboration with both technology and business teams to modernize and evolve the technology landscape, enabling strategic and operational excellence.

Responsibilities
- PMWB/Cosmos: Provide operational support for PMs in London and Hong Kong for Order Entry and Position Management. Currently in scope are EPM, SSG, and AE, with future expansion for LRA in London and Australia.
- EPM Validator: Onboard, enhance, and maintain fund performance data for users in London and Hong Kong.
- EPM Explorer: NAV and performance loading configuration changes; this requires Python code and config changes.
- Batch: Cosmos and PMWB SOD support; EPM Risk Data; expand monitoring of Data Fabric CMF pipelines to anticipate abnormal run times.
- Support: Partner with the London ARS-FICC and FCT teams to make judgement calls and address failures independently of the Toronto office.

Skills
Must have:
- University degree in Engineering or Computer Science preferred.
- 3+ years' experience in software development.
- Strong knowledge of and demonstrated experience with Python is a must; experience with Java is an asset.
- Experience with relational and non-relational databases is an asset.
- Demonstrated experience developing applications on AWS; AWS certification is preferred.
- Experience in the capital markets industry is nice to have, including knowledge of various financial products and derivatives.
- Strong desire to learn how the business operates and how technology helps it achieve its goals.
- An entrepreneurial attitude and the ability to work in a fast-paced environment and manage competing priorities.
- Experience working in an Agile environment.
- Ability and willingness to adapt and contribute as needed to ensure the team meets its goals.
Nice to have:
- Public Markets Technology experience.

Posted 1 week ago

Apply

5.0 - 10.0 years

20 - 35 Lacs

Pune

Hybrid

Our client is a global IT service & consulting organization.

Data Software Engineer
Location: Pune
Notice period: Immediate to 60 days
F2F interview on 27th July (Sunday) at the Pune location
Experience: 5-12 years
Skills: Python, Spark, Azure Databricks/GCP/AWS

Data Software Engineer - Spark, Python, (AWS, Kafka or Azure Databricks or GCP)
Job Description:
- 5-12 years of experience in Big Data and data-related technology
- Expert-level understanding of distributed computing principles
- Expert-level knowledge of and experience in Apache Spark
- Hands-on programming with Python
- Proficiency with Hadoop v2, MapReduce, HDFS, Sqoop
- Experience building stream-processing systems using technologies such as Apache Storm or Spark Streaming
- Experience with messaging systems, such as Kafka or RabbitMQ
- Good understanding of Big Data querying tools, such as Hive and Impala
- Experience with integration of data from multiple data sources, such as RDBMS (SQL Server, Oracle), ERP, and files
- Good understanding of SQL queries, joins, stored procedures, and relational schemas
- Experience with NoSQL databases, such as HBase, Cassandra, MongoDB
- Knowledge of ETL techniques and frameworks
- Performance tuning of Spark jobs (see the sketch below)
- Experience with native cloud data services on AWS or Azure Databricks
- Ability to lead a team efficiently
- Experience designing and implementing Big Data solutions
- Practitioner of Agile methodology
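As a brief illustration of the Spark performance-tuning item above, this sketch shows a few common session-level knobs and a pre-join repartition. The specific values are assumptions for demonstration, not recommendations for any particular workload.

```python
# Illustrative Spark session tuning for a shuffle-heavy job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuned-job")
    .config("spark.sql.shuffle.partitions", "400")   # size shuffles to the cluster
    .config("spark.executor.memory", "8g")           # per-executor heap
    .config("spark.sql.adaptive.enabled", "true")    # adaptive query execution
    .getOrCreate()
)

df = spark.read.parquet("s3a://bucket/large-table/")  # placeholder input
df = df.repartition("join_key")                       # co-locate rows before a wide join
```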

Posted 1 week ago

Apply

3.0 - 8.0 years

12 - 18 Lacs

Bengaluru

Work from Office

Role & responsibilities: We are hiring experienced Data Engineers for immediate joining at our Bangalore office. If you have strong hands-on experience in PySpark and Big Data ecosystems, we'd like to talk to you.

What we're looking for:
- Minimum 3 years of experience in data engineering
- Strong expertise in PySpark
- Hands-on experience with Hadoop and Big Data technologies
- Experience with the Azure cloud platform
- Understanding of Gen AI concepts (preferred, not mandatory)
- Ability to work in fast-paced environments and deliver quickly

Why join us:
- Immediate joining opportunity
- Work on enterprise-scale data projects
- Exposure to the latest cloud and AI technologies

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Nagpur, Maharashtra

On-site

About the Company: HCL Technologies is a global technology company that helps enterprises reimagine their businesses for the digital age. The company's mission is to deliver innovative solutions that drive business transformation and enhance customer experiences.

About the Role: We are looking for a skilled professional to join our team, focusing on technical expertise in various technologies and platforms.

Responsibilities:
- Python
- Azure ADF
- API Development
- Azure Databricks
- CI/CD
- DevOps
- Terraform
- AWS
- Big Data

To apply for this position, please share your resume at sanchita.mitra@hcltech.com.

Posted 1 week ago

Apply

6.0 - 10.0 years

0 Lacs

Karnataka

On-site

As a Big Data Data Modeller at our organization, you will be responsible for leading moderately complex initiatives and deliverables within technical domain environments. Your role will involve contributing to large-scale planning of strategies; designing, coding, testing, debugging, and documenting projects and programs associated with the technology domain, including upgrades and deployments. You will also review moderately complex technical challenges that require an in-depth evaluation of technologies and procedures, resolve issues, and lead a team to meet client needs, leveraging a solid understanding of the function, policies, procedures, and compliance requirements.

Collaboration and consultation with peers, colleagues, and mid-level managers to resolve technical challenges and achieve goals will be a key aspect of your responsibilities. Additionally, you will lead projects, act as an escalation point, provide guidance and direction to less experienced staff, and collaborate with scrum stakeholders to implement modernized and sustainable technology roadmaps.

The ideal candidate should possess strong data modeling skills and expertise in Big Data, ETL, Hadoop, Google BigQuery, Kafka event streaming, API development, and CI/CD. A minimum of 6 years of hands-on experience in Big Data software enterprise application development, using continuous integration and delivery when developing code, is required. Experience in the banking/financial technology domain is preferred.

You must have a good understanding of current and future trends and practices in technology, and be able to proactively manage risk through the implementation of the right controls, escalating where required. Your responsibilities will also include working with the engineering manager, product owner, and team to ensure that the product is delivered with quality, on time, and within budget. Strong verbal and written communication skills are essential, as you will be working in a global development environment.

This is a full-time position with a day shift schedule, and the work location is in person in Bangalore. The application deadline for this role is 08/08/2025.

Posted 1 week ago

Apply

2.0 - 6.0 years

0 Lacs

Karnataka

On-site

As a part of Goldman Sachs, you will play a crucial role in utilizing your expertise, resources, and innovative ideas to support our clients, shareholders, and the communities we engage with in their growth. Established in 1869, we stand as a prominent global investment banking, securities, and investment management firm, headquartered in New York with offices around the world.

At Goldman Sachs, we strongly believe that your individuality enhances your effectiveness in your role. We are dedicated to promoting diversity and inclusivity both within our organization and externally. We ensure that each member of our team is provided with ample opportunities for professional and personal growth through various training programs, development initiatives, extensive networks, comprehensive benefits, wellness programs, personal finance solutions, and mindfulness programs. To delve deeper into our culture, benefits, and talented people, visit GS.com/careers. We are devoted to accommodating candidates with special needs or disabilities during our recruitment process. For more information on accommodations, please visit: https://www.goldmansachs.com/careers/footer/disability-statement.html

Engineering holds a pivotal position within our company, driving our businesses forward. The dynamic nature of our work environment demands strategic thinking that is not only innovative but also results in intelligent solutions. At Goldman Sachs, our Engineers are not merely creators; we are enablers of possibilities. By bridging people and capital with novel ideas, we aspire to make a transformative impact. Tackle the most intricate and urgent engineering challenges for our clients; be a part of engineering teams that design highly scalable software and systems, architect cutting-edge low-latency infrastructure solutions, proactively safeguard against cyber threats, and use machine learning in conjunction with financial engineering to consistently convert data into actionable insights. Contribute to the creation of new ventures, revolutionize finance, and embrace a realm of opportunities that moves at the pace of the markets.

Engineering, encompassing our Technology Division and global strategists groups, forms the core of our business, demanding innovative strategic thinking and immediate, practical solutions. Are you ready to explore the boundaries of digital capabilities? We are seeking individuals who embody the core values of Goldman Sachs Engineers: visionaries and solution finders crafting innovations in risk management, big data, mobile technology, and beyond. We value creative collaboration and adaptability to change, and we thrive in a fast-paced global setting. Join us in shaping the future of engineering at Goldman Sachs.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Maharashtra

On-site

As a professional in Cloud Sales, you will be responsible for building and maintaining strong relationships with customers and partners on platforms such as AWS and GCP throughout the duration of engagements. Your role will involve collaborating closely with diverse teams to support customer acquisition and contribute to the growth of revenue and market share. Additionally, you will play a vital role in supporting the international consulting team by providing valuable insights and helping implement cloud strategies for customers in areas such as AI/ML, Big Data, Analytics, IoT, and Cloud Migration.

Your technical expertise should include a minimum of 3+ years of experience in Cloud Sales, focused primarily on AWS and GCP. You should have a keen interest in consulting with clients, enjoy working on agile projects (Scrum, Kanban), and have the ability to drive topics forward independently. Strong analytical and conceptual skills, along with excellent self-organization abilities, are essential for success in this role.

Effective communication and presentation skills are crucial, and you should be willing to travel within PAN India for work-related purposes. Previous experience in customer-facing roles is advantageous, and familiarity with sales, communication, and presentation tools is necessary. Any initial knowledge of AWS or AWS certifications will be considered a plus. Proficiency in the English language is also required to effectively fulfill the responsibilities of this position.

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Indore, Madhya Pradesh

On-site

As a Senior Engineer - Technical Support, you are expected to have 5 to 8 years of experience in software and hardware infrastructure implementation and support. Your educational background should include a BE (IT/Computers) or MCA degree. The work location for this role is Indore, with the option of working from the office.

In this role, you will work in a multi-functional capacity, utilizing your expertise in software, hardware, networking infrastructure, storage, virtualization, customer training, and technical support. Your responsibilities will include assisting pre-sales teams in creating techno-commercial proposals, estimating pricing, preparing delivery schedules, and designing deployment architectures for ClearTrail's products. You will be involved in deploying cutting-edge Big Data technologies across hundreds of servers and petabyte-scale storage. Additionally, you will engage with customers on integration with telecom networks, travel to customer locations for implementations, and provide support to resolve product-related issues.

Your day-to-day tasks will involve identifying hardware and software requirements; staying updated on the latest trends in infrastructure; designing deployment architectures, networking, storage, virtualization, and network security; deploying monitoring tools; using automation for deployments and upgrades; diagnosing customer-reported problems; and collaborating with QA and Engineering teams to resolve issues within SLAs.

Furthermore, you will provide technical reviews of documentation, assist with knowledge-sharing initiatives and product release training, and participate in design reviews with customers. Your role will be pivotal in ensuring successful product implementations and customer satisfaction.

Overall, as a Senior Engineer - Technical Support, you will play a crucial role in the deployment and maintenance of hardware and software infrastructure, providing technical expertise, support, and solutions to meet customer requirements and business objectives.

Posted 1 week ago

Apply

4.0 - 8.0 years

0 Lacs

Haryana

On-site

As an AWS Glue Python Developer, you will utilize your 4 to 5+ years of relevant experience to design and implement efficient data models, ensuring data accuracy and optimal performance. Your key responsibilities will include developing, maintaining, and optimizing ETL processes to extract, transform, and load data from various sources into our data warehouse (see the sketch below). You will be expected to write complex SQL queries, develop and maintain Python scripts and applications, and leverage your deep knowledge of AWS services to build and maintain data pipelines and infrastructure. Experience with Infrastructure as Code (IaC) tools like Terraform or CloudFormation to automate the provisioning and management of AWS resources is a plus. Knowledge of PySpark for big data processing and analysis is desirable, and proficiency in Git and GitHub for version control and collaboration is required. You will be responsible for identifying and implementing optimizations for data processing pipelines, maintaining data quality, collaborating with other teams, and documenting all data engineering processes and projects.

Qualifications for this role include at least 4-5 years of experience in data engineering roles, with a strong emphasis on AWS technologies. Proficiency in data modelling, SQL, and Python is essential, along with demonstrated expertise in AWS services, Infrastructure as Code (IaC) tools, and version control using Git and GitHub. Experience with PySpark and big data processing is a plus. Strong problem-solving and communication skills, the ability to work both independently and in a team, and a track record of taking ownership of projects and delivering on time are important attributes.

The hiring process for this position will involve screening (HR round), Technical Round 1, Technical Round 2, and a final HR round. If you are looking for a challenging opportunity to work with cutting-edge technologies in a collaborative environment, this AWS Glue Python Developer role is for you.
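A hedged skeleton of an AWS Glue PySpark job of the kind this role describes: read from the Glue Data Catalog, deduplicate, and write Parquet to S3. The database, table, and path names are placeholders, and the script is meant to run inside a Glue job, not locally.

```python
# Sketch of a Glue ETL job; identifiers are hypothetical examples.
import sys
from awsglue.context import GlueContext
from awsglue.dynamicframe import DynamicFrame
from awsglue.job import Job
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext

args = getResolvedOptions(sys.argv, ["JOB_NAME"])
glue_context = GlueContext(SparkContext.getOrCreate())
job = Job(glue_context)
job.init(args["JOB_NAME"], args)

# Read a raw table registered in the Glue Data Catalog
dyf = glue_context.create_dynamic_frame.from_catalog(
    database="analytics", table_name="orders_raw"
)
df = dyf.toDF().dropDuplicates(["order_id"])  # basic cleanup as a DataFrame

# Write the cleaned data back to S3 as Parquet
glue_context.write_dynamic_frame.from_options(
    frame=DynamicFrame.fromDF(df, glue_context, "orders_clean"),
    connection_type="s3",
    connection_options={"path": "s3://example-lake/orders_clean/"},
    format="parquet",
)
job.commit()
```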

Posted 1 week ago

Apply

5.0 - 9.0 years

0 Lacs

Karnataka

On-site

As a Controller at PhonePe Limited, you will play a crucial role in the financial operations of the company. Your responsibilities will include preparing and finalizing monthly book closures, financial statements, and schedules. You will work closely with business finance teams to drive key performance indicator (KPI) improvements. Formalizing standard operating procedures (SOPs) for accounting processes and ensuring compliance with the necessary accounting treatments will be part of your daily tasks. You will be responsible for preparing reconciliation statements, conducting internal and statutory audits, and submitting schedules with variance analysis to the central controllership team on a monthly basis. Additionally, you will collaborate with the tech team to develop finance requirements related to reporting and operations. Your expertise in MS Office, strong communication skills, and ability to build relationships with stakeholders will be essential for success in this role.

The ideal candidate is a Chartered Accountant with 5-6 years of experience in a controllership role, preferably with startup experience. A strong bias for action, problem-solving skills, and an ownership mentality are traits that we value. Hands-on experience with automation tools and working with big data will be an added advantage.

At PhonePe, we foster a culture that empowers individuals to do their best work. We believe in solving complex problems, executing quickly, and building platforms that impact millions of people. If you are passionate about making a difference, collaborating with talented individuals, and accelerating your career growth, we invite you to join our team.

As a full-time employee at PhonePe, you will have access to a comprehensive benefits package, including medical insurance, wellness programs, parental support, mobility benefits, retirement benefits, and other perks such as higher education assistance and a salary advance policy. Join us in our mission to offer every Indian an equal opportunity to accelerate their progress by unlocking the flow of money and access to services. To learn more about PhonePe and our work culture, visit our blog.

Posted 1 week ago

Apply

3.0 - 7.0 years

0 Lacs

Haryana

On-site

As a Java with Hadoop Developer at Airlinq in Gurgaon, India, you will play a vital role in collaborating with the Engineering and Development teams to establish and maintain a robust testing and quality program for Airlinq's products and services. Your responsibilities will include, but are not limited to:

- Being part of a team focused on creating end-to-end IoT solutions using Hadoop to address various industry challenges.
- Building quick prototypes and demonstrations to showcase the value of technologies such as IoT, Machine Learning, Cloud, Microservices, DevOps, and AI to management.
- Developing reusable components, frameworks, and accelerators to streamline the development cycle of future IoT projects.
- Operating effectively with minimal supervision and guidance.
- Configuring cloud platforms for specific use cases.

To excel in this role, you should have a minimum of 3 years of IT experience, with at least 2 years dedicated to working with cloud technologies like AWS or Azure. You must possess expertise in designing and implementing highly scalable enterprise applications and establishing continuous integration environments on the targeted cloud platform. Proficiency in Java and the Spring Framework, along with strong knowledge of IoT principles, connectivity, security, and data streams, is essential. Familiarity with emerging technologies such as Big Data, NoSQL, Machine Learning, AI, and Blockchain is also required.

Additionally, you should be adept at utilizing Big Data technologies like Hadoop, Pig, Hive, and Spark, with hands-on experience on any Hadoop platform. Experience in workload migration between on-premise and cloud environments, programming with MapReduce and Spark, as well as Java (core Java), J2EE technologies, Python, Scala, Unix, and Bash scripts is crucial. Strong analytical, problem-solving, and research skills are necessary, along with the ability to think innovatively and independently.

This position requires 3-7 years of relevant work experience and is based in Gurgaon. The ideal educational background is a B.E./B.Tech. or M.E./M.Tech. in Computer Science or Electronics Engineering, or an MCA.

Posted 1 week ago

Apply

15.0 - 21.0 years

0 Lacs

Haryana

On-site

Data Architecture Specialist: Join a team of data architects who design and execute industry-relevant reinventions that allow organizations to realize exceptional business value from technology. As a Senior Manager specializing in Data Architecture, you will be based in Bangalore, Mumbai, Pune, or Gurugram, with 15 to 21 years of experience. Explore an exciting career at Accenture if you are a problem solver who is passionate about tech-driven transformation. Design, build, and implement strategies to enhance business architecture performance in an inclusive, diverse, and collaborative culture.

The Technology Strategy & Advisory team helps clients achieve growth and efficiency through innovative R&D transformation, redefining business models using agile methodologies. You will collaborate closely with clients to unlock the value of data, architecture, and AI, driving business agility and transformation to a real-time enterprise.

As a Data Architecture Consulting professional, your responsibilities include:
- Identifying, assessing, and solving complex business problems using in-depth data analysis
- Helping clients design, architect, and scale their journey to new technology-driven growth
- Enabling architecture transformation from the current state to a to-be enterprise environment
- Assisting clients in building capabilities for growth and innovation to sustain high performance

Key Requirements:
- Present data strategy and technology solutions to drive C-suite/senior-leadership-level discussions
- Utilize an in-depth understanding of technologies such as big data, data integration, data governance, data quality, cloud platforms, data modeling tools, data warehouses, and hosting environments
- Lead proof-of-concept implementations and define plans to scale across multiple technology domains
- Demonstrate creativity and analytical skills in problem-solving environments
- Develop client-handling skills to deepen relationships with key stakeholders
- Collaborate with, work with, and motivate diverse teams to achieve goals

Experience Requirements:
- MBA from a tier-1 institute
- Prior experience assessing information strategy maturity, evaluating new IT potential, defining data-based strategies, and establishing information architecture landscapes
- Designing solutions using cloud platforms like AWS, Azure, and GCP, and conceptualizing data models
- Establishing frameworks for effective data governance, defining data ownership, standards, policies, and associated processes
- Evaluating existing products and frameworks and developing options for proposed solutions
- Practical industry expertise in Financial Services, Retail, Telecommunications, Life Sciences, Mining and Resources, or equivalent domains, with an understanding of key technology trends and their business implications.

Posted 1 week ago

Apply

9.0 - 13.0 years

0 Lacs

Chennai, Tamil Nadu

On-site

The Data Engineer is responsible for creating high-quality data products that support the Bank's regulatory requirements and data-driven decision-making. As a Data Engineer, you will lead by example, collaborate closely with customers, and address any obstacles that may arise. Your expertise in data architecture standards, data warehousing, data structures, and business intelligence will play a crucial role in contributing to business outcomes within an agile team.

Your key responsibilities will include developing and maintaining scalable, extensible, and highly available data solutions; prioritizing critical business needs while aligning with the broader architectural vision; identifying and mitigating risks in the data supply chain; adhering to and enhancing technical standards; and designing and building analytical data models.

To excel in this role, you must possess a first-class degree in Engineering/Technology along with 9 to 11 years of experience implementing data-intensive solutions using agile methodologies. You should have a strong background in relational databases and proficiency in SQL for data querying, transformation, and manipulation. Experience in data modeling for analytical purposes, automating data pipelines, and utilizing cloud-native technologies, along with a willingness to explore new technologies, is essential. Effective communication, problem-solving abilities, mentoring skills, and the capacity to independently lead and deliver medium-sized components are also key attributes.

In terms of technical skills, you must have hands-on experience in ETL and proficiency in data integration platforms such as Ab Initio, Apache Spark, Talend, or Informatica. Additionally, familiarity with big data platforms like Hadoop, Hive, or Snowflake; expertise in data warehousing concepts, relational and NoSQL database design, and data modeling techniques; programming languages like Python, Java, or Scala; and exposure to DevOps practices and data governance principles are crucial. Valuable technical skills include proficiency in Ab Initio, knowledge of public cloud data platforms, data quality and controls, containerization platforms, and working with various file formats. Experience with job schedulers and business intelligence tools, as well as relevant certifications, would be advantageous.

This position falls under the Technology Job Family Group within the Digital Software Engineering Job Family and is a full-time role. If you require accommodations due to a disability, please refer to the Accessibility at Citi policy. For more information on Citi's EEO Policy Statement and your rights, please review the provided resources.

Posted 1 week ago

Apply

5.0 - 10.0 years

0 - 1 Lacs

Bengaluru

Hybrid

Who We Are
Verve has created a more efficient and privacy-focused way to buy and monetize advertising. Verve is an ecosystem of demand and supply technologies fusing data, media, and technology together to deliver results and growth to both advertisers and publishers, no matter the screen or location, and no matter who, what, or where a customer is. With 30 offices across the globe and an eye on servicing forward-thinking advertising customers, Verve's solutions are trusted by more than 90 of the United States' top 100 advertisers, 4,000 publishers globally, and the world's top demand-side platforms. Learn more at www.verve.com.

About the Role
In this role you will work closely with product, engineering, and other teams, collaborate with the rest of the Data Science team, and work with Machine Learning Engineers to turn prototypes into solutions.

Domain
Your main focus will be on our audience generation and insight projects:
- Audience generation: ML for embedded targets, privacy-first audience approaches, composite AI agents, etc.
- Audience insight: describe audience composition and characteristics.
- Ad hoc analysis: support business requests to better understand our data assets when data is not readily available from our BI tools.
- Sales pitch support: provide valuable extra insight to augment the value proposition for key accounts.

Research and Development
The Data Science role includes the following responsibilities:
- Research and development of cutting-edge machine learning systems, models, and schemes across many different areas of adtech
- Develop real-time algorithms for audience creation and segmentation
- Discover insights/patterns about our customers from various data sources such as exchange data, behavioural data, location data, and first- and third-party data assets
- Design experiments, oversee A/B testing, evaluate the quality of derived assets, and continuously monitor model performance
- Create proofs of concept and data science prototypes
- Search for and select appropriate data sets
- Perform statistical analysis and use the results to improve models
- Identify differences in data distribution that could affect model performance in real-world situations
- Visualize data for deeper insights
- Analyze the use cases of ML algorithms and rank them by their success probability
- Understand when your findings can be applied to business decisions
- Reduce business problems into machine learning problems and opportunities
- Verify data quality

Collaborative Work
- Work closely with product, engineering, and sales teams to drive the use of Data Science across the Verve Group
- Collaborate with our Exchange Data Science team
- Collaborate with Machine Learning Engineers to engineer prototypes into solutions

What You Will Bring
- 8+ years' experience working as a Data Scientist in industry (preferably in advertising) is a must
- A postgraduate degree in a relevant quantitative field (e.g. applied mathematics, statistics, engineering, computer science, physics, operations research, economics, behavioral sciences); a PhD is a strong plus
- Fluency in statistical analysis, data mining, and machine learning
- Proficiency in Python, SQL, PySpark, TensorFlow (TFLite), scikit-learn, Keras, and PyTorch
- Enthusiasm for solving interesting problems
- Willingness to learn and work in a team-oriented and constantly changing environment

Nice to have
Previous experience in: audience creation and insight; (highly imbalanced) classification problems (see the sketch below); data asset identification through feature engineering; classification and neural-network regularisation; feature selection and hyperparameter tuning.

What We Also Work With
We work with a wide range of tools, technologies, and processes, and the right candidate should be willing to learn new technologies. In this role you will get hands-on experience with many established and emerging technologies:
- Framework: Metaflow (or similar)
- Modeling frameworks: TensorFlow, Keras, PyTorch
- Hyperparameter tuning frameworks: Hyperopt, Keras Tuner
- MLflow
- Technologies: GCP and/or AWS (e.g. Logs Explorer, Cloud Functions, Cloud Scheduler), BigQuery
- DevOps tooling: Airflow (or similar), Grafana (or similar), JupyterHub (or similar)
- Other: challenging constraints (sparse data assets, privacy-first constraints, low latency for inference), efficient work with Big Data (wrangling, modeling, low-latency inference, the shuffling issue), developing data-driven products

What We Offer
- Be part of a multicultural team that is bringing advertising to the next level
- You will learn and evolve in an empowering environment characterized by entrepreneurial actions
- Responsibility, independence, and an opportunity to participate in projects that have a significant impact on Verve's success
- 3 wellness days per year (in Q1, Q2 & Q3) and an Employee Assistance Program to help you maintain your well-being
- Enhance your professional skills with a yearly training budget and improve your language skills through German and/or English classes
- Work and Travel Program (monthly raffle after 2 years of employment)
- We are eager to build a great team together and we appreciate your help through our Employee Referral Bonus
- Align your interests with the company's success and take part in our Employee Shares Purchase Plan
- You will be entitled to 19 holidays per year in addition to any public/bank holidays
- Personalized Benefits Platform: with a budget of 4100 INR/month, you can choose the benefits that fit you best from the following options: mobility and travel; entertainment and food; fitness and healthcare
- Enjoy food and beverage benefits with colleagues and have fun during team events
- Medical insurance for self and family

Verve provides equal employment opportunities to all employees and applicants for employment and prohibits discrimination and harassment of any type without regard to race, color, religion, age, sex, national origin, disability status, genetics, protected veteran status, sexual orientation, gender identity or expression, or any other characteristic protected by federal, state, or local laws.
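As a hedged illustration of the highly imbalanced classification item above, here is a minimal scikit-learn sketch using class weighting and a precision-recall metric. The synthetic data and parameters are placeholders, not Verve's actual approach.

```python
# Toy imbalanced-classification sketch: ~2% positives, class weighting,
# evaluated with PR-AUC rather than plain accuracy.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import average_precision_score
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=5000, weights=[0.98], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

clf = LogisticRegression(class_weight="balanced", max_iter=1000)
clf.fit(X_tr, y_tr)

scores = clf.predict_proba(X_te)[:, 1]          # positive-class probabilities
print("PR-AUC:", average_precision_score(y_te, scores))
```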

Posted 1 week ago

Apply

3.0 - 8.0 years

30 - 35 Lacs

Bengaluru

Work from Office

This role is for a Senior Data Engineer - AIML with a strong development background, whose primary objective will be to contribute to developing and operationalizing platform services and large-scale machine learning pipelines at global scale. We are seeking a talented professional with a solid mix of experience in Big Data and AI/ML systems. The ideal candidate will have the ability to learn quickly and deliver solutions within strict deadlines in a fast-paced environment, a passion for optimizing existing solutions and making incremental improvements, strong interpersonal and effective communication skills (both written and verbal), and knowledge of Agile methodologies and common scrum practices and tools.

This is a hybrid position. Expectation of days in office will be confirmed by your Hiring Manager.

Basic Qualifications:
- 3+ years of relevant work experience and a Bachelor's degree, OR 5+ years of relevant work experience
- Strong development experience in Python and one or more of the following programming languages: Go, Rust, Java/Scala, C/C++

Preferred Qualifications:
- Strong knowledge of Big Data technologies such as Hadoop, Spark, Kafka, Redis, Flink, and Airflow
- Hands-on experience with virtualization and containerization, and with distributed compute frameworks like Ray (see the sketch below)
- Knowledge of DR/HA topologies, with hands-on experience implementing them
- Hands-on experience building and maintaining data and model engineering pipelines and feature engineering pipelines; comfortable with core ML concepts and frameworks
- Well-versed in common software development tools, DevOps tools, and Agile methodologies
- Strong interpersonal and effective communication skills, both written and verbal
- Payment domain experience with an AI/ML applications background is a plus
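As a small illustration of the distributed-compute experience mentioned in the preferred qualifications, here is a minimal Ray sketch that fans a placeholder transformation out across workers; the workload itself is a stand-in.

```python
# Minimal Ray sketch: run a per-batch computation in parallel across workers.
import ray

ray.init()  # connects to or starts a local cluster

@ray.remote
def transform(batch):
    # placeholder per-batch feature computation
    return [x * 2 for x in batch]

batches = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
results = ray.get([transform.remote(b) for b in batches])  # parallel execution
print(results)
```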

Posted 1 week ago

Apply

4.0 - 9.0 years

16 - 25 Lacs

Navi Mumbai, Bengaluru, Mumbai (All Areas)

Hybrid

Role & responsibilities:
- Design and implement scalable data pipelines for feature extraction, transformation, and loading (ETL) using technologies such as PySpark, Scala, and relevant big data frameworks.
- Govern and optimize data pipelines to ensure high reliability, efficiency, and data quality across on-premise and cloud environments.
- Collaborate closely with data scientists, ML engineers, and business stakeholders to understand data requirements and translate them into technical solutions.
- Implement best practices for data governance, metadata management, and compliance with regulatory requirements.
- Lead a team of data engineers, providing technical guidance and mentorship, and fostering a culture of innovation and collaboration.
- Stay updated with industry trends and advancements in big data technologies, and contribute to the continuous improvement of our data engineering practices.

Preferred candidate profile:
- Strong experience in data engineering, with hands-on experience designing and implementing data pipelines.
- Strong proficiency in programming languages such as PySpark and Scala, with experience in big data technologies (Cloudera, Hadoop ecosystem).
- Proven leadership experience managing and mentoring a team of data engineers.
- Experience working in a banking or financial services environment is a plus.
- Excellent communication skills, with the ability to collaborate effectively across teams and stakeholders.

Posted 1 week ago

Apply

Start Your Job Search Today

Browse through a variety of job opportunities tailored to your skills and preferences. Filter by location, experience, salary, and more to find your perfect fit.

Job Application AI Bot

Apply to 20+ Portals in one click


Download the Mobile App

Instantly access job listings, apply easily, and track applications.
