3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
Cloud Kinetics is seeking a candidate with expertise in Bigdata, Hadoop, Hive SQLs, Spark, and other tools within the Bigdata Eco System. As a member of our team, you will be responsible for developing code, optimizing queries for performance, setting up environments, ensuring connectivity, and deploying code into production post-testing. Strong functional and technical knowledge is essential to fulfill project requirements, particularly in the context of Banking terminologies. Additionally, you may lead small to medium-sized projects and act as the primary contact for related tasks. Proficiency in DevOps and Agile Development Framework is crucial for this role. In addition to the core requirements, familiarity with Cloud computing, particularly AWS or Azure Cloud Services, is advantageous. The ideal candidate will possess strong problem-solving skills, adaptability to ambiguity, and a quick grasp of new and complex concepts. Experience in collaborating with teams within complex organizational structures is preferred. Knowledge of BI tools like MSTR and Tableau, as well as a solid understanding of object-oriented programming and HDFS concepts, will be beneficial. As a member of the team, your responsibilities will include working as a developer in Bigdata, Hadoop, or Data Warehousing Tools, and Cloud Computing. This entails working on Hadoop, Hive SQLs, Spark, and other tools within the Bigdata Eco System. Furthermore, you will create Scala/Spark jobs for data transformation and aggregation, develop unit tests for Spark transformations and helper methods, and design data processing pipelines to streamline operations. If you are a proactive individual with a strong technical background and a passion for leveraging cutting-edge technologies to drive innovation, we encourage you to apply for this exciting opportunity at Cloud Kinetics.,
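For a sense of what the Scala/Spark transformation-and-aggregation work described above typically involves, here is a minimal sketch, shown in PySpark for brevity; the table and column names (staging.bank_txns, account_id, txn_amount) are hypothetical illustrations, not details from the posting.

```python
# Minimal sketch of a Spark aggregation job of the kind described above (PySpark).
# Table and column names (staging.bank_txns, account_id, txn_amount) are hypothetical.
from pyspark.sql import SparkSession, DataFrame
from pyspark.sql import functions as F


def aggregate_daily_totals(txns: DataFrame) -> DataFrame:
    """Roll up transactions to one row per account per day (kept pure so it is unit-testable)."""
    return (
        txns.withColumn("txn_date", F.to_date("txn_ts"))
            .groupBy("account_id", "txn_date")
            .agg(F.sum("txn_amount").alias("total_amount"),
                 F.count("*").alias("txn_count"))
    )


if __name__ == "__main__":
    spark = SparkSession.builder.appName("daily-txn-rollup").enableHiveSupport().getOrCreate()
    txns = spark.table("staging.bank_txns")              # Hive source table (assumed)
    (aggregate_daily_totals(txns)
        .write.mode("overwrite")
        .partitionBy("txn_date")
        .saveAsTable("curated.daily_txn_totals"))
    spark.stop()
```

Keeping the transformation as a pure DataFrame-in/DataFrame-out function is what makes the unit tests the posting asks for straightforward to write.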
Posted 1 month ago
5.0 - 12.0 years
0 Lacs
pune, maharashtra
On-site
Treasury and FP&A Technology is seeking an experienced Testing Sr. Manager specializing in Automation to lead the definition, planning, and execution of testing automation strategies for the Global Funds Transfer Pricing Application. The ideal candidate will be a hands-on leader capable of architecting robust automation frameworks, enhancing testing efficiency, and ensuring the seamless delivery of high-quality software. Expertise in automation tools, agile methodologies, and quality engineering best practices is essential to transform and elevate the current testing automation landscape. Responsibilities: - Define, plan, and execute testing automation strategy for CitiFTP, ensuring continuous monitoring and enhancement of automation coverage. - Design, develop, and implement scalable and maintainable automation frameworks for UI, API, and data validation testing on the Big Data/Hadoop platform. - Collaborate with various teams to integrate automation into the agile Software Development Life Cycle (SDLC). - Enhance regression and end-to-end testing efficiency through automation. - Develop robust test scripts and maintain automation suites to support rapid software releases. - Improve overall test coverage, defect detection, and release quality with automation. - Establish and track key QA metrics such as defect leakage, test execution efficiency, and automation coverage. - Advocate for best practices in test automation, including code reviews, re-usability, and maintainability. - Drive the adoption of AI/ML-based testing tools and emerging trends in test automation. - Manage, mentor, and upskill a team of test engineers in automation practices. - Foster a culture of continuous learning and innovation within the testing community. - Define career development paths and ensure team members stay updated with industry advancements. - Analyze organizational trends to enhance processes and stay informed about industry trends. - Assess risks in business decisions with a focus on safeguarding Citigroup's reputation and complying with regulations. Qualifications: - 12+ years of experience in functional and non-functional software testing. - 5+ years of experience as a Test Automation Lead. - Expertise in test automation frameworks/tools like Jenkins, Selenium, Cucumber, TestNG, Junit, and Cypress. - Strong programming skills in Java, Python, or similar languages. - Proficiency in SQL. - Experience with API testing tools (Postman, RestAssured) and performance testing tools (JMeter, LoadRunner). - Familiarity with build tools like Maven/Gradle, continuous integration tools like Jenkins, and source management tools like Git/GitHub. - Solid knowledge of Agile, Scrum, and DevOps practices. - Strong familiarity with functional test tools (JIRA). - Exposure to cloud-based test execution (AWS, Azure, or GCP) and big data/database testing automation. - Preferred experience with AI-driven test automation and advanced test data management strategies. - Certifications such as ISTQB Advanced, Certified Agile Tester, or Selenium WebDriver certification are desirable. - Exposure to banking/financial domains, particularly Treasury applications, is a plus. - Strong communication, diplomacy, persuasion, and influencing skills. - Hands-on experience in code review, unit testing, and integration testing. - Confident, innovative, self-motivated, aggressive, and results-oriented. - Passion for automation in quality engineering. Education: - Bachelors/University degree, Masters degree preferred. 
Citi is an equal opportunity and affirmative action employer, encouraging all qualified and interested applicants to apply for career opportunities. For individuals with disabilities requiring accommodations during the application process, review Accessibility at Citi.,
Posted 1 month ago
5.0 - 7.0 years
0 Lacs
Mumbai, Maharashtra, India
On-site
Join our dynamic and high-impact Data team as a Data Engineer, where you'll be responsible for safely receiving and storing trading-related data for the India teams, as well as operating and improving our shared data access and data processing systems. This is a critical role in the organisation as the data platform drives a huge range of trader analysis, simulation, reporting and insights. The ideal candidate should have work experience in systems engineering, preferably with prior exposure to financial markets and with proven working knowledge in the fields of Linux administration, orchestration and automation tools, systems hardware architecture as well as storage and data protection technologies. Your Core Responsibilities: Manage and monitor all distributed systems, storage infrastructure, and data processing platforms, including HDFS, Kubernetes, Dremio, and in-house data pipelines Drive heavy focus on systems automation and CI/CD to enable rapid deployment of hardware and software solutions Collaborate closely with systems and network engineers, traders, and developers to support and troubleshoot their queries Stay up to date with the latest technology trends in the industry; propose, evaluate, and implement innovative solutions Your Skills and Experience: 5-7 years of experience in managing large-scale multi-petabyte data infrastructure in a similar role Advanced knowledge of Linux system administration and internals, with proven ability to troubleshoot issues in Linux environments Deep expertise in at least one of the following technologies: Kafka, Spark, Cassandra/Scylla, or HDFS Strong working knowledge of Docker, Kubernetes, and Helm Experience with data access technologies such as Dremio and Presto Familiarity with workflow orchestration tools like Airflow and Prefect Exposure to cloud platforms such as AWS, GCP, or Azure Proficiency with CI/CD pipelines and version control systems like Git Understanding of best practices in data security and compliance Demonstrated ability to solve problems proactively and creatively with a results-oriented mindset Quick learner with excellent troubleshooting skills High degree of flexibility and adaptability About Us IMC is a global trading firm powered by a cutting-edge research environment and a world-class technology backbone. Since 1989, we've been a stabilizing force in financial markets, providing essential liquidity upon which market participants depend. Across our offices in the US, Europe, Asia Pacific, and India, our talented quant researchers, engineers, traders, and business operations professionals are united by our uniquely collaborative, high-performance culture, and our commitment to giving back. From entering dynamic new markets to embracing disruptive technologies, and from developing an innovative research environment to diversifying our trading strategies, we dare to continuously innovate and collaborate to succeed.
Posted 1 month ago
0.0 years
0 Lacs
, India
On-site
Business Analyst Data Intern (LOB25-STA-06) Role: Data Business Analyst Contract: 6-month internship Experience: Less than 1 year Work location: Paris / Paris region About the assignment: The internship is part of the build-out of a large-scale information system for the collection and use of Nominative Social Declaration (DSN) data for a public-sector organization. Born of a political decision to simplify relations between companies and social-security bodies, the Nominative Social Declaration is now widely adopted, used by the majority of companies, and replaces most periodic or event-driven French social declarations. DSN data carries a great deal of business richness and a very large volume, with many uses: real-time data queries for actions such as company audits, computation of figures such as headcount and payroll, and statistical analysis. Given the richness of this data, the organization has launched a major project to rebuild its DSN collection and usage IT component on a BIG DATA architecture. Reporting to a Product Owner, you will join a team of 7 Business Analysts and work on the definition and validation of sprints and of the Data Engineers' deliveries. In this context, you will be trained and mentored on DATA solution delivery methodologies. Job description - Assigned work: Building functional expertise on DSN data in order to understand the project's stakes, the data scope, and the related use cases Learning the agile methodology (Scrum) Participating in sprint specification and validation work, with a strong focus on test automation and non-regression testing; in this respect, the intern will set up automation programs that will require some development, so the internship is aimed at a profile keen to work in a techno-functional context Participating in agile ceremonies and steering work You will benefit from all of LOBELLIA Conseil's expertise on the business side and on agile project management. This internship will give you: The architectural vision of a large-scale BIG DATA system A practical case of understanding and using large-scale data A view of how a multi-team DATA project is run in agile mode Technologies used across the various topics: Hadoop suite (HDFS, Oozie, Yarn, Spark, Hive); data access: MobaXterm, Zeppelin, MIT Kerberos, DBeaver; programming languages: HQL (SQL-like) + Python; working tools: Sharepoint, Redmine, Git, Visual Studio Code, Excel Profile sought: Final-year engineering school student or scientific Master 2. Required qualities: techno-functional appetite, writing skills, analytical mindset, rigor, sense of service, interpersonal ease.
Posted 1 month ago
2.0 - 5.0 years
4 - 7 Lacs
Pune
Work from Office
Learn more about our diversity, equity, and inclusion efforts and the networks ZS supports to assist our ZSers in cultivating community spaces, obtaining the resources they need to thrive, and sharing the messages they are passionate about. ZS's Platform Development team designs, implements, tests and supports ZS's ZAIDYN Platform, which helps drive superior customer experiences and revenue outcomes through integrated products and analytics. Whether writing distributed optimization algorithms or advanced mapping and visualization interfaces, you will have an opportunity to solve challenging problems, make an immediate impact and contribute to bringing better health outcomes. What you'll do: As part of our full-stack product engineering team, you will build multi-tenant cloud-based software products/platforms and internal assets that leverage cutting-edge technologies based on the Amazon AWS cloud platform. Pair program, write unit tests, lead code reviews, and collaborate with QA analysts to ensure you develop the highest quality multi-tenant software that can be productized. Work with junior developers to implement large features that are on the cutting edge of Big Data Be a technical leader to your team, and help them improve their technical skills Stand up for engineering practices that ensure quality products: automated testing, unit testing, agile development, continuous integration, code reviews, and technical design Work with product managers and architects to design product architecture and to work on POCs Take immediate responsibility for project deliverables Understand client business issues and design features that meet client needs Undergo on-the-job and formal trainings and certifications, and constantly advance your knowledge and problem solving skills What you'll bring: 1-3 years of experience in developing software, ideally building SaaS products and services Bachelor's Degree in CS, IT, or related discipline Strong analytic, problem solving, and programming ability Good hands-on experience with AWS services (EC2, EMR, S3, Serverless stack, RDS, Sagemaker, IAM, EKS, etc.) Experience in coding in an object-oriented language such as Python, Java, C# etc. Hands-on experience with Apache Spark, EMR, Hadoop, HDFS, or other big data technologies Experience with development on the AWS (Amazon Web Services) platform is preferable Experience in Linux shell or PowerShell scripting is preferable Experience in HTML5, JavaScript, and JavaScript libraries is preferable Good to have Pharma domain understanding Initiative and drive to contribute Excellent organizational and task management skills Strong communication skills Ability to work in global cross-office teams ZS is a global firm; fluency in English is required
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
noida, uttar pradesh
On-site
You will be responsible for leading a migration project from Oracle Cloud to MySQL database (on-premises) with a focus on SQL script and Python migration. Your expertise in on-premises solutions, specifically Linux-based standalone database, and connecting with multiple data sources will be crucial for the success of the project. Your core competencies should include proficiency in Big Data Technologies such as Hadoop (HDFS, YARN), Hive, and PySpark. You should have experience in designing end-to-end data ingestion & ETL pipelines, orchestrating and monitoring them efficiently. Additionally, your skills in cluster management, including primary/secondary cluster configuration, load balancing, and HA setup, will be essential. You should be well-versed in on-premises infrastructure, MySQL databases, and storage & management tools like HDFS and Hive Metastore. Your expertise in programming and scripting languages such as Python, PySpark, and SQL will be required for performing tasks related to performance optimization, Spark tuning, Hive query optimization, and resource management. Experience in building and deploying automated data pipelines in an on-premises environment using CI/CD practices will be advantageous. Furthermore, your knowledge of integrating with logging and monitoring tools, along with experience in security frameworks, will contribute to effective monitoring and governance of the data environment.,
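As an illustration of the ingestion-pipeline work described above, the sketch below pulls a MySQL table into a partitioned Hive table with PySpark; the host, credentials, and table names are placeholders, and the MySQL JDBC driver is assumed to be available on the Spark classpath.

```python
# Hedged sketch: ingest a MySQL table into a partitioned Hive table with PySpark.
# Hostnames, credentials, and table names are placeholders; the MySQL JDBC driver
# must be available on the Spark classpath (e.g. via --jars).
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("mysql-to-hive-ingest")
         .enableHiveSupport()
         .getOrCreate())

orders = (spark.read.format("jdbc")
          .option("url", "jdbc:mysql://mysql-host:3306/sales")   # placeholder host/database
          .option("dbtable", "orders")
          .option("user", "etl_user")
          .option("password", "********")                        # inject via a secret store in practice
          .option("partitionColumn", "order_id")                 # parallelise the JDBC read
          .option("lowerBound", "1")
          .option("upperBound", "10000000")
          .option("numPartitions", "8")
          .load())

(orders.write.mode("overwrite")
       .partitionBy("order_date")
       .saveAsTable("raw.orders"))                               # Hive table backed by HDFS
spark.stop()
```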
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
hyderabad, telangana
On-site
As a candidate with over 5 years of experience, you will be responsible for Bigdata Testing involving technologies such as Hadoop, HDFS, Hive, Kafka, Spark, SQL, and UNIX. Your expertise in these areas will be crucial in ensuring the efficiency and accuracy of our data testing processes. Your mandatory skills should include proficiency in Bigdata Testing tools such as Hadoop, HDFS, Hive, Kafka, Spark, SQL, and UNIX. Additionally, having a good understanding of these technologies will be advantageous in fulfilling your responsibilities effectively. While working at our Bangalore location, you will be required to undergo a background check process either before onboarding or after onboarding. This process will be facilitated by a designated BGV Agency to ensure compliance and security within the organization. Overall, your role will be pivotal in maintaining the quality and reliability of our data testing procedures, making your expertise in Bigdata Testing technologies essential to our team's success.,
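A typical big data testing task of the kind this role covers is reconciling a source and a target Hive table; the hedged sketch below shows one way to do it in PySpark, with illustrative table and column names.

```python
# Hedged sketch of a source-vs-target reconciliation check, a common big data
# testing task. Table and column names (trade_id, notional) are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("recon-check").enableHiveSupport().getOrCreate()

src = spark.table("staging.trades")
tgt = spark.table("curated.trades")

# 1. Row counts must match.
assert src.count() == tgt.count(), "row-count mismatch between source and target"

# 2. A key measure must sum to the same total on both sides.
src_total = src.agg(F.sum("notional")).collect()[0][0]
tgt_total = tgt.agg(F.sum("notional")).collect()[0][0]
assert src_total == tgt_total, f"notional totals differ: {src_total} vs {tgt_total}"

# 3. No source rows may be missing from the target.
missing = src.join(tgt, on="trade_id", how="left_anti")
assert missing.count() == 0, "rows present in source but absent from target"

print("reconciliation checks passed")
spark.stop()
```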
Posted 1 month ago
6.0 - 10.0 years
0 Lacs
karnataka
On-site
At EY, you'll have the chance to build a career as unique as you are, with the global scale, support, inclusive culture and technology to become the best version of you. And we're counting on your unique voice and perspective to help EY become even better, too. Join us and build an exceptional experience for yourself, and a better working world for all. EY's Advisory Services is a unique, industry-focused business unit that provides a broad range of integrated services leveraging deep industry experience with strong functional and technical capabilities and product knowledge. The financial services practice at EY offers integrated advisory services to financial institutions and other capital markets participants. Within EY's Advisory Practice, the Data and Analytics team solves big, complex issues and capitalizes on opportunities to deliver better working outcomes that help expand and safeguard businesses, now and in the future. This way, we help create a compelling business case for embedding the right analytical practice at the heart of clients" decision-making. We're looking for Senior and Manager Big Data Experts with expertise in the Financial Services domain and hands-on experience with the Big Data ecosystem. Expertise in Data engineering, including design and development of big data platforms. Deep understanding of modern data processing technology stacks such as Spark, HBase, and other Hadoop ecosystem technologies. Development using SCALA is a plus. Deep understanding of streaming data architectures and technologies for real-time and low-latency data processing. Experience with agile development methods, including core values, guiding principles, and key agile practices. Understanding of the theory and application of Continuous Integration/Delivery. Experience with NoSQL technologies and a passion for software craftsmanship. Experience in the Financial industry is a plus. Nice to have skills include understanding and familiarity with all Hadoop Ecosystem components, Hadoop Administrative Fundamentals, experience working with NoSQL in data stores like HBase, Cassandra, MongoDB, HDFS, Hive, Impala, schedulers like Airflow, Nifi, experience in Hadoop clustering, and Auto scaling. Developing standardized practices for delivering new products and capabilities using Big Data technologies, including data acquisition, transformation, and analysis. Defining and developing client-specific best practices around data management within a Hadoop environment on Azure cloud. To qualify for the role, you must have a BE/BTech/MCA/MBA degree, a minimum of 3 years hands-on experience in one or more relevant areas, and a total of 6-10 years of industry experience. Ideally, you'll also have experience in Banking and Capital Markets domains. Skills and attributes for success include using an issue-based approach to deliver growth, market and portfolio strategy engagements for corporates, strong communication, presentation and team building skills, experience in producing high-quality reports, papers, and presentations, and experience in executing and managing research and analysis of companies and markets, preferably from a commercial due diligence standpoint. 
You will be part of a team of people with commercial acumen, technical experience, and enthusiasm to learn new things in this fast-moving environment, with the opportunity to be part of a market-leading, multi-disciplinary team of 1400+ professionals in the only integrated global transaction business worldwide, and opportunities to work with EY Advisory practices globally with leading businesses across a range of industries. Working at EY offers inspiring and meaningful projects, education and coaching alongside practical experience for personal development, support, coaching, and feedback from engaging colleagues, opportunities to develop new skills and progress your career, freedom and flexibility to handle your role in a way that's right for you. EY exists to build a better working world, helping to create long-term value for clients, people, and society and build trust in the capital markets. Enabled by data and technology, diverse EY teams in over 150 countries provide trust through assurance and help clients grow, transform, and operate. Working across assurance, consulting, law, strategy, tax, and transactions, EY teams ask better questions to find new answers for the complex issues facing our world today.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
kochi, kerala
On-site
You will be responsible for big data development and support for production deployed applications, analyzing business and functional requirements for completeness, and developing code with minimum supervision. Working collaboratively with team members, you will ensure accurate and timely communication and delivery of assigned tasks to guarantee the end-products" performance upon release to production. Handling software defects or issues within production timelines and SLA is a key aspect of the role. Your responsibilities will include authoring test cases within a defined testing strategy, participating in test strategy development for Configuration and Custom reports, creating test data, assisting in code merge peer reviews, reporting status and progress to stakeholders, and providing risk assessment throughout development cycles. You should have a strong understanding of system and big data strategies/approaches adopted by IQVIA, stay updated on software applications development industry knowledge, and be open to production support roles within the project. To excel in this role, you should have 5-8 years of overall experience, with at least 2-3 years in Big Data, proficiency in Big Data Technologies such as HDFS, Hive, Pig, Sqoop, HBase, and Oozie, strong experience in SQL Queries and Airflow, familiarity with PSql, CI-CD, Jenkins, and UNIX commands, excellent communication skills, comprehensive skills, good confidence level, proven analytical, logical, and problem-solving techniques. Experience in Spark Application Development, ETL, and ELT tools is preferred. Possessing fine-tuned analytical skills, attention to detail, and the ability to work effectively with colleagues from diverse backgrounds is essential. The minimum educational requirement for this position is a Bachelor's Degree in Information Technology or a related field, along with 5-8 years of development experience or an equivalent combination of education, training, and experience. IQVIA is a leading global provider of clinical research services, commercial insights, and healthcare intelligence, facilitating the acceleration of innovative medical treatments" development and commercialization to enhance patient outcomes and population health worldwide. To learn more, visit https://jobs.iqvia.com.,
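To illustrate the "SQL Queries and Airflow" experience the posting asks for, here is a minimal Airflow DAG sketch that chains a Hive load and a validation step; the DAG id, schedule, and HQL file paths are hypothetical, and the Hive provider package with a configured connection is assumed.

```python
# Hedged sketch of a small Airflow DAG chaining a Hive load and a validation query.
# The DAG id, schedule, and HQL file paths are hypothetical; requires the
# apache-airflow-providers-apache-hive package and a configured Hive connection.
from datetime import datetime

from airflow import DAG
from airflow.providers.apache.hive.operators.hive import HiveOperator

with DAG(
    dag_id="daily_claims_load",
    start_date=datetime(2024, 1, 1),
    schedule_interval="0 2 * * *",   # run daily at 02:00
    catchup=False,
) as dag:
    load_claims = HiveOperator(task_id="load_claims", hql="sql/load_claims.hql")
    validate_claims = HiveOperator(task_id="validate_claims", hql="sql/validate_claims.hql")

    load_claims >> validate_claims   # validation runs only after the load succeeds
```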
Posted 1 month ago
5.0 - 10.0 years
4 - 5 Lacs
Pune
Work from Office
Senior Data Engineer Location: Pune Exp: 5+ Years We're a technology solutions provider dedicated to delivering innovative digital products and services. Our team of creative thinkers, tech enthusiasts, and strategic experts is transforming businesses and enhancing user experiences with cutting-edge technology. We're passionate about enabling our partners' success, and we invite you to be part of our exciting journey! Responsibilities: Be part of a cross-functional Scrum team. Collaborate closely with other R&D functions. Contribute to new feature development. Provide input on system behaviour to Product Owners (POs) and developers. Support customers and internal teams. Analyse and solve product issues. Must-Have Skills: Minimum 5+ years of experience as a Data Engineer. Strong hands-on experience with Scala. Experience with AWS/Azure cloud services related to the data pipeline: EMR, S3, Redshift, Document DB/MongoDB, Spark Streaming, Spark, HDFS.
Posted 1 month ago
3.0 - 8.0 years
0 Lacs
pune, maharashtra
On-site
You should have strong experience in PySpark, Python, Unix scripting, SparkSQL, and Hive. You must be proficient in writing SQL queries, creating views, and possess excellent oral and written communication skills. Prior experience in the Insurance domain would be beneficial. A good understanding of the Hadoop Ecosystem including HDFS, MapReduce, Pig, Hive, Oozie, and Yarn is required. Knowledge of AWS services such as Glue, AWS S3, Lambda functions, Step Functions, and EC2 is essential. Experience in data migration from platforms like Hive/S3 to Databricks is a plus. You should be able to prioritize, plan, organize, and manage multiple tasks efficiently while delivering high-quality work. As a candidate, you should have 6-8 years of technical experience in PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), with at least 3 years of experience in Big Data/ETL using Python, Spark, and Hive, along with 3+ years of experience in AWS. Your primary key skills should include PySpark, AWS (Glue, EMR, Lambda, Step Functions, S3), and Big Data with Python, Spark, and Hive experience. Exposure to Big Data migration is also important. Secondary key skills that would be beneficial for this role include Informatica BDM/PowerCenter, Databricks, and MongoDB.
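For the Hive/S3-to-Databricks migration experience mentioned above, a minimal PySpark sketch might look like the following; bucket paths and table names are placeholders, and a Delta-capable runtime (such as Databricks) is assumed.

```python
# Hedged sketch of a Hive/S3-to-Databricks migration step: read an existing
# Parquet dataset from S3 and rewrite it as a managed Delta table.
# Bucket, path, and table names are placeholders; a Delta-capable runtime is assumed.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("hive-s3-to-delta").getOrCreate()

claims = spark.read.parquet("s3://legacy-datalake/claims/")      # existing Hive/S3 data

(claims.write.format("delta")
       .mode("overwrite")
       .partitionBy("claim_year")
       .saveAsTable("insurance.claims"))                          # Delta table in the workspace catalog

# Basic post-migration sanity check: target row count should match the source.
assert spark.table("insurance.claims").count() == claims.count()
spark.stop()
```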
Posted 1 month ago
10.0 - 14.0 years
0 Lacs
karnataka
On-site
The ideal candidate for this position should possess strong expertise in programming/scripting languages and a proven ability to debug challenges across various Operating Systems. A certification in the relevant specialization is required along with proficiency in using design and automation tools. In addition, the candidate should have excellent knowledge of CI and agile frameworks. Moreover, the successful candidate must demonstrate strong communication, negotiation, networking, and influencing skills. Stakeholder management and conflict management skills are also essential for this role. The candidate should be proficient in setting up tools/infrastructure, defect metrics, and traceability metrics. A solid understanding of CI practices and agile frameworks is necessary. Furthermore, the candidate should be able to promote a strategic mindset to ensure the use of the right tools and coach and mentor the team to follow best practices. Expertise in Big Data and Hadoop ecosystems is required, along with the ability to build real-time stream-processing systems on large-scale data. Proficiency in data ingestion frameworks/data sources and data structures is also crucial for this role. The profile required for this position includes 10+ years of expertise and hands-on experience in Spark with Scala and Big data technologies. The candidate should have good working experience in Scala and object-oriented concepts, as well as in HDFS, Spark, Hive, and Oozie. Technical expertise with data models, data mining, and partitioning techniques is also necessary. Additionally, hands-on experience with SQL databases and a good understanding of CI/CD tools such as Maven, Git, Jenkins, and SONAR are required. Knowledge of Kafka and ELK stack is a plus, and familiarity with data visualization tools like PowerBI will be an added advantage. Strong communication and coordination skills with multiple stakeholders are essential, along with the ability to assess existing situations, propose improvements, and follow up on action plans. In conclusion, the ideal candidate should have a professional attitude, be self-motivated, a fast learner, and a team player. The ability to work in international/intercultural environments and interact with onsite stakeholders is crucial for this role. If you are looking to be directly involved, grow in a stimulating and caring environment, feel useful on a daily basis, and develop or strengthen your expertise, you will find a perfect fit in this position.
Posted 1 month ago
6.0 - 12.0 years
6 - 9 Lacs
Bengaluru
Work from Office
Skill: Hadoop Admin Grade: C2/C1 Location: Pune/Chennai/Bangalore NP: Immediate to 15 Days Joiners Only Execute weekly server rebuilds (21-30 nodes) with zero data loss and minimal performance impact Perform Hadoop-level pre/post validations: cluster health, HDFS usage, replication, skew, and logs Coordinate with Data Center Ops and Unix Admins for hardware and OS-level tasks Reconfigure and reintegrate rebuilt nodes into the cluster Provide weekday and rotational weekend support across BDH1 and BDH4 clusters Required Skills: Strong hands-on experience with Hadoop ecosystem (HDFS, YARN, MapReduce, HBase) Proficient in log analysis, volume/block checks, and skew troubleshooting Familiarity with open-source Hadoop distributions and production change controls Excellent communication and cross-team coordination skills Ability to work independently in a fast-paced, complex environment
Posted 1 month ago
4.0 - 9.0 years
10 - 20 Lacs
Coimbatore
Work from Office
Position Name: Data Engineer Location: Coimbatore (Hybrid 3 days per week) Work Shift Timing: 1.30 pm to 10.30 pm (IST) Mandatory Skills: SCALA, Spark, Python, Data bricks Good to have: Java & Hadoop The Role: Designing and building optimized data pipelines using cutting-edge technologies in a cloud environment to drive analytical insights. Constructing infrastructure for efficient ETL processes from various sources and storage systems. Leading the implementation of algorithms and prototypes to transform raw data into useful information. Architecting, designing, and maintaining database pipeline architectures, ensuring readiness for AI/ML transformations. Creating innovative data validation methods and data analysis tools. Ensuring compliance with data governance and security policies. Interpreting data trends and patterns to establish operational alerts. Developing analytical tools, programs, and reporting mechanisms. Conducting complex data analysis and presenting results effectively. Preparing data for prescriptive and predictive modeling. Continuously exploring opportunities to enhance data quality and reliability. Applying strong programming and problem-solving skills to develop scalable solutions. Requirements: Experience in the Big Data technologies (Hadoop, Spark, Nifi, Impala). Hands-on experience designing, building, deploying, testing, maintaining, monitoring, and owning scalable, resilient, and distributed data pipelines. High proficiency in Scala/Java and Spark for applied large-scale data processing. Expertise with big data technologies, including Spark, Data Lake, and Hive. Solid understanding of batch and streaming data processing techniques. Proficient knowledge of the Data Lifecycle Management process, including data collection, access, use, storage, transfer, and deletion. Expert-level ability to write complex, optimized SQL queries across extensive data volumes. Experience on HDFS, Nifi, Kafka. Experience on Apache Ozone, Delta Tables, Databricks, Axon(Kafka), Spring Batch, Oracle DB Familiarity with Agile methodologies. Obsession for service observability, instrumentation, monitoring, and alerting. Knowledge or experience in architectural best practices for building data lakes. Interested candidates share your resume at Neesha1@damcogroup.com along with the below mentioned details : Total Exp : Relevant Exp in Scala & Spark : Current CTC: Expected CTC: Notice period : Current Location:
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
As a Senior Engineer, VP at our Pune location in India, you will be responsible for managing and performing work across various areas of the bank's IT Platform/Infrastructure. Your role will involve analysis, development, and administration, with possible oversight of engineering delivery for specific departments. Your day-to-day tasks will include planning and developing engineering solutions to achieve business goals, ensuring reliability and resiliency in solutions, and promoting maintainability and reusability. You will play a key role in architecting well-integrated solutions and reviewing engineering plans to enhance capability and reusability. You will collaborate with a cross-functional agile delivery team, bringing an innovative approach to software development using the latest technologies and practices to deliver business value efficiently. Your focus will be on fostering a collaborative environment, open code sharing, and supporting all stages of software delivery from analysis to production support. In this role, you will enjoy benefits such as a best-in-class leave policy, gender-neutral parental leaves, sponsorship for industry certifications, employee assistance programs, comprehensive insurance coverage, and health screening. You will be expected to lead engineering efforts, champion best practices, collaborate with stakeholders to achieve business outcomes, and acquire functional knowledge of the business capabilities being digitized. Key Skills required: - GCP Services: Composer, BigQuery, DataProc, GCP Cloud Architecture, etc. - Big Data Hadoop: Hive, HQL, HDFS - Programming: Python, PySpark, SQL Query writing - Scheduler: Control-M or any other scheduler - Experience in Database engines (e.g., SQL Server, Oracle), ETL Pipeline development, Tableau, Looker, and performance tuning - Proficiency in architecture design, technical documentation, and mapping business requirements with technology Desired Skills: - Understanding of Workflow automation and Agile methodology - Terraform Coding and experience in Project Management - Prior experience in Banking/Finance domain and hybrid cloud solutions, preferably using GCP - Product development experience Join us to excel in your career with training, coaching, and continuous learning opportunities. Our culture promotes responsibility, commercial thinking, initiative, and collaboration. We value a positive, fair, and inclusive work environment where we celebrate the successes of our people. Embrace the empowering culture at Deutsche Bank Group and be part of our success together. For more information about our company and teams, please visit our website at https://www.db.com/company/company.htm.,
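As a rough illustration of how the GCP and Hadoop skills listed above come together, the sketch below aggregates a Hive table in PySpark and writes the result to BigQuery via the spark-bigquery connector; the project, dataset, table, and bucket names are placeholders and the connector is assumed to be available on the cluster.

```python
# Hedged sketch: aggregate a Hive table with PySpark (e.g. on Dataproc) and write
# the result to BigQuery through the spark-bigquery connector, which is assumed to
# be on the cluster. Project, dataset, table, and bucket names are placeholders.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("hive-to-bigquery")
         .enableHiveSupport()
         .getOrCreate())

daily = (spark.table("risk.positions")
              .groupBy("book", "cob_date")
              .agg(F.sum("exposure").alias("total_exposure")))

(daily.write.format("bigquery")
      .option("table", "my-project.risk_mart.daily_exposure")
      .option("temporaryGcsBucket", "my-staging-bucket")   # connector stages data in GCS
      .mode("overwrite")
      .save())
spark.stop()
```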
Posted 1 month ago
5.0 - 8.0 years
27 - 42 Lacs
Bengaluru
Work from Office
Job Summary As a Software Engineer at NetApp India’s R&D division, you will be responsible for the design, development and validation of software for Big Data Engineering across both cloud and on-premises environments. You will be part of a highly skilled technical team named NetApp Active IQ. The Active IQ DataHub platform processes over 10 trillion data points per month that feed a multi-petabyte DataLake. The platform is built using Kafka, a serverless platform running on Kubernetes, Spark and various NoSQL databases. This platform enables the use of advanced AI and ML techniques to uncover opportunities to proactively protect and optimize NetApp storage, and then provides the insights and actions to make it happen. We call this “actionable intelligence”. Job Requirements Design and build our Big Data Platform, and understand scale, performance and fault-tolerance • Interact with Active IQ engineering teams across geographies to leverage expertise and contribute to the tech community. • Identify the right tools to deliver product features by performing research, POCs and interacting with various open-source forums • Work on technologies related to NoSQL, SQL and in-memory databases • Conduct code reviews to ensure code quality, consistency and best practices adherence. Technical Skills • Big Data hands-on development experience is required. • Demonstrate up-to-date expertise in Data Engineering, complex data pipeline development. • Design, develop, implement and tune distributed data processing pipelines that process large volumes of data; focusing on scalability, low-latency, and fault-tolerance in every system built. • Awareness of Data Governance (Data Quality, Metadata Management, Security, etc.) • Experience with one or more of Python/Java/Scala. • Knowledge and experience with Kafka, Storm, Druid, Cassandra or Presto is an added advantage. Education • A minimum of 5 years of experience is required. 5-8 years of experience is preferred. • A Bachelor of Science Degree in Electrical Engineering or Computer Science, or a Master's Degree; or equivalent experience is required.
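A Kafka-fed ingestion stage of the kind the Active IQ platform description implies could be sketched as follows; the broker addresses, topic, message schema, and output paths are illustrative assumptions, not details from the posting, and the spark-sql-kafka package is assumed to be on the classpath.

```python
# Hedged sketch of a Kafka-fed Spark Structured Streaming stage. Broker addresses,
# topic name, message schema, and output paths are illustrative assumptions.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import DoubleType, StringType, StructType, TimestampType

schema = (StructType()
          .add("system_id", StringType())
          .add("metric", StringType())
          .add("value", DoubleType())
          .add("ts", TimestampType()))

spark = SparkSession.builder.appName("telemetry-stream").getOrCreate()

raw = (spark.readStream.format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092,broker2:9092")  # placeholder brokers
       .option("subscribe", "device-telemetry")                          # placeholder topic
       .option("startingOffsets", "latest")
       .load())

# Kafka delivers bytes; decode the value column and apply the expected schema.
parsed = (raw.select(F.from_json(F.col("value").cast("string"), schema).alias("m"))
             .select("m.*"))

query = (parsed.writeStream
         .format("parquet")
         .option("path", "/data/telemetry/parquet")
         .option("checkpointLocation", "/data/telemetry/_checkpoints")
         .trigger(processingTime="1 minute")
         .start())
query.awaitTermination()
```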
Posted 1 month ago
5.0 - 9.0 years
0 Lacs
pune, maharashtra
On-site
The Engineer Intmd Analyst is an intermediate-level position responsible for a variety of engineering activities including the design, acquisition and development of hardware, software and network infrastructure in coordination with the Technology team. The overall objective of this role is to ensure quality standards are being met within existing and planned frameworks. Responsibilities: Provide assistance with a product or product component development within the technology domain. Conduct product evaluations with vendors and recommend product customization for integration with systems. Assist with training activities, mentor junior team members and ensure the team's adherence to all control and compliance initiatives. Assist with application prototyping and recommend solutions around implementation. Provide third line support to identify the root cause of issues and react to systems and application outages or networking issues. Support projects and provide project status updates to project manager or Sr. Engineer. Partner with development teams to identify engineering requirements and assist with defining application/system requirements and processes. Create installation documentation, training materials, and deliver technical training to support the organization. Appropriately assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients and assets, by driving compliance with applicable laws, rules and regulations, adhering to Policy, applying sound ethical judgment regarding personal behavior, conduct and business practices, and escalating, managing and reporting control issues with transparency. Qualifications: 5-8 years of relevant experience in an Engineering role. Experience working in Financial Services or a large complex and/or global environment. Involvement in DevOps activities (SRE/LSE Auto Deployment/Self Healing) and Application Support. Tech Stack: Basic - Java/Python, Unix, Oracle. Essential Skills: IT experience working in one of HBase, HDFS, Kafka, Neo4J, Akka, Spark, Storm and GemFire. IT Support experience working in Unix, Cloud & Windows environments. Experience supporting RDBMS databases like MongoDB, ORACLE, Sybase, MS SQL & DB2. Supported Applications deployed in WebSphere, WebLogic, IIS and Tomcat. Familiar with Autosys and its setup. Understanding of client server architecture (clustered and non-clustered). Basic Networking knowledge (Load balancers, Network Protocols). Working knowledge of Lightweight Directory Access Protocol (LDAP) and Single Sign-On concepts. ServiceNow expertise. Experience working in a Multiple Application Support Model is preferred. Consistently demonstrates clear and concise written and verbal communication. Comprehensive knowledge of design metrics, analytics tools, benchmarking activities and related reporting to identify best practices. Demonstrated analytic/diagnostic skills. Ability to work in a matrix environment and partner with virtual teams. Ability to work independently, prioritize, and take ownership of various parts of a project or initiative. Ability to work under pressure and manage to tight deadlines or unexpected changes in expectations or requirements. Proven track record of operational process change and improvement. Education: Bachelor's degree/University degree or equivalent experience.
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
pune, maharashtra
On-site
At Improzo, we are dedicated to improving life by empowering our customers through quality-led commercial analytical solutions. Our team of experts in commercial data, technology, and operations collaborates to shape the future and work with leading Life Sciences clients. We prioritize customer success and outcomes, embrace agility and innovation, foster respect and collaboration, and are laser-focused on quality-led execution. As a Data and Reporting Developer (Improzo Level - Associate) at Improzo, you will play a crucial role in designing, developing, and maintaining large-scale data processing systems using big data technologies. You will collaborate with data architects and stakeholders to implement data storage solutions, develop ETL pipelines, integrate various data sources, design and build reports, optimize performance, and ensure seamless data flow. Key Responsibilities: - Design, develop, and maintain scalable data pipelines and big data applications using distributed processing frameworks. - Collaborate on data architecture, storage solutions, ETL pipelines, data lakes, and data warehousing. - Integrate data sources into the big data ecosystem while maintaining data quality. - Design and build reports using tools like Power BI, Tableau, and Microstrategy. - Optimize workflows and queries for high performance and scalability. - Collaborate with cross-functional teams to deliver data solutions that meet business requirements. - Perform testing, quality assurance, and documentation of data pipelines. - Participate in agile development processes and stay up-to-date with big data technologies. Qualifications: - Bachelor's or master's degree in a quantitative field. - 1.5+ years of experience in data management or reporting projects with big data technologies. - Hands-on experience or thorough training in AWS, Azure, GCP, Databricks, and Spark. - Experience in Pharma Commercial setting or Pharma data management is advantageous. - Proficiency in Python, SQL, MDM, Tableau, PowerBI, and other tools. - Excellent communication, presentation, and interpersonal skills. - Attention to detail, quality, and client centricity. - Ability to work independently and as part of a cross-functional team. Benefits: - Competitive salary and benefits package. - Opportunity to work on cutting-edge tech projects in the life sciences industry. - Collaborative and supportive work environment. - Opportunities for professional development and growth.,
Posted 1 month ago
3.0 - 5.0 years
3 - 10 Lacs
Bengaluru, Karnataka, India
On-site
Key Responsibilities: A day in the life of an Infoscion: As part of the Infosys delivery team, your primary role would be to ensure effective Design, Development, Validation, and Support activities to assure that our clients are satisfied with the high levels of service in the technology domain. You will gather the requirements and specifications to understand the client requirements in detail and translate them into system requirements. You will play a key role in the overall estimation of work requirements to provide the right information on project estimations to Technology Leads and Project Managers. You would be a key contributor to building efficient programs and systems. If you think you fit right in to help our clients navigate their next in their digital transformation journey, this is the place for you. Technical Requirements: Minimum 3 years of experience in Hadoop Administration working in production support projects. Cloudera Certified Hadoop Administrator certification is a must. Experience in installing, configuring, maintaining, troubleshooting, and monitoring Hadoop clusters and the following components: HDFS, HBase, Hive, Sentry, Hue, Yarn, Sqoop, Spark, Oozie, ZooKeeper, Flume, Solr. Experience in installing, configuring, maintaining, troubleshooting, and monitoring the following analytical tools and integrating them with Hadoop: Datameer, Paxata, DataRobot, H2O, MRS, Python, R Studio, SAS, Dataiku, BlueData. Very good at job-level troubleshooting of Yarn, Impala, and other components. Must have experience and strong knowledge of Unix/Linux scripting. Must have experience and knowledge of the following tools: Talend, MySQL Galera, Pepperdata, Autowatch, NetBackup, Solix, UDeploy, RLM. Troubleshoot development and production application problems across multiple environments and operating platforms. Additional Responsibilities: Knowledge of design principles and fundamentals of architecture; understanding of performance engineering; knowledge of quality processes and estimation techniques; basic understanding of the project domain; ability to translate functional/nonfunctional requirements to system requirements; ability to design and code complex programs; ability to write test cases and scenarios based on the specifications; good understanding of SDLC and agile methodologies; awareness of the latest technologies and trends; logical thinking and problem-solving skills along with an ability to collaborate. Preferred Skills: Technology->Big Data - Hadoop->Hadoop, Technology->Big Data - Hadoop->Hadoop Administration->Hadoop
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
kochi, kerala
On-site
The ideal candidate ready to join immediately can share their details via email for quick processing at nitin.patil@ust.com. Act swiftly for immediate attention! With over 5 years of experience, the successful candidate will have the following roles and responsibilities: - Designing, developing, and maintaining scalable data pipelines using Spark (PySpark or Spark with Scala). - Constructing data ingestion and transformation frameworks for both structured and unstructured data sources. - Collaborating with data analysts, data scientists, and business stakeholders to comprehend requirements and deliver reliable data solutions. - Handling large volumes of data while ensuring quality, integrity, and consistency. - Optimizing data workflows for enhanced performance, scalability, and cost efficiency on cloud platforms such as AWS, Azure, or GCP. - Implementing data quality checks and automation for ETL/ELT pipelines. - Monitoring and troubleshooting data issues in production environments and conducting root cause analysis. - Documenting technical processes, system designs, and operational procedures. Key Skills Required: - Minimum 3 years of experience as a Data Engineer or in a similar role. - Proficiency with PySpark or Spark using Scala. - Strong grasp of SQL for data querying and transformation purposes. - Previous experience working with any cloud platform (AWS, Azure, or GCP). - Sound understanding of data warehousing concepts and big data architecture. - Familiarity with version control systems like Git. Desired Skills: - Exposure to data orchestration tools such as Apache Airflow, Databricks Workflows, or equivalent. - Knowledge of Delta Lake, HDFS, or Kafka. - Familiarity with containerization tools like Docker/Kubernetes. - Experience with CI/CD practices and familiarity with DevOps principles. - Understanding of data governance, security, and compliance standards.,
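For the data-quality checks mentioned in the responsibilities above, a minimal PySpark sketch might look like this; the column names, input path, and pass/fail rules are illustrative assumptions.

```python
# Hedged sketch of automated data-quality checks in a PySpark pipeline.
# Column names, the input path, and the pass/fail rules are illustrative.
from pyspark.sql import DataFrame, SparkSession
from pyspark.sql import functions as F


def run_quality_checks(df: DataFrame) -> dict:
    """Return a dict of rule name -> number of offending rows (0 means the rule passed)."""
    total = df.count()
    return {
        "null_customer_ids": df.filter(F.col("customer_id").isNull()).count(),
        "duplicate_keys": total - df.dropDuplicates(["customer_id", "event_date"]).count(),
        "negative_amounts": df.filter(F.col("amount") < 0).count(),
    }


if __name__ == "__main__":
    spark = SparkSession.builder.appName("dq-checks").getOrCreate()
    events = spark.read.parquet("/lake/curated/events")          # placeholder input
    failures = {rule: n for rule, n in run_quality_checks(events).items() if n > 0}
    if failures:
        raise SystemExit(f"Data quality checks failed: {failures}")
    print("all data quality checks passed")
    spark.stop()
```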
Posted 1 month ago
8.0 - 11.0 years
35 - 37 Lacs
Kolkata, Ahmedabad, Bengaluru
Work from Office
Dear Candidate, We are looking for a Big Data Developer to build and maintain scalable data processing systems. The ideal candidate will have experience handling large datasets and working with distributed computing frameworks. Key Responsibilities: Design and develop data pipelines using Hadoop, Spark, or Flink. Optimize big data applications for performance and reliability. Integrate various structured and unstructured data sources. Work with data scientists and analysts to prepare datasets. Ensure data quality, security, and lineage across platforms. Required Skills & Qualifications: Experience with Hadoop ecosystem (HDFS, Hive, Pig) and Apache Spark. Proficiency in Java, Scala, or Python. Familiarity with data ingestion tools (Kafka, Sqoop, NiFi). Strong understanding of distributed computing principles. Knowledge of cloud-based big data services (e.g., EMR, Dataproc, HDInsight). Note: If interested, please share your updated resume and preferred time for a discussion. If shortlisted, our HR team will contact you. Kandi Srinivasa Delivery Manager Integra Technologies
Posted 1 month ago
2.0 - 6.0 years
0 Lacs
indore, madhya pradesh
On-site
At ClearTrail, you will be part of a team dedicated to developing solutions that empower those focused on ensuring the safety of individuals, locations, and communities. For over 23 years, ClearTrail has been a trusted partner of law enforcement and federal agencies worldwide, committed to safeguarding nations and enhancing lives. We are leading the way in the future of intelligence gathering through the creation of innovative artificial intelligence and machine learning-based lawful interception and communication analytics solutions aimed at addressing the world's most complex challenges. We are currently looking for a Big Data Java Developer to join our team in Indore with 2-4 years of experience. As a Big Data Java Developer at ClearTrail, your responsibilities will include: - Designing and developing high-performance, scalable applications using Java and big data technologies. - Building and maintaining efficient data pipelines for processing large volumes of structured and unstructured data. - Developing microservices, APIs, and distributed systems. - Experience working with Spark, HDFS, Ceph, Solr/Elasticsearch, Kafka, and Delta Lake. - Mentoring and guiding junior team members. If you are a problem-solver with strong analytical skills, excellent verbal and written communication abilities, and a passion for developing cutting-edge solutions, we invite you to join our team at ClearTrail and be part of our mission to make the world a safer place.,
Posted 1 month ago
3.0 - 7.0 years
0 Lacs
chennai, tamil nadu
On-site
As an experienced professional with 3-5 years in the field, you will be responsible for handling various technical tasks related to Azure Data Factory, Talend/SSIS, MSSQL, Azure, and MySQL. Your expertise in Azure Data Factory will be crucial in this role. Your primary responsibilities will include demonstrating advanced knowledge of Azure SQL DB & Synapse Analytics, Power BI, SSIS, SSRS, T-SQL, and Logic Apps. Your ability to analyze and comprehend complex data sets will play a key role in your daily tasks. Proficiency in Azure Data Lake and other Azure services such as Analysis Service, SQL Databases, Azure DevOps, and CI/CD will be essential for success in this role. Additionally, a solid understanding of master data management, data warehousing, and business intelligence architecture will be required. You will be expected to have experience in data modeling and database design, with a strong grasp of SQL Server best practices. Effective communication skills, both verbal and written, will be necessary for interacting with stakeholders at all levels. A clear understanding of the data warehouse lifecycle will be beneficial, as you will be involved in preparing design documents, unit test plans, and code review reports. Experience working in an Agile environment, particularly with methodologies like Scrum, Lean, or Kanban, will be advantageous. Knowledge of big data technologies such as the Spark Framework, NoSQL, Azure Databricks, and the Hadoop Ecosystem (Hive, Impala, HDFS) would be a valuable asset in this role.
Posted 1 month ago
7.0 - 12.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Job Description: Spark, Java. Strong SQL writing skills, data discovery, data profiling, data exploration, data wrangling skills. Kafka, AWS S3, Lake Formation, Athena, Glue, Autosys or similar tools, FastAPI (secondary). Strong SQL skills to support data analysis and embedded business logic in SQL, data profiling and gap assessment. Collaborate with development and business SMEs within technology to understand data requirements, perform data analysis to support and validate business logic, data integrity and data quality rules within a centralized data platform. Experience working within the banking/financial services industry with a solid understanding of financial products and business processes
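A quick data-profiling pass of the kind this role calls for (null and distinct counts per column) could be sketched in PySpark as follows; the S3 path is a placeholder.

```python
# Hedged sketch of a quick data-profiling pass: per-column null and distinct counts.
# The S3 location is a placeholder.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("profile-dataset").getOrCreate()
df = spark.read.parquet("s3://finance-lake/positions/")   # placeholder location

exprs = []
for c in df.columns:
    exprs.append(F.count(F.when(F.col(c).isNull(), 1)).alias(f"{c}_nulls"))
    exprs.append(F.countDistinct(c).alias(f"{c}_distinct"))

df.agg(*exprs).show(truncate=False, vertical=True)
spark.stop()
```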
Posted 1 month ago
7.0 - 10.0 years
8 - 12 Lacs
Bengaluru
Work from Office
Education Qualification : BE/B Tech Requirement : Immediate or Max 15 days Job Description : Big Data Developer (Hadoop/Spark/Kafka) - This role is ideal for an experienced Big Data developer who is confident in taking complete ownership of the software development life cycle - from requirement gathering to final deployment. - The candidate will be responsible for engaging with stakeholders to understand the use cases, translating them into functional and technical specifications (FSD & TSD), and implementing scalable, efficient big data solutions. - A key part of this role involves working across multiple projects, coordinating with QA/support engineers for test case preparation, and ensuring deliverables meet high-quality standards. - Strong analytical skills are necessary for writing and validating SQL queries, along with developing optimized code for data processing workflows. - The ideal candidate should also be capable of writing unit tests and maintaining documentation to ensure code quality and maintainability. - The role requires hands-on experience with the Hadoop ecosystem, particularly Spark (including Spark Streaming), Hive, Kafka, and Shell scripting. - Experience with workflow schedulers like Airflow is a plus, and working knowledge of cloud platforms (AWS, Azure, GCP) is beneficial. - Familiarity with Agile methodologies will help in collaborating effectively in a fast-paced team environment. - Job scheduling and automation via shell scripts, and the ability to optimize performance and resource usage in a distributed system, are critical. - Prior experience in performance tuning and writing production-grade code will be valued. - The candidate must demonstrate strong communication skills to effectively coordinate with business users, developers, and testers, and to manage dependencies across teams. Key Skills Required : Must Have : - Hadoop, Spark (core & streaming), Hive, Kafka, Shell Scripting, SQL, TSD/FSD documentation. Good to Have : - Airflow, Scala, Cloud (AWS/Azure/GCP), Agile methodology.
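As an example of the performance tuning this role mentions, the sketch below broadcasts a small dimension table into a join against a large fact table and sets the shuffle partition count explicitly; the table names and settings are illustrative, not taken from the posting.

```python
# Hedged sketch of a common Spark performance-tuning pattern: broadcast the small
# dimension table into the join and size shuffle partitions explicitly.
# Table names and the partition count are illustrative.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (SparkSession.builder
         .appName("tuned-join")
         .config("spark.sql.shuffle.partitions", "200")   # tune to cluster size and data volume
         .enableHiveSupport()
         .getOrCreate())

sales = spark.table("dw.sales_fact")    # large fact table
stores = spark.table("dw.store_dim")    # small lookup table

# Broadcasting the small side avoids shuffling the large fact table for the join.
joined = sales.join(F.broadcast(stores), on="store_id", how="left")

(joined.groupBy("region")
       .agg(F.sum("amount").alias("revenue"))
       .write.mode("overwrite")
       .saveAsTable("dw.revenue_by_region"))
spark.stop()
```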
Posted 1 month ago