6.0 years
0 Lacs
Pune, Maharashtra, India
On-site
At Capgemini Engineering, the world leader in engineering services, we bring together a global team of engineers, scientists, and architects to help the world's most innovative companies unleash their potential. From autonomous cars to life-saving robots, our digital and software technology experts think outside the box as they provide unique R&D and engineering services across all industries. Join us for a career full of opportunities, where you can make a difference and where no two days are the same. Your Role As a senior software engineer with Capgemini, you will have 6+ years of experience in Azure technology with a strong project track record. In this role you will bring: Strong customer orientation, decision making, problem solving, communication and presentation skills Very good judgement skills and the ability to shape compelling solutions and solve unstructured problems with assumptions Very good collaboration skills and the ability to interact with multi-cultural and multi-functional teams spread across geographies Strong executive presence and entrepreneurial spirit Superb leadership and team-building skills, with the ability to build consensus and achieve goals through collaboration rather than direct line authority Your Profile Experience with Azure Databricks and Data Factory Experience with Azure data components such as Azure SQL Database, Azure SQL Warehouse and Synapse Analytics Experience in Python/PySpark/Scala/Hive programming Experience with Azure Databricks/ADB is a must-have Experience with building CI/CD pipelines in data environments Capgemini is a global business and technology transformation partner, helping organizations to accelerate their dual transition to a digital and sustainable world, while creating tangible impact for enterprises and society. It is a responsible and diverse group of 340,000 team members in more than 50 countries. With its strong over 55-year heritage, Capgemini is trusted by its clients to unlock the value of technology to address the entire breadth of their business needs. It delivers end-to-end services and solutions leveraging strengths from strategy and design to engineering, all fueled by its market-leading capabilities in AI, generative AI, cloud and data, combined with its deep industry expertise and partner ecosystem.
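To make the kind of work this role describes concrete, here is a minimal, hypothetical PySpark sketch of a Databricks-style curation job: it reads an Azure SQL table over JDBC and writes a daily aggregate as a Delta table. All connection details, table names, and columns are placeholder assumptions, not details from the posting.

```python
# Hypothetical sketch: pull an Azure SQL table into Databricks and publish
# a curated Delta table. Server, credentials, and schema are placeholders.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("curate-orders").getOrCreate()

orders = (
    spark.read.format("jdbc")
    .option("url", "jdbc:sqlserver://<server>.database.windows.net;database=<db>")
    .option("dbtable", "dbo.orders")            # placeholder source table
    .option("user", "<user>").option("password", "<password>")
    .load()
)

daily = (
    orders.groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(F.count("*").alias("orders"), F.sum("amount").alias("revenue"))
)

daily.write.format("delta").mode("overwrite").saveAsTable("curated.daily_orders")
```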
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are looking for a PySpark solutions developer and data engineer who can design and build solutions for one of our Fortune 500 client programs, which aims to build data standardization and curation capabilities on a Hadoop cluster. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights, and integrate with the customer's critical systems. Key Responsibilities Ability to design, build and unit test applications on the Spark framework in Python. Build PySpark-based applications for both batch and streaming requirements, which will require in-depth knowledge of the majority of Hadoop and NoSQL databases as well. Develop and execute data pipeline testing processes and validate business rules and policies. Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types and HDFS compression codecs. Create and maintain an integration and regression testing framework on Jenkins integrated with Bitbucket and/or Git repositories. Participate in the agile development process, and document and communicate issues and bugs relative to data standards in scrum meetings. Work collaboratively with onsite and offshore teams. Develop and review technical documentation for the artifacts delivered. Ability to solve complex data-driven scenarios and triage defects and production issues. Ability to learn-unlearn-relearn concepts with an open and analytical mindset. Participate in code releases and production deployments. Preferred Qualifications BE/B.Tech/B.Sc. in Computer Science/Statistics from an accredited college or university. Minimum 3 years of extensive experience in the design, build and deployment of PySpark-based applications. Expertise in handling complex, large-scale Big Data environments, preferably 20 TB+. Minimum 3 years of experience in the following: Hive, YARN, HDFS. Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities. Ability to build abstracted, modularized, reusable code components. Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous. Able to quickly adapt and learn. Able to jump into an ambiguous situation and take the lead on resolution. Able to communicate and coordinate across various teams. Comfortable tackling new challenges and new ways of working. Ready to move from traditional methods and adapt to agile ones. Comfortable challenging your peers and leadership team. Can prove yourself quickly and decisively. Excellent communication skills and good customer centricity. Strong target and high solution orientation.
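Since the posting emphasizes unit-testable PySpark batch applications, below is a small illustrative sketch of that pattern: a pure transform function exercised against a locally built DataFrame. The schema and cleaning rules are invented for illustration, not taken from the client program.

```python
# Illustrative, unit-testable PySpark transform. Column names are assumptions.
from pyspark.sql import SparkSession, DataFrame, functions as F

def standardize_customers(df: DataFrame) -> DataFrame:
    """Trim/capitalize names, uppercase country codes, drop duplicate ids."""
    return (
        df.withColumn("name", F.trim(F.initcap("name")))
          .withColumn("country", F.upper("country"))
          .dropDuplicates(["customer_id"])
    )

if __name__ == "__main__":
    spark = SparkSession.builder.master("local[2]").appName("test").getOrCreate()
    sample = spark.createDataFrame(
        [(1, "  alice ", "in"), (1, "  alice ", "in"), (2, "bob", "us")],
        ["customer_id", "name", "country"],
    )
    result = standardize_customers(sample)
    assert result.count() == 2  # duplicate customer_id removed
    result.show()
```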
Posted 1 week ago
3.0 years
0 Lacs
Andhra Pradesh, India
On-site
We are looking for a PySpark solutions developer and data engineer who can design and build solutions for one of our Fortune 500 client programs, which aims to build data standardization and curation capabilities on a Hadoop cluster. This high-visibility, fast-paced key initiative will integrate data across internal and external sources, provide analytical insights, and integrate with the customer's critical systems. Key Responsibilities Ability to design, build and unit test applications on the Spark framework in Python. Build PySpark-based applications for both batch and streaming requirements, which will require in-depth knowledge of the majority of Hadoop and NoSQL databases as well. Develop and execute data pipeline testing processes and validate business rules and policies. Optimize the performance of the built Spark applications in Hadoop using configurations around Spark Context, Spark-SQL, DataFrames and Pair RDDs. Optimize performance for data access requirements by choosing the appropriate native Hadoop file formats (Avro, Parquet, ORC, etc.) and compression codecs. Build integrated solutions leveraging Unix shell scripting, RDBMS, Hive, the HDFS file system, HDFS file types and HDFS compression codecs. Build data tokenization libraries and integrate them with Hive & Spark for column-level obfuscation. Experience in processing large amounts of structured and unstructured data, including integrating data from multiple sources. Create and maintain an integration and regression testing framework on Jenkins integrated with Bitbucket and/or Git repositories. Participate in the agile development process, and document and communicate issues and bugs relative to data standards in scrum meetings. Work collaboratively with onsite and offshore teams. Develop and review technical documentation for the artifacts delivered. Ability to solve complex data-driven scenarios and triage defects and production issues. Ability to learn-unlearn-relearn concepts with an open and analytical mindset. Participate in code releases and production deployments. Challenge and inspire team members to achieve business results in a fast-paced and quickly changing environment. Preferred Qualifications BE/B.Tech/B.Sc. in Computer Science/Statistics from an accredited college or university. Minimum 3 years of extensive experience in the design, build and deployment of PySpark-based applications. Expertise in handling complex, large-scale Big Data environments, preferably 20 TB+. Minimum 3 years of experience in the following: Hive, YARN, HDFS. Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities. Ability to build abstracted, modularized, reusable code components. Prior experience with ETL tools, preferably Informatica PowerCenter, is advantageous. Able to quickly adapt and learn. Able to jump into an ambiguous situation and take the lead on resolution. Able to communicate and coordinate across various teams. Comfortable tackling new challenges and new ways of working. Ready to move from traditional methods and adapt to agile ones. Comfortable challenging your peers and leadership team. Can prove yourself quickly and decisively. Excellent communication skills and good customer centricity. Strong target and high solution orientation.
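This variant of the posting also calls out choosing Hadoop file formats and compression codecs; the hedged sketch below writes the same DataFrame as snappy-compressed Parquet and zlib-compressed ORC. Paths and data are placeholders, not the client's.

```python
# A small sketch of the file-format/codec trade-off mentioned above.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("format-demo").getOrCreate()
df = spark.range(1_000_000).withColumnRenamed("id", "event_id")

# Columnar and splittable; a good default for analytical scans in Hive/Spark.
df.write.mode("overwrite").option("compression", "snappy") \
    .parquet("hdfs:///data/curated/events_parquet")

# ORC pairs well with Hive; zlib trades CPU for smaller files on disk.
df.write.mode("overwrite").option("compression", "zlib") \
    .orc("hdfs:///data/curated/events_orc")
```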
Posted 1 week ago
5.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Description The AOP (Analytics Operations and Programs) team is responsible for creating core analytics, insight generation and science capabilities for ROW Ops. We develop scalable analytics applications, AI/ML products and research models to optimize operation processes. You will work with Product Managers, Data Engineers, Data Scientists, Research Scientists, Applied Scientists and Business Intelligence Engineers, using rigorous quantitative approaches to ensure high-quality data/science products for our customers around the world. We are looking for a Sr. Data Scientist to join our growing Science Team. As a Data Scientist, you are able to use a range of science methodologies to solve challenging business problems when the solution is unclear. You will be responsible for building ML models to solve complex business problems and testing them in a production environment. The scope of the role includes defining the charter for the project and proposing solutions which align with the org's priorities and production constraints but still create impact. You will achieve this by leveraging strong leadership and communication skills, data science skills, and by acquiring domain knowledge pertaining to the delivery operations systems. You will provide ML thought leadership to technical and business leaders, and possess the ability to think strategically about business, product, and technical challenges. You will also be expected to contribute to the science community by participating in science reviews and publishing in internal or external ML conferences. Our team solves a broad range of problems that can be scaled across ROW (Rest of the World, including countries like India, Australia, Singapore, MENA and LATAM). Here is a glimpse of the problems this team deals with on a regular basis: Using live package and truck signals to adjust truck capacities in real time HOTW models for Last Mile channel allocation Using LLMs to automate analytical processes and insight generation Ops research to optimize middle-mile truck routes Working with global partner science teams on Reinforcement Learning based pricing models and estimating Shipments Per Route for $MM savings Deep Learning models to synthesize attributes of addresses Abuse detection models to reduce network losses Key job responsibilities Use machine learning and analytical techniques to create scalable solutions for business problems Analyze and extract relevant information from large amounts of Amazon's historical business data to help automate and optimize key processes Design, develop, evaluate and deploy innovative and highly scalable ML/OR models Work closely with other science and engineering teams to drive real-time model implementations Work closely with Ops/Product partners to identify problems and propose machine learning solutions Establish scalable, efficient, automated processes for large-scale data analyses, model development, model validation and model maintenance Work proactively with engineering teams and product managers to evangelize new algorithms and drive the implementation of large-scale complex ML models in production Lead projects and mentor other scientists and engineers in the use of ML techniques Basic Qualifications 5+ years of data scientist experience Experience with data scripting languages (e.g. SQL, Python, R) or statistical/mathematical software (e.g. R, SAS, or Matlab) Experience with statistical models, e.g. multinomial logistic regression Experience in data applications using large-scale distributed systems (e.g., EMR, Spark, Elasticsearch, Hadoop, Pig, and Hive) Experience working collaboratively with data engineers and business intelligence engineers Demonstrated expertise in a wide range of ML techniques Preferred Qualifications Experience as a leader and mentor on a data science team Master's degree in a quantitative field such as statistics, mathematics, data science, business analytics, economics, finance, engineering, or computer science Expertise in Reinforcement Learning and Gen AI is preferred Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner. Company - Amazon Development Centre (India) Private Limited Job ID: A3003385
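As a hedged illustration of the multinomial logistic regression named in the basic qualifications, here is a generic scikit-learn toy example on the Iris dataset; it is a sketch of the technique only, not Amazon's method or data.

```python
# Toy multinomial (multiclass) logistic regression with scikit-learn.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

# With the default lbfgs solver, multiclass targets are fit as a multinomial
# (softmax) model rather than one-vs-rest.
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
```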
Posted 1 week ago
175.0 years
0 Lacs
Gurugram, Haryana, India
On-site
At American Express, our culture is built on a 175-year history of innovation, shared values and Leadership Behaviors, and an unwavering commitment to back our customers, communities, and colleagues. As part of Team Amex, you'll experience this powerful backing with comprehensive support for your holistic well-being and many opportunities to learn new skills, develop as a leader, and grow your career. Here, your voice and ideas matter, your work makes an impact, and together, you will help us define the future of American Express. How will you make an impact in this role? The Digital Data Strategy Team within the broader EDEA (Enterprise Digital Experimentation & Analytics) in EDDS supports all other EDEA VP teams and product & marketing partner teams with data strategy, automation & insights, and creates and manages automated insight packs and multiple derived data layers. The team partners with Technology to enable end-to-end MIS automation and ODL (Organized Data Layer) creation, and drives process automation, optimization, and Data & MIS quality in an efficient manner. The team also supports strategic data & platform initiatives. This role will report to the Manager – Digital Data Strategy, EDEA and will be based in Gurgaon. The candidate will be responsible for the delivery of high-impact data and automated insight products that enable other analytics partners, marketing partners and product owners to optimize across our platform, demand generation, acquisition and membership experience domains. Your responsibilities include: Elevate Data Intelligence: Set the vision for intuitive, integrated and intelligent frameworks that enable smart insights. Discover new sources of information for strong enrichment of business applications. Modernization: Keep up with the latest industry research and emerging technologies to ensure we are appropriately leveraging new techniques and capabilities, and drive strategic change in tools & capabilities. Develop a roadmap to transition our analytical and production use cases to the cloud platform and develop next-generation MIS products through modern full-stack BI tools, enabling self-serve analytics. Define the digital data strategy vision as the business owner of digital analytics data, and partner to achieve the vision of Data as a Service to enable unified, scalable & secure data assets for business applications. Strong understanding of the key drivers & dynamics of digital data, data architecture & design, and data linkage & usage. In-depth knowledge of platforms like Big Data/Cornerstone, Lumi/Google Cloud Platform, data ingestion and Organized Data Layers. Stay abreast of the latest industry and enterprise-wide data governance, data quality practices and privacy policies, engrain them in all data products & capabilities, and be a guiding light for the broader team. Partner and collaborate with multiple partners, agencies & colleagues to develop capabilities that will help in maximizing demand generation program ROI. Minimum Qualifications 1-3 years of relevant experience in Automation, Data Product Management/Data Strategy with adequate data quality, economies of scale and process governance Proven thought leadership, solid project management skills, strong communication, collaboration, relationship and conflict management skills Bachelor's or Master's degree in Engineering/Management Knowledge of Big Data oriented tools (e.g. BigQuery, Hive, SQL, Python/R, PySpark); Advanced Excel/VBA and PowerPoint; experience of managing complex processes and integration with upstream and downstream systems/processes. Hands-on experience with visualization tools like Tableau, Power BI, Sisense etc. Preferred Qualifications Strong analytical/conceptual thinking competence to solve unstructured and complex business problems and articulate key findings to leaders/partners in a succinct and concise manner. Strong understanding of internal platforms like Big Data/Cornerstone, Lumi/Google Cloud Platform. Knowledge of Agile tools and methodologies Enterprise Leadership Behaviors: Set the Agenda: Define What Winning Looks Like, Put Enterprise Thinking First, Lead with an External Perspective Bring Others with You: Build the Best Team, Seek & Provide Coaching Feedback, Make Collaboration Essential Do It the Right Way: Communicate Frequently, Candidly & Clearly, Make Decisions Quickly & Effectively, Live the Blue Box Values, Great Leadership Demands Courage We back you with benefits that support your holistic well-being so you can be and deliver your best. This means caring for you and your loved ones' physical, financial, and mental health, as well as providing the flexibility you need to thrive personally and professionally: Competitive base salaries Bonus incentives Support for financial well-being and retirement Comprehensive medical, dental, vision, life insurance, and disability benefits (depending on location) Flexible working model with hybrid, onsite or virtual arrangements depending on role and business need Generous paid parental leave policies (depending on your location) Free access to global on-site wellness centers staffed with nurses and doctors (depending on location) Free and confidential counseling support through our Healthy Minds program Career development and training opportunities American Express is an equal opportunity employer and makes employment decisions without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, veteran status, disability status, age, or any other status protected by law. Offer of employment with American Express is conditioned upon the successful completion of a background verification check, subject to applicable laws and regulations.
Posted 1 week ago
0 years
0 Lacs
Anupgarh, Rajasthan, India
On-site
Global content platform company SpoonLabs is looking for a Data Engineer to join us right now | To take a bigger leap as a content platform spanning audio (Spoon) and video (Vigloo), Spoon Radio has changed its company name to SpoonLabs | 🧑🤝🧑 [Meet the SpoonLabs Data Tech team] What the Data Tech team does: We work with all the data generated by Spoon and Vigloo: voice, text, images, and video. Beyond large-scale processing of the diverse data generated worldwide in Korea, Japan, Taiwan, English-speaking countries, and Arabic-speaking countries, we apply data wherever it is needed: visualization services that support data-driven decision making, ML/AI-powered recommendation systems, and abuse-detection systems that improve service quality. How the Data Tech team works: We actively share and discuss ideas and new technologies, and run PoCs on the ideas we gather with a challenge-oriented mindset. When you hit a new problem or have a question, you can ask at any time and get a quick answer. Architectures you design and code you write receive constructive feedback through team reviews or online reviews. On top of that, we generously provide commercial AI services to raise the team's productivity. The Data Tech team favors a problem-solving mindset. Rather than working as individuals, we look for ways to combine each member's strengths as a team to create greater synergy. The solution does not have to be ML; if a problem can be solved quickly, we explore and apply a variety of approaches, ML included. We also communicate with other departments to proactively try out applicable new technologies and data, and present working demos. The Data Tech team uses this tech stack: Hadoop Ecosystem, Hive, Spark, Airflow, Metabase, Jupyter, PostgreSQL / MariaDB, Grafana / Alert Manager, Ranger; [AWS] Glue, EC2, S3, EKS, EMR, OpenSearch (Elasticsearch); [Language] Python, Java or Kotlin, SQL. Spoon & Vigloo, the global content platforms built by SpoonLabs! From audio to video, SpoonLabs fills people's daily lives around the world with fun content. Spoon, a live audio content platform for creators - https://www.spooncast.net/kr Vigloo, a short-form drama platform delivering two minutes of immersion - https://www.vigloo.com/ko 💼 [Key responsibilities] Manage and operate an on-premise cluster running big-data open-source software. Operate and manage data collection and processing services in an AWS cloud environment using Glue, EC2, EKS, and more. Collect database logs generated by the services as well as data from third-party services. Preprocess the collected data of various types (structured, unstructured, semi-structured) to normalize and standardize it. Provide data to other departments via SQL queries over the structured data, and deliver visualized dashboards. 📌 [Qualifications] 5+ years of experience as a data engineer. Experience installing and operating open-source software from the Hadoop ecosystem. Experience with Spark-based ETL processing. Experience with Airflow-based scheduling and DAG composition. Experience writing ANSI-SQL queries, including advanced functions. Development experience in programming languages such as Python, Java, or Kotlin. Experience operating and monitoring data pipelines. ➕ [Nice to have] Experience with AWS services such as Glue, EC2, and EKS. Experience optimizing ETL performance through query execution plans and logs. Experience developing and collaborating with ML Engineers. Smooth communication experience across development and non-development roles, including discussing problems and explaining solutions. Active users of the Spoon service. 📑 [Documents to submit] Resume (required). You may also submit any additional material you would like to share. 🎯 [Hiring process] Document screening > 1st job interview > 2nd culture-fit & 3rd executive interview > reference check > offer negotiation > final acceptance and onboarding. 1st job interview: a role interview with working-level members of the SpoonLabs Development Group; conducted in person, approximately 1.5 hours. 2nd culture-fit interview: with the SpoonLabs EX (HR) team; conducted in person, approximately 1 hour. After a short break following the 2nd interview, the 3rd interview proceeds immediately (the 2nd and 3rd interviews are held back-to-back on the same day). 3rd executive interview: with the SpoonLabs development group lead and executives; conducted in person, approximately 1 hour. Reference check > offer negotiation > final acceptance and onboarding. Depending on circumstances, steps may be omitted or added (assignment, coding test, coffee chat, additional interviews, etc.). If false statements are found in the resume or submitted documents, or disciplinary actions are confirmed in your employment history, the offer may be withdrawn. Under Article 10 (Disqualification) of the SpoonLabs employment rules, candidates who meet the disqualification criteria may have their offers withdrawn. 👀 [How does SpoonLabs work? The answer is here] We go faster, fiercer, and stronger. Speed over perfection, execution over completion. SpoonLabs tries things quickly, absorbs failures, and runs again. We stay immersed until we finally reach the answer. Sparkling ideas, nights spent deep in the work, days electrified by insight. The pace is fast, the bar is high, and the uncertainty is great. For some, this place may feel overwhelming, but within it we grow fiercely and get a little better every day. To go farther and faster, we stand shoulder to shoulder without losing our individual shine, and we move ahead without running alone. Because we are stronger together, we trust one another, learn from one another, and grow with humility. Ask yourself whether what you want is 'comfortable work' or 'fierce growth'. We have already chosen 'fierce growth'. This is not just a job; it is a stage for an all-out sprint that changes your life and the world. We welcome people who are ready to grow through immersion and persistence.
SpoonLabs culture blog | SpoonLabs tech blog | SpoonLabs LinkedIn | SpoonLabs careers site 🌱 [Programs that help you immerse and grow] [Growth programs] For members who keep challenging themselves and delivering better results: a self-development allowance of up to KRW 100,000 per month; a foreign-language learning allowance (Japanese, English, Korean) of up to KRW 200,000 per month; support for attending work-related conferences and seminars at home and abroad, such as AWS re:Invent, Digital Marketing Summit, and MAU Conference; support for internal study groups (let's study together!); an in-house library and purchases of requested books; and generous rewards for both the referrer and the new hire through our employee referral program. We are a team of people who learn fast and grow on their own toward better outcomes. [How we work] To raise the density of immersion and execution: an office exchange program for working at our overseas subsidiaries; a workation program for creative immersion; flexible start times between 8:00 and 10:30 a.m.; overtime meal and taxi expenses covered, because after working hard you should get home safely; and four hours of deep focus on Mondays, a 4.5-day work week for deeper immersion. We choose an environment where we can immerse more deeply, together. [Caring for the team] Because we work hard, we also look after each other: refresh leave and a vacation allowance based on tenure; quarter-day leave for your birthday; condolence/congratulation leave and allowances for life events; vacation and quarter-day leave used freely around your personal schedule; breakfast provided and lunch subsidized; an annual comprehensive health checkup, because health comes first; a clean, modern office right by Gangnam Station; an unlimited cafeteria to keep your energy charged; and premium massage chairs, game consoles, darts, and a table-tennis table for when you need a break. Recharging matters as much as hard work; we care about the moments in between, not just the work itself. Questions about hiring? Contact us below! SpoonLabs recruiting: recruit@spoonlabs.com SpoonLabs Inc. collects and uses personal information in accordance with the privacy policy of Greeting, our applicant tracking system.
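The posting above highlights Airflow-based scheduling and DAG composition alongside Spark ETL; the following is a minimal, hypothetical Airflow 2.x-style DAG chaining an extract step with a spark-submit transform. The DAG id, schedule, and script paths are invented placeholders, not SpoonLabs internals.

```python
# Minimal Airflow DAG sketch: extract, then transform via spark-submit.
from datetime import datetime
from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="daily_spark_etl",          # placeholder
    start_date=datetime(2024, 1, 1),
    schedule="0 2 * * *",              # every day at 02:00
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="python /opt/etl/extract.py",        # placeholder script
    )
    transform = BashOperator(
        task_id="transform",
        bash_command="spark-submit /opt/etl/transform.py",  # placeholder script
    )
    extract >> transform
```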
Posted 1 week ago
2.0 years
0 Lacs
Bengaluru East, Karnataka, India
On-site
Visa is a world leader in payments and technology, with over 259 billion payment transactions flowing safely between consumers, merchants, financial institutions, and government entities in more than 200 countries and territories each year. Our mission is to connect the world through the most innovative, convenient, reliable, and secure payments network, enabling individuals, businesses, and economies to thrive while driven by a common purpose – to uplift everyone, everywhere by being the best way to pay and be paid. Make an impact with a purpose-driven industry leader. Join us today and experience Life at Visa. Job Description Functional Summary The GTM Optimization and Business Health team has a simple mission: we turn massive amounts of data into robust tools and actionable insights that drive business value, ensure ecosystem integrity, and provide a best-in-class experience to our money movement clients. Our team is working to build consolidated, strategic and scalable analytics and monitoring infrastructure for commercial and money movement products. Responsibilities The Process Optimization Analyst will create risk, rules, and performance monitoring dashboards and alerting tools, and will use these to monitor transactions in near real time, investigate alerts and anomalous events, and partner with internal teams to investigate and manage incidents from end to end. Specific activities may include: Develop monitoring and alerting tools from real-time data feeds to monitor for performance drops, risk and fraud events, and rules violations Monitor near-real-time alerting tools and investigate and generate incidents for risk events and out-of-pattern activity Manage a caseload to ensure appropriate investigation and resolution of identified risk and performance events Drive to understand the root problems, define analytical objectives and formalize data requirements for various types of dashboards and analyses Design and launch robust and intuitive dashboards supporting a best-in-class money movement client experience Create and present analytic deliverables to colleagues in the analytics team and other internal stakeholders with varying degrees of analytical and technical expertise Distill massive amounts of data across disparate data sources into efficient functional data repositories in a Big Data environment Independently perform analysis to derive insights and render robust, thoughtful results Partner with Visa Direct and money movement teams across multiple areas of the business to understand their data and reporting needs Compare client performance against industry best practices with a shrewd eye toward identifying performance and/or profitability improvement opportunities Develop presentations of complex data and content for clients in an accurate, understandable, and engaging manner This is a hybrid position. Expectation of days in office will be confirmed by your Hiring Manager. Qualifications Basic Qualifications: • 3 or more years of relevant work experience with a Bachelor's Degree, or at least 2 years of work experience with an Advanced degree (e.g. Masters, MBA, JD, MD), or 0 years of work experience with a PhD Preferred Qualifications: • 3 or more years of work experience with a Bachelor's Degree, or 2 or more years of relevant experience with an Advanced Degree (e.g. Masters, MBA, JD, MD), or up to 1 year of relevant experience with a PhD • Experience monitoring real-time data and following incident management workflows • Familiarity with Microsoft Dynamics or other ERP/CRM tools • Proficiency in Tableau and experience with best-in-class data visualization • Experience with Elasticsearch and Kibana dashboards and alerting • High level of proficiency manipulating data from a variety of sources - Big Data skills (Hadoop, Hive, Spark) and/or SQL skills required • Strong verbal, written, and interpersonal skills • Proficient in all MS Office applications with advanced Excel spreadsheet skills • Functional knowledge of programming languages such as Python, Java, and/or Shell Scripting • Strong strategic thinking, problem-solving, and decision-making abilities, with the ability to translate complex data into actionable insights • Visa experience or knowledge of the payments industry Additional Information Visa is an EEO Employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, national origin, sexual orientation, gender identity, disability or protected veteran status. Visa will also consider for employment qualified applicants with criminal histories in a manner consistent with EEOC guidelines and applicable local law.
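To illustrate the flavor of near-real-time monitoring and alerting this role describes, here is a small hypothetical Python sketch that compares a live approval-rate reading against a rolling baseline. The metric, window size, and threshold are invented assumptions, not Visa's actual tooling.

```python
# Hypothetical threshold-style alerting: flag a sharp drop vs a rolling baseline.
from collections import deque

class ApprovalRateMonitor:
    def __init__(self, window: int = 60, drop_threshold: float = 0.15):
        self.history = deque(maxlen=window)   # last N per-minute approval rates
        self.drop_threshold = drop_threshold

    def observe(self, approval_rate: float) -> bool:
        """Return True if the new reading is an alert-worthy drop vs baseline."""
        alert = False
        if len(self.history) == self.history.maxlen:
            baseline = sum(self.history) / len(self.history)
            alert = baseline - approval_rate > self.drop_threshold
        self.history.append(approval_rate)
        return alert

monitor = ApprovalRateMonitor()
for rate in [0.95] * 60 + [0.70]:   # stable hour, then a sudden drop
    if monitor.observe(rate):
        print(f"ALERT: approval rate dropped to {rate:.2f}")
```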
Posted 1 week ago
5.0 - 8.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role The Data Analyst in the GRP team will be responsible for analysing complex datasets and making them consumable through visual storytelling and visualization tools such as reports and dashboards built using approved tools (Tableau, MicroStrategy, PyDash). The ideal candidate will have a strong analytical mindset, excellent communication skills, and a deep understanding of reporting tools, both front end and back end. You will be responsible for - Driving data analysis for testing key business hypotheses and asks, developing complex visualizations, self-service tools and cockpits for answering recurring business asks and measurements - Handling quick-turnaround business requests, managing stakeholder communication and solving business asks holistically, going beyond the basic stakeholder ask - Selecting the right tools and techniques for solving the problem at hand - Ensuring analysis and tools/dashboards are developed with the right technical rigour, meeting Tesco technical standards - Applied experience in handling large data systems and datasets - Extensive experience in handling high-volume, time-pressured business asks and ad-hoc requests - Developing production-ready visualization solutions and automated reports - Contributing to the development of knowledge assets and reusable modules on GitHub/Wiki - Coming up with new ideas and analysis to support business priorities and solve business problems You will need 5-8 years of experience as a Data Analyst, with experience working in domains like retail and CPG; experience in one of the following functional areas is preferred: Finance, marketing, supply chain, customer, merchandising - Proven track record of handling ad-hoc analysis and developing dashboards and visualizations based on business asks - Strong use of business understanding for analysis asks - Exposure to analysis work within the retail domain; Space, Range, Merchandising, Store Ops, Forecasting, Customer Insights, Digital and Marketing preferred - Expert skills in analyzing large datasets using Advanced Excel, Advanced SQL, Hive and Python - Expert skills in developing visualizations, self-service dashboards and reports using Tableau and Power BI - Statistical concepts (correlation analysis and hypothesis testing) and strong DW concepts (Hadoop, Teradata) - Excellent analytical and problem-solving skills - Should be comfortable dealing with variability - Strong communication and interpersonal skills. What's in it for you? At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco is determined by four principles - simple, fair, competitive, and sustainable. Salary - Your fixed pay is the guaranteed pay as per your contract of employment. Performance Bonus - Opportunity to earn additional compensation bonus based on performance, paid annually. Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy. Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness including insurance for colleagues and their family. Our medical insurance provides coverage for dependents including parents or in-laws. Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents. Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request. Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan. Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle. About Us Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues. Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services organisation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
Posted 1 week ago
3.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
It's fun to work in a company where people truly BELIEVE in what they are doing! We're committed to bringing passion and customer focus to the business. Job Description Data Engineer (AWS) Role : DE Engineer (AWS) Experience : 3-5 years (3+ years of experience with AWS cloud) Education : BE/B. Tech/M. Tech Location : Bangalore/India We are currently seeking an experienced Data Support Engineer with a focus on AWS, Snowflake, Hadoop, Spark, and Python to join our Support team. The ideal candidate will have a solid technical background, strong problem-solving skills, and hands-on experience in troubleshooting and supporting data engineering systems. Responsibilities Include Hands-on experience with Hadoop and Spark with Python on AWS. Provide technical support for data engineering systems, addressing user queries and resolving issues related to data pipelines, AWS services, Snowflake, Hadoop, and Spark. Investigate and troubleshoot issues in data pipelines, identifying root causes and implementing solutions to prevent recurrence. Experience with a range of big data architectures such as Hadoop, Spark, Kafka, Hive or other big data technologies. Effectively manage and resolve incidents related to data processing, ensuring minimal downtime and optimal system performance. Collaborate with cross-functional teams to prioritize and address critical issues promptly. Experience in tuning and optimizing Spark jobs. Knowledge of Terraform templates for infrastructure provisioning on AWS (or CloudFormation templates). Possess a minimum of 3+ years of BI/DW development experience with data model architecture/design. Should have a good understanding of functional programming concepts. Good knowledge of Python with experience of production-grade Python projects. Continuous integration, branching and merging, pair programming, code reviews, unit testing, agile methodologies (Scrum), design patterns. Knowledge of CI/CD implementations such as AWS CodeCommit and CodeDeploy for CI/CD pipelines (Git knowledge preferable). Knowledge of scheduling tools and techniques on Hadoop/EMR. Excellent written and verbal communication skills. Strong analytical and project management skills. Technical Essentials Proven experience in providing technical support for data engineering systems. Strong understanding of AWS services, including S3, Glue, Redshift, EMR, Lambda, Athena, and Step Functions. Hands-on experience supporting Snowflake, Hadoop, Spark, and Python in a production environment. Familiarity with data modeling, optimization, and performance tuning. Excellent problem-solving skills and the ability to analyze and diagnose complex technical issues. Experience with incident management, including prioritization and resolution procedures. Strong communication and collaboration skills for working with cross-functional teams. Knowledge of best practices in cloud-based data engineering and support. Preferred AWS Certified Solutions Architect – Associate Personal Specifications Self-motivated team player with strong analytical and relationship management skills and effective written and oral communication. If you like wild growth and working with happy, enthusiastic over-achievers, you'll enjoy your career with us! Not the right fit? Let us know you're interested in a future opportunity by clicking Introduce Yourself in the top-right corner of the page or create an account to set up email alerts as new job postings become available that meet your interest!
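Given the Spark tuning responsibilities above, here is a hedged PySpark sketch showing explicit executor, shuffle, and adaptive-execution settings for a batch job; all values and S3 paths are placeholders to adapt per cluster and data volume, not a prescribed configuration.

```python
# Hedged sketch: explicit resource and shuffle tuning for a batch Spark job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder.appName("tuned-batch-job")
    .config("spark.executor.memory", "8g")           # size executors to the node
    .config("spark.executor.cores", "4")
    .config("spark.sql.shuffle.partitions", "400")   # scale with shuffle volume
    .config("spark.sql.adaptive.enabled", "true")    # let AQE coalesce partitions
    .getOrCreate()
)

df = spark.read.parquet("s3://bucket/raw/events/")   # placeholder path
df.repartition("event_date").write.mode("overwrite") \
    .partitionBy("event_date").parquet("s3://bucket/curated/events/")
```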
Posted 1 week ago
4.0 - 8.0 years
12 - 30 Lacs
Hyderabad
Work from Office
Strong Linux and strong AWS experience; strong Active Directory skills. Manage Hadoop clusters on Linux with Active Directory integration. Collaborate with the data science team on project delivery using Splunk & Spark. Experience managing Big Data clusters in production.
Posted 1 week ago
4.0 years
0 Lacs
India
On-site
Job Title: Data Analyst (Python + PySpark) About Us Capco, a Wipro company, is a global technology and management consulting firm, awarded Consultancy of the Year in the British Bank Awards and ranked among the Top 100 Best Companies for Women in India 2022 by Avtar & Seramount. With our presence across 32 cities globally, we support 100+ clients across the banking, financial and energy sectors. We are recognized for our deep transformation execution and delivery. WHY JOIN CAPCO? You will work on engaging projects with the largest international and local banks, insurance companies, payment service providers and other key players in the industry - projects that will transform the financial services industry. MAKE AN IMPACT Innovative thinking, delivery excellence and thought leadership to help our clients transform their business. Together with our clients and industry partners, we deliver disruptive work that is changing energy and financial services. #BEYOURSELFATWORK Capco has a tolerant, open culture that values diversity, inclusivity, and creativity. CAREER ADVANCEMENT With no forced hierarchy at Capco, everyone has the opportunity to grow as we grow, taking their career into their own hands. DIVERSITY & INCLUSION We believe that diversity of people and perspective gives us a competitive advantage. Job Description Role: Data Analyst / Senior Data Analyst Location: Bangalore/Pune Responsibilities Define and obtain the source data required to successfully deliver insights and use cases Determine the data mapping required to join multiple data sets together across multiple sources Create methods to highlight and report data inconsistencies, allowing users to review and provide feedback Propose suitable data migration sets to the relevant stakeholders Assist teams with processing the data migration sets as required Assist with the planning, tracking and coordination of the data migration team and with the migration run-book and the scope for each customer Role Requirements Strong Data Analyst with Financial Services experience Knowledge of, and experience using, data models and data dictionaries in a Banking and Financial Markets context Knowledge of one or more of the following domains (including market data vendors): Party/Client, Trade, Settlements, Payments, Instrument and Pricing, Market and/or Credit Risk Demonstrate a continual desire to implement "strategic" or "optimal" solutions and, where possible, avoid workarounds or short-term tactical solutions Work with stakeholders to ensure that negative customer and business impacts are avoided Manage stakeholder expectations and ensure that robust communication and escalation mechanisms are in place across the project portfolio Good understanding of the control requirements surrounding data handling Experience/Skillset Must have - Excellent analytical skills and commercial acumen Minimum 4+ years of experience with Python and PySpark Good understanding of the control requirements surrounding data handling Experience of big data programmes preferable Strong verbal and written communication skills Strong self-starter with strong change delivery skills who enjoys the challenge of delivering change within tight deadlines Ability to manage multiple priorities Business analysis skills, defining and understanding requirements Knowledge of and experience using data models and data dictionaries in a Banking and Financial Markets context Can write SQL queries and navigate databases and tools, especially Hive, CMD, PuTTY and Notepad++ Enthusiastic and energetic problem solver keen to join an ambitious team Good knowledge of SDLC and formal Agile processes, a bias towards TDD and a willingness to test products as part of the delivery cycle Ability to communicate effectively in a multi-programme environment across a range of stakeholders Attention to detail Good to have - Knowledge of and experience in Data Quality & Governance For Spark with Scala: working experience using Scala (preferable) or Java for Spark For Senior DAs: a proven track record of managing small, delivery-focused data teams
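As one hypothetical illustration of the "highlight and report data inconsistencies" responsibility, the sketch below computes simple data-quality counters over a trades dataset with PySpark; the path and column names are assumptions for illustration, not Capco's or a client's schema.

```python
# Illustrative PySpark data-quality counters over a placeholder trades dataset.
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("dq-checks").getOrCreate()
trades = spark.read.parquet("/data/trades")   # placeholder path

checks = trades.select(
    F.count("*").alias("rows"),
    F.sum(F.col("trade_id").isNull().cast("int")).alias("null_trade_ids"),
    F.sum((F.col("notional") < 0).cast("int")).alias("negative_notionals"),
    F.countDistinct("trade_id").alias("distinct_trade_ids"),
)
checks.show()   # distinct_trade_ids < rows would signal duplicates to report
```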
Posted 1 week ago
8.0 - 12.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Happiest Minds Technologies Pvt. Ltd is looking for a Sr Data and ML Engineer to join our dynamic team and embark on a rewarding career journey. Liaising with coworkers and clients to elucidate the requirements for each task. Conceptualizing and generating infrastructure that allows big data to be accessed and analyzed. Reformulating existing frameworks to optimize their functioning. Testing such structures to ensure that they are fit for use. Preparing raw data for manipulation by data scientists. Detecting and correcting errors in your work. Ensuring that your work remains backed up and readily accessible to relevant coworkers. Remaining up-to-date with industry standards and technological advancements that will improve the quality of your outputs. Skills: Spark MLlib, Scala, Python, Databricks on AWS, Snowflake, GitLab, Jenkins, AWS DevOps CI/CD pipeline, Machine Learning, Airflow
Posted 1 week ago
3.0 - 6.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Data Engineering Pipeline Development: Design, implement, and maintain ETL processes using ADF and ADB. Create and manage views in ADB and SQL for efficient data access. Optimize SQL queries for large datasets and high performance. Conduct end-to-end testing and impact analysis on data pipelines. Optimization & Performance Tuning: Identify and resolve bottlenecks in data processing. Optimize SQL queries and Delta Tables for fast data processing. Data Sharing & Integration: Implement Delta Share, SQL Endpoints, and other data-sharing methods. Use Delta Tables for efficient data sharing and processing. API Integration & Development: Integrate external systems through Databricks Notebooks and build scalable solutions. Experience in building APIs (good to have). Collaboration & Documentation: Collaborate with teams to understand requirements and design solutions. Provide documentation for data processes and architectures.
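To make the Delta-table work above concrete, here is a hedged sketch of Databricks-style SQL issued from PySpark: it defines a view for data access and then compacts a Delta table. Table and view names are placeholders, and OPTIMIZE/ZORDER assumes a Databricks runtime.

```python
# Hedged sketch: a view for consumers plus Delta maintenance (Databricks SQL).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("delta-maintenance").getOrCreate()

# A view gives downstream users a stable, simple interface over the Delta table.
spark.sql("""
    CREATE OR REPLACE VIEW sales.v_daily_revenue AS
    SELECT order_date, SUM(amount) AS revenue
    FROM sales.orders_delta
    GROUP BY order_date
""")

# OPTIMIZE compacts small files; ZORDER co-locates rows for selective queries.
spark.sql("OPTIMIZE sales.orders_delta ZORDER BY (order_date)")
```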
Posted 1 week ago
2.0 - 9.0 years
10 - 14 Lacs
Bengaluru
Work from Office
Design, develop, and maintain scalable and efficient Python applications using frameworks like FastAPI or Flask. Develop, test, and deploy RESTful APIs to interact with front-end services. Integrate and establish connections between various relational and non-relational databases (e.g., SQLAlchemy, MySQL, PostgreSQL, MongoDB, etc.). Solid understanding of relational and NoSQL databases and the ability to establish and manage connections from Python applications. Write clean, maintainable, and efficient code, following coding standards and best practices. Leverage AWS cloud services for deploying and managing applications (e.g., EC2, Lambda, RDS, S3, etc.). Troubleshoot and resolve software defects, performance issues, and scalability challenges.
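A minimal sketch of the FastAPI-plus-SQLAlchemy pattern this role describes follows; the item model, table, and connection string are invented for illustration only.

```python
# Hypothetical FastAPI endpoint backed by a SQL database via SQLAlchemy.
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel
from sqlalchemy import create_engine, text

app = FastAPI()
engine = create_engine("postgresql://user:pass@localhost/appdb")  # placeholder

class Item(BaseModel):
    id: int
    name: str

@app.get("/items/{item_id}", response_model=Item)
def read_item(item_id: int):
    with engine.connect() as conn:
        row = conn.execute(
            text("SELECT id, name FROM items WHERE id = :id"), {"id": item_id}
        ).first()
    if row is None:
        raise HTTPException(status_code=404, detail="Item not found")
    return Item(id=row.id, name=row.name)
```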
Posted 1 week ago
4.0 - 9.0 years
12 - 16 Lacs
Bengaluru
Work from Office
Build, deploy, and maintain machine learning models in production. Automate model training, evaluation, and monitoring pipelines. Collaborate with data engineers to ensure the availability of clean, high-quality data. Optimize model performance and computational efficiency. Document ML workflows and processes for scalability and reproducibility. Key Skills: Proficiency in Python, Scala, or Java. Experience with ML tools like TensorFlow, PyTorch, and MLflow. Familiarity with MLOps practices and tools like Docker, Kubernetes, and CI/CD pipelines. Strong problem-solving and analytical skills. Skills: Machine Learning, Python, Scala, Java
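Here is a hedged example of the train-and-track loop described above, using scikit-learn with MLflow logging on synthetic data; the experiment name and hyperparameters are placeholders, not a prescribed setup.

```python
# Hedged MLflow sketch: train a toy model, log params, metrics, and the model.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

mlflow.set_experiment("churn-model")  # placeholder experiment name
with mlflow.start_run():
    model = RandomForestClassifier(n_estimators=200, random_state=0)
    model.fit(X_tr, y_tr)
    mlflow.log_param("n_estimators", 200)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    mlflow.sklearn.log_model(model, "model")  # versioned artifact for deployment
```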
Posted 1 week ago
5.0 - 8.0 years
10 - 14 Lacs
Bengaluru
Work from Office
BS or higher degree in Computer Science (or equivalent field). 3-6+ years of programming experience with Java and Python. Strong in writing SQL queries and understanding of Kafka, Scala, Spark/Flink. Exposure to AWS Lambda, AWS CloudWatch, Step Functions, EC2, CloudFormation, Jenkins.
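As a small illustration of the AWS Lambda and CloudWatch exposure listed above, here is a hypothetical Python handler that counts incoming event records and publishes a custom metric; the namespace and metric name are invented placeholders.

```python
# Hypothetical Lambda handler: count event records, emit a CloudWatch metric.
import json
import boto3

cloudwatch = boto3.client("cloudwatch")

def handler(event, context):
    records = event.get("Records", [])
    cloudwatch.put_metric_data(
        Namespace="EtlPipeline",  # placeholder namespace
        MetricData=[{"MetricName": "RecordsReceived", "Value": len(records)}],
    )
    return {"statusCode": 200, "body": json.dumps({"processed": len(records)})}
```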
Posted 1 week ago
3.0 - 7.0 years
6 - 10 Lacs
Bengaluru
Work from Office
Skills required: Big Data workflows (ETL/ELT), hands-on Python, hands-on SQL, any cloud (GCP BigQuery preferred), Airflow (good knowledge of Airflow features, operators, scheduling, etc.). Note: the candidate will take a coding test (Python and SQL) during the interview process. This will be conducted through CoderPad; the panel will set it at run time.
Posted 1 week ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
PySpark, Python, SQL - strong focus on big data processing, which is core to data engineering. AWS Cloud Services (Lambda, Glue, S3, IAM) - indicates working with cloud-based data pipelines. Airflow, GitHub - essential for orchestration and version control in data workflows.
Posted 1 week ago
10.0 - 15.0 years
8 - 13 Lacs
Bengaluru
Work from Office
Good knowledge of the broadcast ecosystem and content processing elements, including workflows. Hands-on with BMS, Traffic and Playout - one globally renowned OEM product in each area. Good knowledge of dealing with currency data and reports from Nielsen/BARC. Good understanding of the Sales function in broadcast, including Traffic and currency, Affiliate, and Non-Linear distribution. Has worked on, or is certified in, cloud, with experience running and porting media systems to the cloud. Knowledge of OEM products dealing with DAM/MAM/CMS. Should have a good understanding of the content processing flow, including pre-production, production and distribution. Good exposure to emerging technologies like Data Analytics and Gen AI in solving practical industry problems. Experience with content processing elements, streaming standards and protocols is an advantage. JD For Media Consultant Engages with the customer and brings in value through prolific solutioning. Be the domain consultant and act as a bridge between the customer and the delivery teams. Translate business requirements into clear and concise functional specifications and solutions for technical teams. Propose innovative and practical solutions to address market and business challenges. Work and develop relationships with partners, working with them to create market-led solutions. Constantly be on the lookout for ways to create solutions that deliver better value to the customers. Work with BDM and plan sales strategies in response to market and key accounts. Take ownership of opportunities and preparation of responses to RFP/RFI or ad-hoc requirements, working with other stakeholders.
Posted 1 week ago
2.0 - 5.0 years
3 - 7 Lacs
Bengaluru
Work from Office
Experience in designing and developing data pipelines in a modern data stack (Snowflake, AWS, Airflow, DBT, etc.). Strong experience with Python. Over 2+ years of experience in Snowflake and DBT. Able to work the afternoon shift and front-end the customer independently, so strong communication skills are essential. Strong knowledge of Python, DBT, Snowflake and Airflow. Ability to manage both structured and unstructured data. Work with multiple data sources (APIs, databases, S3, etc.). Own the design, documentation, and lifecycle management of data pipelines. Help implement the CI/CD processes and release engineering for the organization's data pipelines. Experience in designing and developing CI/CD processes and managing release management for data pipelines. Proficient in Python, SQL, Airflow, AWS and Bitbucket, and in working with APIs and other types of data sources. Good to have: knowledge of Salesforce. Primary skills: AWS Cloud, Snowflake DW, Azure SQL, SQL, Python (must have), DBT (must have)
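For the Snowflake side of this stack, here is a hedged sketch using the official snowflake-connector-python package to run an aggregate query; the account, credentials, and object names are all placeholders.

```python
# Hedged Snowflake sketch: connect and run a small aggregate query.
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account>", user="<user>", password="<password>",  # placeholders
    warehouse="ANALYTICS_WH", database="RAW", schema="SALES",
)
try:
    cur = conn.cursor()
    cur.execute("""
        SELECT order_date, SUM(amount) AS revenue
        FROM orders
        GROUP BY order_date
        ORDER BY order_date DESC
        LIMIT 10
    """)
    for order_date, revenue in cur.fetchall():
        print(order_date, revenue)
finally:
    conn.close()
```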
Posted 1 week ago
4.0 - 7.0 years
5 - 9 Lacs
Bengaluru
Work from Office
PySpark, Python, SQL - strong focus on big data processing, which is core to data engineering. AWS Cloud Services (Lambda, Glue, S3, IAM) - indicates working with cloud-based data pipelines. Airflow, GitHub - essential for orchestration and version control in data workflows.
Posted 1 week ago
5.0 - 12.0 years
20 - 25 Lacs
Kolkata, Mumbai, New Delhi
Work from Office
You have an entrepreneurial spirit. You enjoy working as part of well-knit teams. You value the team over the individual. You welcome diversity at work and within the greater community. You aren't afraid to take risks. You appreciate a growth path with your leadership team that charts how you can grow inside and outside of the organization. You thrive on continuing education programs that your company sponsors to strengthen your skills and help you become a thought leader ahead of the industry curve. You are excited about creating change, because your skills can help the greater good of every customer, industry and community. We are hiring a talented GCP Lead Solution Architect (Data Migration, LSHC) - someone who can drive solution design and architecture meetings with the client, mentor and lead the team, and bring GCP as their area of expertise.
Posted 1 week ago
1.0 - 4.0 years
6 - 10 Lacs
Mumbai, Mumbai Suburban
Work from Office
Own, execute and drive the CRM campaigns (push notifications, email, SMS, in-app & browser notifications, WhatsApp) to drive channel revenue and visits. KEY DELIVERABLES: Creation, testing and delivery of campaigns for push and browser notifications, email, SMS and other owned media channels. CRM channel planning for push notifications, email, SMS and browser notifications. Identifying and driving improvement projects for CTR and campaign efficiency. Coordination with the creative team to get copy and creatives done as per schedule. Create automated campaigns by building workflows and data rules, handling data creation on Redshift, and creating schemas and workflows on the Campaign Management Platform. Build workflows to create and maintain reports for campaign performance. DESIRABLE SKILLS: Essential Attributes Teamwork, communication and interpersonal skills, analytical skills, dependability and a strong work ethic, adaptability and flexibility. Data handling on Excel and preferably on Redshift. Experience in category/marketing planning and execution. Desired Attributes Understanding of email marketing, push notifications and other CRM channels. Basic understanding of segmentation & marketing.
Posted 1 week ago
0 years
0 Lacs
Pune, Maharashtra, India
On-site
Looking for immediate joiners. We are looking for a candidate, available as soon as possible, with strong proficiency in the following areas: SQL, PL/SQL, and Oracle Query Development - solid hands-on experience in writing efficient and optimized queries for both Oracle and SQL Server databases. DAX and MDX - good proficiency in writing DAX (Data Analysis Expressions) for Power BI and MDX (Multidimensional Expressions) for SSAS. ETL & Reporting using the MSBI Stack - experience in developing, deploying, and maintaining solutions using: SSIS (SQL Server Integration Services), SSAS (SQL Server Analysis Services), SSRS (SQL Server Reporting Services) and Power BI. The candidate should be capable of integrating these tools with Oracle and Hadoop ecosystems (through Spark and Hive). Agile Practices & Ceremonies - familiarity with Agile delivery frameworks and tools such as Rally or JIRA. ITSM Processes - experience in handling incidents, changes, and problem management through BMC Remedy. Support Tasks - willingness to take on L1 and L2 support responsibilities related to the above platforms and solutions. Domain Knowledge - understanding of the Payments domain is a plus. Azure Data Services - hands-on experience with Azure data services (good to have).
Posted 1 week ago
3.0 years
0 Lacs
Gurugram, Haryana, India
On-site
Role Description: Sr. Data Engineer – Big Data The ideal candidate is a hands-on technology developer with experience in developing scalable applications and platforms. They must be at ease working in an agile environment with little supervision, and should be self-motivated with a passion for problem solving and continuous learning. Role and responsibilities: Strong technical, analytical, and problem-solving skills. Strong organizational skills, with the ability to work autonomously as well as in a team-based environment. Data pipeline framework development. Technical skills requirements The candidate must demonstrate proficiency in: CDH on-premise for data processing and extraction. Ability to own and deliver on large, multi-faceted projects. Fluency in complex SQL and experience with RDBMSs. Project experience with CDH, Spark, PySpark, Scala, Python, NiFi, Hive and NoSQL DBs. Experience designing and building big data pipelines. Experience working on large-scale, distributed systems. Strong hands-on experience with programming languages such as PySpark and Scala with Spark, and Python. Certification in Hadoop/Big Data – Hortonworks/Cloudera. Unix or shell scripting. Strong delivery background across the delivery of high-value, business-facing technical projects in major organizations. Experience managing client delivery teams, ideally coming from a Data Engineering / Data Science environment. Job Types: Full-time, Permanent Benefits: Health insurance Provident Fund Schedule: Day shift Ability to commute/relocate: Gurugram, Haryana: Reliably commute or planning to relocate before starting work (Required) Application Question(s): Are you serving notice period at your current organization? Education: Bachelor's (Required) Experience: Python: 3 years (Required) Work Location: In person
Posted 1 week ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
India's major tech hubs are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
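For example, a hedged sketch of querying Hive from Python with the PyHive library (the host, database, and table are placeholder assumptions) might look like this:

```python
# Hedged example: run a HiveQL aggregate from Python via PyHive.
from pyhive import hive

conn = hive.Connection(
    host="hive-server.example.com", port=10000, database="default"  # placeholders
)
cursor = conn.cursor()
cursor.execute("""
    SELECT country, COUNT(*) AS orders
    FROM sales_orders
    GROUP BY country
    ORDER BY orders DESC
    LIMIT 5
""")
for country, orders in cursor.fetchall():
    print(country, orders)
conn.close()
```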
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!