8.0 - 13.0 years
15 - 25 Lacs
Bengaluru
Hybrid
We are looking for an enthusiastic and technology-proficient Big Data Engineer who is eager to participate in the design and implementation of a top-notch Big Data solution deployed at massive scale. Our customer is one of the world's largest technology companies, based in Silicon Valley with operations all over the world. On this project we work at the bleeding edge of Big Data technology to develop a high-performance data analytics platform that handles petabyte-scale datasets.

Essential functions
- Participate in the design and development of Big Data analytical applications.
- Design, support, and continuously enhance the project code base, continuous integration pipeline, etc.
- Write complex ETL processes and frameworks for analytics and data management.
- Implement large-scale, near real-time streaming data processing pipelines (a minimal sketch follows this listing).
- Work within a team of industry experts on cutting-edge Big Data technologies to develop solutions for deployment at massive scale.

Qualifications
- Strong coding experience with Scala, Spark, Hive, and Hadoop.
- In-depth knowledge of Hadoop and Spark; experience with data mining and stream processing technologies (Kafka, Spark Streaming, Akka Streams).
- Understanding of best practices in data quality and quality engineering.
- Experience with version control systems, Git in particular.
- Desire and ability to learn new tools and technologies quickly.

Would be a plus
- Knowledge of Unix-based operating systems (bash/ssh/ps/grep etc.).
- Experience with GitHub-based development processes.
- Experience with JVM build systems (SBT, Maven, Gradle).

We offer
- Opportunity to work on bleeding-edge projects
- Work with a highly motivated and dedicated team
- Competitive salary
- Flexible schedule
- Benefits package: medical insurance, sports
- Corporate social events
- Professional development opportunities
- Well-equipped office

About us
Grid Dynamics (NASDAQ: GDYN) is a leading provider of technology consulting, platform and product engineering, AI, and advanced analytics services. Fusing technical vision with business acumen, we solve the most pressing technical challenges and enable positive business outcomes for enterprise companies undergoing business transformation. A key differentiator for Grid Dynamics is our 8 years of experience and leadership in enterprise AI, supported by profound expertise and ongoing investment in data, analytics, cloud & DevOps, application modernization, and customer experience. Founded in 2006, Grid Dynamics is headquartered in Silicon Valley with offices across the Americas, Europe, and India.
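As a hedged illustration of the streaming work mentioned above: a minimal Spark Structured Streaming job that consumes a Kafka topic and maintains windowed counts. The posting emphasizes Scala, but the sketch is in PySpark to keep all examples in this document in one language; the broker address, topic name, and window size are hypothetical.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, count, window

spark = SparkSession.builder.appName("events-stream").getOrCreate()

# Read a Kafka topic as an unbounded streaming DataFrame
# (broker and topic are placeholders).
events = (spark.readStream
          .format("kafka")
          .option("kafka.bootstrap.servers", "broker:9092")
          .option("subscribe", "events")
          .load())

# Kafka delivers values as bytes; count events per 5-minute window.
counts = (events
          .select(col("timestamp"), col("value").cast("string"))
          .groupBy(window(col("timestamp"), "5 minutes"))
          .agg(count("*").alias("event_count")))

# Write running counts to the console; a real pipeline would target
# a sink such as a Hive table or another Kafka topic.
query = (counts.writeStream
         .outputMode("complete")
         .format("console")
         .start())
query.awaitTermination()
```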
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Experience: 5+ Years

Role Overview: Responsible for designing, building, and maintaining scalable data pipelines and architectures. This role requires expertise in SQL, ETL frameworks, big data technologies, cloud services, and programming languages to ensure efficient data processing, storage, and integration across systems.

Requirements:
- Minimum 5+ years of experience as a Data Engineer or in a similar data-related role.
- Strong proficiency in SQL for querying databases and performing data transformations.
- Experience with data pipeline frameworks (e.g., Apache Airflow, Luigi, or custom-built solutions); a minimal Airflow sketch follows this listing.
- Proficiency in at least one programming language such as Python, Java, or Scala for data processing tasks.
- Experience with cloud-based data services and data lakes (e.g., Snowflake, Databricks, AWS S3, GCP BigQuery, or Azure Data Lake).
- Familiarity with big data technologies (e.g., Hadoop, Spark, Kafka).
- Experience with ETL tools (e.g., Talend, Apache NiFi, SSIS) and data integration techniques.
- Knowledge of data warehousing concepts and database design principles.
- Good understanding of NoSQL and big data technologies such as MongoDB, Cassandra, Spark, Hadoop, and Hive.
- Experience with data modeling and schema design for OLAP and OLTP systems.
- Familiarity with containerization and orchestration tools (e.g., Docker, Kubernetes).

Educational Qualification: Bachelor's/Master's degree in Computer Science, Information Technology, or a related field.
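To make the pipeline-framework requirement concrete, here is a minimal Airflow 2.x DAG wiring three placeholder tasks into an extract-transform-load sequence. The task bodies are stubs; real pipelines would call out to databases, Spark jobs, or cloud services.

```python
from datetime import datetime

from airflow import DAG
from airflow.operators.python import PythonOperator

def extract():
    print("pull raw data from a source system")  # stub

def transform():
    print("clean and reshape the extracted data")  # stub

def load():
    print("write the result to the warehouse")  # stub

with DAG(
    dag_id="daily_etl",
    start_date=datetime(2024, 1, 1),
    schedule_interval="@daily",
    catchup=False,
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    transform_task = PythonOperator(task_id="transform", python_callable=transform)
    load_task = PythonOperator(task_id="load", python_callable=load)

    # Linear dependency chain: extract -> transform -> load
    extract_task >> transform_task >> load_task
```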
Posted 1 week ago
3.0 - 8.0 years
5 - 15 Lacs
New Delhi, Hyderabad, Gurugram
Work from Office
Primary Skills – Hadoop, Hive, Python, SQL, PySpark/Spark. Location – Hyderabad / Gurgaon.
Posted 1 week ago
12.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
To get the best candidate experience, please consider applying for a maximum of 3 roles within 12 months to ensure you are not duplicating efforts.

Job Category: Software Engineering

About Salesforce
We're Salesforce, the Customer Company, inspiring the future of business with AI + Data + CRM. Leading with our core values, we help companies across every industry blaze new trails and connect with customers in a whole new way. And we empower you to be a Trailblazer, too — driving your performance and career growth, charting new paths, and improving the state of the world. If you believe in business as the greatest platform for change and in companies doing well and doing good, you've come to the right place.

As an engineering leader, you will focus on developing the team around you. Bring your technical chops to drive your teams to success around feature delivery and live-site management for a complex cloud infrastructure service. You are as enthusiastic about recruiting and building a team as you are about the challenging technical problems your team will solve. You will also help shape, direct, and execute our product vision. You'll be challenged to blend customer-centric principles, industry-changing innovation, and the reliable delivery of new technologies. You will work directly with engineering, product, and design to create experiences that reinforce the Salesforce brand by delighting and wowing our customers with highly reliable and available services.

Responsibilities
- Drive the vision of enabling a full suite of Salesforce applications on Google Cloud in collaboration with teams across geographies.
- Build and lead a team of engineers to deliver cloud frameworks, infrastructure automation tools, workflows, and validation platforms on our public cloud platforms.
- Build and evolve large-scale distributed systems that reliably process billions of data points.
- Proactively identify reliability and data quality problems and drive the triaging and remediation process.
- Invest in continuous employee development of a highly technical team by mentoring and coaching engineers and technical leads.
- Recruit and attract top talent.
- Drive execution and delivery by collaborating with cross-functional teams, architects, product owners, and engineers.

Required Skills/Experiences
- B.S./M.S. in Computer Science or an equivalent field.
- 12+ years of relevant experience in software development teams, with 5+ years of experience managing teams; experience managing 2+ engineering teams.
- Experience building services on public cloud platforms like GCP, AWS, or Azure.
- Passionate, curious, creative self-starter who approaches problems with the right methodology and intelligent decisions.
- Laser focus on impact, balancing effort to value, and getting things done.
- Experience providing mentorship, technical leadership, and guidance to team members.
- Strong customer service orientation and a desire to help others succeed.
- Top-notch written and oral communication skills.

Desired Skills/Experiences
- Working knowledge of modern technologies/services on public cloud.
- Experience with container orchestration systems such as Kubernetes, Docker, Helios, or Fleet.
- Expertise in open-source technologies like Elasticsearch, Logstash, Kafka, MongoDB, Hadoop, Spark, Trino/Presto, Hive, Airflow, and Splunk.

Benefits & Perks
Comprehensive benefits package including well-being reimbursement, generous parental leave, adoption assistance, fertility benefits, and more!
World-class enablement and on-demand training with Trailhead.com
Exposure to executive thought leaders and regular 1:1 coaching with leadership
Volunteer opportunities and participation in our 1:1:1 model for giving back to the community
For more details, visit https://www.salesforcebenefits.com/

Accommodations
If you require assistance due to a disability applying for open positions, please submit a request via this Accommodations Request Form.

Posting Statement
Salesforce is an equal opportunity employer and maintains a policy of non-discrimination with all employees and applicants for employment. What does that mean exactly? It means that at Salesforce, we believe in equality for all. And we believe we can lead the path to equality in part by creating a workplace that's inclusive and free from discrimination. Know your rights: workplace discrimination is illegal. Any employee or potential employee will be assessed on the basis of merit, competence, and qualifications – without regard to race, religion, color, national origin, sex, sexual orientation, gender expression or identity, transgender status, age, disability, veteran or marital status, political viewpoint, or other classifications protected by law. This policy applies to current and prospective employees, no matter where they are in their Salesforce employment journey. It also applies to recruiting, hiring, job assignment, compensation, promotion, benefits, training, assessment of job performance, discipline, termination, and everything in between. Recruiting, hiring, and promotion decisions at Salesforce are fair and based on merit. The same goes for compensation, benefits, promotions, transfers, reduction in workforce, recall, training, and education.
Posted 1 week ago
2.0 - 4.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the role
Analyse complex datasets and make them consumable using visual storytelling and visualisation tools such as reports and dashboards built using approved tools (Tableau, PyDash).

You will be responsible for
- Understanding business needs with an in-depth understanding of Tesco processes; building on Tesco processes and knowledge by applying CI tools and techniques.
- Completing tasks and transactions within agreed KPIs.
- Solving problems by analysing solution alternatives.
- Engaging with market leaders to understand the problems to be solved, translating business problems into analytical problems, taking ownership of specified analyses, and translating the answers back to decision makers in the business.
- Manipulating, analysing, and synthesising large, complex datasets from different sources while ensuring data quality and integrity.
- Thinking beyond the ask and developing analyses and reports that contribute beyond the basic request.
- Being accountable for high-quality and timely completion of specified work deliverables and ad-hoc business asks.
- Writing code that is well detailed, structured, and compute-efficient.
- Driving value delivery through efficiency gains by automating repeatable tasks, report creation, or dashboard refreshes.
- Collaborating with colleagues to craft, implement, and measure consumption of analyses, reports, and dashboards.
- Contributing to the development of knowledge assets and reusable modules on GitHub/Wiki.
- Handling high-volume, time-pressured business asks and ad-hoc requests.

You will need
2-4 years of experience in analysis-oriented delivery in one of the following domains - retail, CPG, telecom, or hospitality - ideally in one of the following functional areas: marketing, supply chain, customer, space range and merchandising, operations, finance, or digital. A strong understanding of business decisions; skills to develop visualisations, self-service dashboards, and reports using Tableau; basic statistical concepts (correlation analysis and hypothesis testing - a short sketch follows this listing); good skills in analysing data using advanced Excel, advanced SQL, Hive, and Python; data warehousing concepts (Hadoop, Teradata); automation using Alteryx and Python.

What's in it for you?
At Tesco, we are committed to providing the best for you. As a result, our colleagues enjoy a unique, differentiated, market-competitive reward package, based on current industry practices, for all the work they put into serving our customers, communities, and planet a little better every day. Our Tesco Rewards framework consists of three pillars - Fixed Pay, Incentives, and Benefits. Total Rewards offered at Tesco are determined by four principles - simple, fair, competitive, and sustainable.
Salary - Your fixed pay is the guaranteed pay as per your contract of employment.
Performance Bonus - Opportunity to earn additional compensation based on performance, paid annually.
Leave & Time-off - Colleagues are entitled to 30 days of leave (18 days of Earned Leave, 12 days of Casual/Sick Leave) and 10 national and festival holidays, as per the company's policy.
Making Retirement Tension-Free - In addition to statutory retirement benefits, Tesco enables colleagues to participate in voluntary programmes like NPS and VPF.
Health is Wealth - Tesco promotes programmes that support a culture of health and wellness, including insurance for colleagues and their families.
Our medical insurance provides coverage for dependents, including parents or in-laws.
Mental Wellbeing - We offer mental health support through self-help tools, community groups, ally networks, face-to-face counselling, and more for both colleagues and dependents.
Financial Wellbeing - Through our financial literacy partner, we offer one-to-one financial coaching at discounted rates, as well as salary advances on earned wages upon request.
Save As You Earn (SAYE) - Our SAYE programme allows colleagues to transition from being employees to Tesco shareholders through a structured 3-year savings plan.
Physical Wellbeing - Our green campus promotes physical wellbeing with facilities that include a cricket pitch, football field, badminton and volleyball courts, along with indoor games, encouraging a healthier lifestyle.

About Us
Tesco in Bengaluru is a multi-disciplinary team serving our customers, communities, and planet a little better every day across markets. Our goal is to create a sustainable competitive advantage for Tesco by standardising processes, delivering cost savings, enabling agility through technological solutions, and empowering our colleagues to do even more for our customers. With cross-functional expertise, a wide network of teams, and strong governance, we reduce complexity, thereby offering high-quality services for our customers. Tesco in Bengaluru, established in 2004 to enable standardisation and build centralised capabilities and competencies, makes the experience better for our millions of customers worldwide and simpler for over 3,30,000 colleagues.

Tesco Business Solutions: Established in 2017, Tesco Business Solutions (TBS) has evolved from a single-entity traditional shared services operation in Bengaluru, India (from 2004) to a global, purpose-driven, solutions-focused organisation. TBS is committed to driving scale at speed and delivering value to the Tesco Group through the power of decision science. With over 4,400 highly skilled colleagues globally, TBS supports markets and business units across four locations in the UK, India, Hungary, and the Republic of Ireland. The organisation underpins everything that the Tesco Group does, bringing innovation, a solutions mindset, and agility to its operations and support functions, building winning partnerships across the business. TBS's focus is on adding value and creating impactful outcomes that shape the future of the business. TBS creates a sustainable competitive advantage for the Tesco Group by becoming the partner of choice for talent, transformation, and value creation.
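The statistical concepts named in the qualifications can be illustrated in a few lines of Python. This sketch uses synthetic data, so the variable names and numbers are purely illustrative; it shows a Pearson correlation and a two-sample t-test of the kind used to evaluate, say, a promotion's effect on weekly sales.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
footfall = rng.normal(1000, 50, size=52)              # weekly store footfall (synthetic)
sales = 2.5 * footfall + rng.normal(0, 100, size=52)  # weekly sales, correlated with footfall

# Correlation analysis: Pearson's r and its p-value.
r, p_corr = stats.pearsonr(footfall, sales)
print(f"correlation r={r:.2f}, p={p_corr:.4f}")

# Hypothesis testing: did sales shift between the first and second half-year?
before, after = sales[:26], sales[26:]
t, p_ttest = stats.ttest_ind(before, after, equal_var=False)
print(f"t={t:.2f}, p={p_ttest:.4f}")
```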
Posted 1 week ago
8.0 - 15.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Position Summary... What you'll do...

Role: Staff, Data Scientist
Experience: 8-15 years
Location: Chennai

About the EBS team: Enterprise Business Services is invested in building a compact, robust organization that includes service operations and technology solutions for Finance, People, and Associate Digital Experience. Our team is responsible for the design and development of solutions that know our consumers' needs better than ever by predicting what they want based on unconstrained demand, and that efficiently unlock strategic growth, economic profit, and wallet share by orchestrating intelligent, connected planning and decisioning across all functions. We interact with multiple teams across the company to provide scalable, robust technical solutions. This role will play a crucial part in overseeing the planning, execution, and delivery of complex projects within the team.

Walmart's Enterprise Business Services (EBS) is a powerhouse of several exceptional teams delivering world-class technology solutions and services, making a profound impact at every level of Walmart. As a key part of Walmart Global Tech, our teams set the bar for operational excellence and leverage emerging technology to support millions of customers, associates, and stakeholders worldwide. Each time an associate turns on their laptop, a customer makes a purchase, a new supplier is onboarded, the company closes the books, physical and legal risk is avoided, and we pay our associates consistently and accurately - that is EBS. Joining EBS means embarking on a journey of limitless growth, relentless innovation, and the chance to set new industry standards that shape the future of Walmart.

About Team
The data science team at the Enterprise Business Services pillar at Walmart Global Tech focuses on using the latest research in machine learning, statistics, and optimization to solve business problems. We mine data, distill insights, extract information, build analytical models, deploy machine learning algorithms, and use the latest algorithms and technology to empower business decision-making. In addition, we work with engineers to build reference architectures and machine learning pipelines in a big data ecosystem to productize our solutions. Advanced analytical algorithms driven by our team help Walmart optimize business operations and business practices and change the way our customers shop. The data science community at Walmart Global Tech is active in most Hack events, utilizing the petabytes of data at our disposal to build some of the coolest ideas. All the work we do at Walmart Labs will eventually benefit our operations and our associates, helping Customers Save Money to Live Better.

What You Will Do
As a Staff Data Scientist for Walmart Global Tech, you'll have the opportunity to:
- Drive data-derived insights across a wide range of retail and finance divisions by developing advanced statistical models, machine learning algorithms, and computational algorithms based on business initiatives.
- Direct the gathering of data, assess data validity, and synthesize data into large analytics datasets to support project goals.
- Utilize big data analytics and advanced data science techniques to identify trends, patterns, and discrepancies in data.
- Determine additional data needed to support insights.
- Build and train AI/ML models for replication in future projects.
- Deploy and maintain data science solutions.
- Communicate recommendations to business partners and influence future plans based on insights.
- Consult with business stakeholders regarding algorithm-based recommendations and be a thought leader in developing these into business actions.
- Closely partner with the Senior Manager and Director of Data Science to drive data science adoption in the domain.
- Guide data scientists, senior data scientists, and staff data scientists across multiple sub-domains to ensure on-time delivery of ML products.
- Drive efficiency across the domain in terms of DS and ML best practices, MLOps practices, resource utilization, reusability, and multi-tenancy.
- Lead multiple complex ML products and guide senior tech leads in the domain in efficiently leading their products.
- Drive synergies across different products in terms of algorithmic innovation and sharing of best practices.
- Proactively identify complex business problems that can be solved using advanced ML, finding opportunities and gaps in the current business domain.
- Evaluate proposed business cases for projects and initiatives.

What You Will Bring
- Master's with more than 10 years, or Ph.D. with more than 8 years, of relevant experience. Educational qualifications should be in Computer Science, Statistics, Mathematics, or a related area.
- Minimum 6 years of experience as a data science technical lead.
- Ability to lead multiple data science projects end to end.
- Deep experience in building data science solutions in areas like fraud prevention, forecasting, shrink and waste reduction, inventory management, recommendation, assortment, and price optimization.
- Deep experience in simultaneously leading multiple data science initiatives end to end - from translating business needs into analytical asks, through leading the process of building solutions, to their eventual deployment and maintenance.
- Strong experience in machine learning: classification models, regression models, NLP, forecasting, unsupervised models, optimization, graph ML, causal inference, causal ML, statistical learning, experimentation, and Gen-AI.
- In Gen-AI, it is desirable to have experience in embedding generation from training materials; storage and retrieval from vector databases; set-up and provisioning of managed LLM gateways; development of retrieval-augmented generation (RAG) based LLM agents; model selection; iterative prompt engineering and fine-tuning based on accuracy and user feedback; and monitoring and governance.
- Ability to scale and deploy data science solutions.
- Strong experience with Python and/or R; experience in GCP/Azure.
- Strong experience with Python, PySpark, Google Cloud Platform, Vertex AI, Kubeflow, and model deployment.
- Strong experience with big data platforms - Hadoop (Hive, MapReduce, HQL, Scala).
- Experience with GPU/CUDA for computational efficiency.

About Walmart Global Tech
Imagine working in an environment where one line of code can make life easier for hundreds of millions of people. That's what we do at Walmart Global Tech. We're a team of software engineers, data scientists, cybersecurity experts, and service professionals within the world's leading retailer who make an epic impact and are at the forefront of the next retail disruption. People are why we innovate, and people power our innovations. We are people-led and tech-empowered. We train our team in the skillsets of the future and bring in experts like you to help us grow.
We have roles for those chasing their first opportunity as well as those looking for the opportunity that will define their career. Here, you can kickstart a great career in tech, gain new skills and experience for virtually every industry, or leverage your expertise to innovate at scale, impact millions, and reimagine the future of retail.

Flexible, hybrid work
We use a hybrid way of working, with primary in-office presence coupled with an optimal mix of virtual presence. We use our campuses to collaborate and be together in person, as business needs require and for development and networking opportunities. This approach helps us make quicker decisions, remove location barriers across our global team, and be more flexible in our personal lives.

Benefits
Beyond our great compensation package, you can receive incentive awards for your performance. Other great perks include a host of best-in-class benefits: maternity and parental leave, PTO, health benefits, and much more.

Belonging
We aim to create a culture where every associate feels valued for who they are, rooted in respect for the individual. Our goal is to foster a sense of belonging, to create opportunities for all our associates, customers, and suppliers, and to be a Walmart for everyone. At Walmart, our vision is "everyone included." By fostering a workplace culture where everyone is—and feels—included, everyone wins. Our associates and customers reflect the makeup of all 19 countries where we operate. By making Walmart a welcoming place where all people feel like they belong, we're able to engage associates, strengthen our business, improve our ability to serve customers, and support the communities where we operate.

Equal Opportunity Employer
Walmart, Inc., is an Equal Opportunities Employer – By Choice. We believe we are best equipped to help our associates, customers, and the communities we serve live better when we really know them. That means understanding, respecting, and valuing unique styles, experiences, identities, ideas, and opinions – while being inclusive of all people.

Minimum Qualifications...
Outlined below are the required minimum qualifications for this position. If none are listed, there are no minimum qualifications.
Option 1: Bachelor's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology, or a related field and 4 years' experience in an analytics-related field.
Option 2: Master's degree in Statistics, Economics, Analytics, Mathematics, Computer Science, Information Technology, or a related field and 2 years' experience in an analytics-related field.
Option 3: 6 years' experience in an analytics or related field.

Preferred Qualifications...
Outlined below are the optional preferred qualifications for this position. If none are listed, there are no preferred qualifications.

Primary Location...
Rmz Millenia Business Park, No 143, Campus 1B (1st-6th Floor), Dr. MGR Road (North Veeranam Salai), Perungudi, India
R-2182242
Posted 1 week ago
7.0 years
0 Lacs
Greater Kolkata Area
On-site
Overview

Working at Atlassian
Atlassians can choose where they work - whether in an office, from home, or a combination of the two. That way, Atlassians have more control over supporting their family, personal goals, and other priorities. We can hire people in any country where we have a legal entity. Interviews and onboarding are conducted virtually, a part of being a distributed-first company.

Responsibilities
Atlassian is looking for a Senior Data Engineer to join our Data Engineering team, which is responsible for building our data lake, maintaining our big data pipelines and services, and facilitating the movement of billions of messages each day. We work directly with business stakeholders and plenty of platform and engineering teams to enable growth and retention strategies at Atlassian. We are looking for an open-minded, structured thinker who is passionate about building services that scale.

On a typical day you will help our stakeholder teams ingest data faster into our data lake, you'll find ways to make our data pipelines more efficient, or you'll even come up with ideas to help instigate self-serve data engineering within the company. You'll get the opportunity to work on an AWS-based data lake backed by the full suite of open-source projects such as Spark and Airflow. We are a team with little legacy in our tech stack, and as a result you'll spend less time paying off technical debt and more time identifying ways to make our platform better and improve our users' experience.

Qualifications
As a Senior Data Engineer on the DE team, you will have the opportunity to apply your strong technical experience building highly reliable services to managing and orchestrating a multi-petabyte-scale data lake. You enjoy working in a fast-paced environment, and you are able to take vague requirements and transform them into solid solutions. You are motivated by solving challenging problems, where creativity is as crucial as your ability to write code and test cases.

On Your First Day, We'll Expect You To Have
- A BS in Computer Science or equivalent experience
- At least 7+ years of professional experience as a Sr. Software Engineer or Sr. Data Engineer
- Strong programming skills (Python, Java, or Scala preferred)
- Experience writing SQL, structuring data, and data storage practices
- Experience with data modeling
- Knowledge of data warehousing concepts
- Experience building data pipelines and platforms
- Experience with Databricks, Spark, Hive, Airflow, and other streaming technologies to process incredible volumes of streaming data
- Experience in modern software development practices (Agile, TDD, CI/CD)
- A strong focus on data quality and experience with internal/external tools/frameworks to automatically detect data issues and anomalies (a minimal sketch of such a check follows this listing)
- A willingness to accept failure, learn, and try again
- An open mind to try solutions that may seem crazy at first
- Experience working on Amazon Web Services (in particular using EMR, Kinesis, RDS, S3, SQS, and the like)

It's Preferred That You Have
- Experience building self-service tooling and platforms
- Built and designed Kappa architecture platforms
- Contributed to open-source projects (e.g., operators in Airflow)
- Experience with Data Build Tool (DBT)

Our Perks & Benefits
Atlassian offers a wide range of perks and benefits designed to support you, your family, and to help you engage with your local community. Our offerings include health and wellbeing resources, paid volunteer days, and so much more. To learn more, visit go.atlassian.com/perksandbenefits.
About Atlassian
At Atlassian, we're motivated by a common goal: to unleash the potential of every team. Our software products help teams all over the planet, and our solutions are designed for all types of work. Team collaboration through our tools makes what may be impossible alone, possible together.

We believe that the unique contributions of all Atlassians create our success. To ensure that our products and culture continue to incorporate everyone's perspectives and experience, we never discriminate based on race, religion, national origin, gender identity or expression, sexual orientation, age, or marital, veteran, or disability status. All your information will be kept confidential according to EEO guidelines.

To provide you the best experience, we can support you with accommodations or adjustments at any stage of the recruitment process. Simply inform our Recruitment team during your conversation with them.

To learn more about our culture and hiring process, visit go.atlassian.com/crh.
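Hedged illustration of the automated data-quality checks mentioned in the qualifications: a small PySpark routine that validates an ingested partition before it is published downstream. The S3 path, column names, and thresholds are placeholders; production teams would more likely use a framework, but the underlying assertions look like this.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

spark = SparkSession.builder.appName("dq-checks").getOrCreate()

# Hypothetical ingested partition; path and schema are placeholders.
df = spark.read.parquet("s3://data-lake/events/date=2024-01-01/")

total_rows = df.count()
checks = {
    "no_null_ids": df.filter(col("event_id").isNull()).count() == 0,
    "no_duplicate_ids": total_rows == df.dropDuplicates(["event_id"]).count(),
    "row_count_floor": total_rows > 1_000,  # anomaly guard: partition too small
}

failed = [name for name, passed in checks.items() if not passed]
if failed:
    # Fail loudly so the orchestrator (e.g., Airflow) blocks downstream tasks.
    raise ValueError(f"Data quality checks failed: {failed}")
```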
Posted 1 week ago
0.0 - 10.0 years
0 Lacs
Delhi, Delhi
On-site
Job Profile
Role: AI Developer
Location: Delhi
Experience: 6-10 Years

Qualifications:
1. Bachelor's or Master's degree in Computer Science, Information Technology, or a related field.
2. Proven experience of 6-10 years as an AI Developer or in a similar role.
3. Proficient in coding, with the ability to develop and implement AI models and algorithms from scratch.
4. Strong knowledge of AI frameworks and libraries.
5. Proficiency in data manipulation and analysis methods.
6. Excellent problem-solving abilities and attention to detail.
7. Good communication and teamwork skills.

Responsibilities:
1. Implement AI solutions that seamlessly integrate with existing business systems to enhance functionality and user interaction.
2. Manage the data flow and infrastructure for the effective functioning of the AI department.
3. Design, develop, and implement AI models and algorithms from scratch.
4. Collaborate with the IT team to ensure the successful deployment of AI models.
5. Continuously research and implement new AI technologies to improve existing systems.
6. Maintain up-to-date knowledge of AI and machine learning trends and advancements.
7. Provide technical guidance and support to the team as needed.

Coding Knowledge Required:
1. Proficiency in programming languages like Python, Java, R, etc.
2. Experience with machine learning frameworks like TensorFlow or PyTorch (a minimal PyTorch sketch follows this listing).
3. Knowledge of cloud platforms like AWS, Google Cloud, or Azure.
4. Familiarity with databases, both SQL and NoSQL.
5. Understanding of data structures, data modeling, and software architecture.
6. Experience with distributed data/computing tools like Hadoop, Hive, Spark, etc.

Job Type: Full-time
Pay: ₹14,214.66 - ₹66,535.00 per month
Schedule: Day shift
Work Location: In person
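As a hedged example of the framework experience asked for above, a minimal PyTorch training loop that fits a small regression network on synthetic data. Everything here (data, architecture, hyperparameters) is illustrative only.

```python
import torch
from torch import nn

# Synthetic regression data: 256 samples, 10 features.
X = torch.randn(256, 10)
true_w = torch.randn(10, 1)
y = X @ true_w + 0.1 * torch.randn(256, 1)

# A tiny feed-forward network.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()   # backpropagate gradients
    optimizer.step()  # update weights

print(f"final training loss: {loss.item():.4f}")
```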
Posted 1 week ago
7.0 - 10.0 years
8 - 14 Lacs
Hyderabad
Hybrid
Responsibilities of the Candidate:
- Be responsible for the design and development of big data solutions. Partner with domain experts, product managers, analysts, and data scientists to develop Big Data pipelines in Hadoop.
- Be responsible for moving all legacy workloads to a cloud platform.
- Work with data scientists to build client pipelines using heterogeneous sources and provide engineering services for PySpark data science applications.
- Ensure automation through CI/CD across platforms, both in the cloud and on-premises.
- Define needs around maintainability, testability, performance, security, quality, and usability for the data platform.
- Drive implementation, consistent patterns, reusable components, and coding standards for data engineering processes.
- Convert SAS-based pipelines into languages like PySpark and Scala to execute on Hadoop and non-Hadoop ecosystems (a minimal sketch of such a port follows this listing).
- Tune big data applications on Hadoop and non-Hadoop platforms for optimal performance.
- Apply an in-depth understanding of how data analytics collectively integrates within the sub-function, and coordinate and contribute to the objectives of the entire function.
- Produce detailed analyses of issues where the best course of action is not evident from the information available, but actions must be recommended or taken.
- Assess risk when business decisions are made, demonstrating particular consideration for the firm's reputation and safeguarding Citigroup, its clients, and assets, by driving compliance with applicable laws, rules, and regulations, adhering to policy, applying sound ethical judgment regarding personal behavior, conduct, and business practices, and escalating, managing, and reporting control issues with transparency.

Requirements:
- 6+ years of total IT experience.
- 3+ years of experience with Hadoop (Cloudera) / big data technologies.
- Knowledge of the Hadoop ecosystem and big data technologies; hands-on experience with the Hadoop ecosystem (HDFS, MapReduce, Hive, Pig, Impala, Spark, Kafka, Kudu, Solr).
- Experience in designing and developing data pipelines for data ingestion or transformation using Java, Scala, or Python.
- Experience with Spark programming (PySpark, Scala, or Java).
- Hands-on experience with Python/PySpark/Scala and basic libraries for machine learning is required.
- Proficient in programming in Java or Python; prior Apache Beam/Spark experience is a plus.
- Hands-on experience in CI/CD, scheduling, and scripting.
- Ensure automation through CI/CD across platforms, both in the cloud and on-premises.
- System-level understanding: data structures, algorithms, distributed storage and compute.
- A can-do attitude toward solving complex business problems, with good interpersonal and teamwork skills.
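To make the SAS-to-PySpark conversion concrete, a sketch of the kind of aggregation a SAS DATA step plus PROC SUMMARY might become. Paths, column names, and the repartitioning choice are all hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("sas-port").getOrCreate()

# Hypothetical input table; in a migration this replaces the SAS input dataset.
txns = spark.read.parquet("hdfs:///data/transactions/")

monthly = (txns
           .filter(F.col("status") == "SETTLED")                        # WHERE clause / IF filter
           .withColumn("month", F.date_trunc("month", F.col("txn_ts")))
           .groupBy("month", "account_id")                              # PROC SUMMARY class vars
           .agg(F.sum("amount").alias("total_amount"),
                F.count("*").alias("txn_count")))

# Partition the output by month and avoid a small-files explosion -
# a typical tuning step when targeting Hadoop.
(monthly.repartition("month")
        .write.mode("overwrite")
        .partitionBy("month")
        .parquet("hdfs:///out/monthly_summary/"))
```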
Posted 1 week ago
5.0 - 10.0 years
32 Lacs
Bengaluru
Work from Office
Responsibilities:
- Design and build a Python-based code generation framework and runtime engine driven by a Business Rules repository.

Requirements:
- Minimum 5 years of experience in the build and deployment of big data applications using Spark SQL and Spark Streaming in Python.
- Expertise in graph algorithms and advanced recursion techniques.
- Minimum 5 years of extensive experience in the design, build, and deployment of Python-based applications.
- Minimum 3 years of experience with the following: Hive, YARN, Kafka, HBase, MongoDB.
- Hands-on experience in generating/parsing XML and JSON documents, and REST API requests/responses (a minimal sketch follows this listing).
- Bachelor's degree in a quantitative field (such as Engineering, Computer Science, Statistics, Econometrics) and a minimum of 5 years of experience.
- Expertise in handling complex, large-scale big data environments, preferably 20 TB+.
- Hands-on experience writing complex SQL queries and exporting and importing large amounts of data using utilities.
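A hedged sketch of the JSON/REST handling named in the requirements: fetch rule definitions from a (hypothetical) REST endpoint, generate trivial SQL from each rule, and post an acknowledgment back as JSON. The URL, rule schema, and generated SQL shape are illustrative only.

```python
import json
import requests

BASE = "https://rules.example.com/api/v1"  # hypothetical endpoint

# Fetch rule definitions as JSON.
resp = requests.get(f"{BASE}/rules", timeout=10)
resp.raise_for_status()
rules = resp.json()  # assumed: a list of {"table", "column", "op", "value"} dicts

def rule_to_sql(rule: dict) -> str:
    """Generate a simple filter query from one rule record."""
    return (f"SELECT * FROM {rule['table']} "
            f"WHERE {rule['column']} {rule['op']} {rule['value']}")

generated = [rule_to_sql(r) for r in rules]

# Report back how many statements were generated.
ack = json.dumps({"generated": len(generated)})
requests.post(f"{BASE}/ack", data=ack,
              headers={"Content-Type": "application/json"}, timeout=10)
```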
Posted 1 week ago
5.0 years
0 Lacs
Pune, Maharashtra, India
Remote
About the Team
Data is at the foundation of DoorDash's success. The Data Engineering team builds database solutions for various use cases including reporting, product analytics, marketing optimization, and financial reporting. By implementing pipelines, data structures, and data warehouse architectures, this team serves as the foundation for decision-making at DoorDash.

About the Role
DoorDash is looking for a Senior Data Engineer to be a technical powerhouse who helps us scale our data infrastructure, automation, and tools to meet growing business needs.

You're excited about this opportunity because you will…
- Work with business partners and stakeholders to understand data requirements
- Work with engineering, product teams, and third parties to collect required data
- Design, develop, and implement large-scale, high-volume, high-performance data models and pipelines for the Data Lake and Data Warehouse
- Develop and implement data quality checks, conduct QA, and implement monitoring routines
- Improve the reliability and scalability of our ETL processes
- Manage a portfolio of data products that deliver high-quality, trustworthy data
- Help onboard and support other engineers as they join the team

We're excited about you because…
- 5+ years of professional experience
- 3+ years of experience working in data engineering, business intelligence, or a similar role
- Proficiency in programming languages such as Python/Java
- 3+ years of experience in ETL orchestration and workflow management tools like Airflow, Flink, Oozie, and Azkaban on AWS/GCP
- Expertise in database fundamentals, SQL, and distributed computing
- 3+ years of experience with the distributed data ecosystem (Spark, Hive, Druid, Presto) and streaming technologies such as Kafka/Flink
- Experience working with Snowflake, Redshift, PostgreSQL, and/or other DBMS platforms
- Excellent communication skills and experience working with technical and non-technical teams
- Knowledge of reporting tools such as Tableau, Superset, and Looker
- Comfortable working in a fast-paced environment; a self-starter and self-organizing
- Ability to think strategically, and to analyze and interpret market and consumer information
- You must be located near one of our engineering hubs indicated above

Notice to Applicants for Jobs Located in NYC or Remote Jobs Associated With Office in NYC Only
We use Covey as part of our hiring and/or promotional process for jobs in NYC, and certain features may qualify it as an AEDT in NYC. As part of the hiring and/or promotion process, we provide Covey with job requirements and candidate-submitted applications. We began using Covey Scout for Inbound from August 21, 2023, through December 21, 2023, and resumed using Covey Scout for Inbound again on June 29, 2024. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: Covey

About DoorDash
At DoorDash, our mission to empower local economies shapes how our team members move quickly, learn, and iterate in order to make impactful decisions that display empathy for our range of users - from Dashers to merchant partners to consumers. We are a technology and logistics company that started with door-to-door delivery, and we are looking for team members who can help us go from a company that is known for delivering food to a company that people turn to for any and all goods. DoorDash is growing rapidly and changing constantly, which gives our team members the opportunity to share their unique perspectives, solve new challenges, and own their careers.
We're committed to supporting employees' happiness, healthiness, and overall well-being by providing comprehensive benefits and perks.

Our Commitment to Diversity and Inclusion
We're committed to growing and empowering a more inclusive community within our company, industry, and cities. That's why we hire and cultivate diverse teams of people from all backgrounds, experiences, and perspectives. We believe that true innovation happens when everyone has room at the table and the tools, resources, and opportunity to excel. If you need any accommodations, please inform your recruiting contact upon initial connection.

We use Covey as part of our hiring and/or promotional process for jobs in certain locations. The Covey tool has been reviewed by an independent auditor. Results of the audit may be viewed here: https://getcovey.com/nyc-local-law-144

To request a reasonable accommodation under applicable law or an alternate selection process, please inform your recruiting contact upon initial connection.
Posted 1 week ago
6.0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
About the job
A little about us... LTIMindtree is a global technology consulting and digital solutions company that enables enterprises across industries to reimagine business models, accelerate innovation, and maximize growth by harnessing digital technologies. As a digital transformation partner to more than 750 clients, LTIMindtree brings extensive domain and technology expertise to help drive superior competitive differentiation, customer experiences, and business outcomes in a converging world. Powered by nearly 90,000 talented and entrepreneurial professionals across 35 countries, LTIMindtree - a Larsen & Toubro Group company - combines the industry-acclaimed strengths of erstwhile Larsen and Toubro Infotech and Mindtree in solving the most complex business challenges and delivering transformation at scale. For more info, please visit www.ltimindtree.com

Job Details
We are holding a weekend drive for the requirement of a Data Scientist at our Bangalore office.
Date - 14th June
Experience - 4 to 12 Yrs.
Location - LTIMindtree Office, Bangalore Whitefield
Notice Period - Immediate to 60 Days only

Mandatory Skills - Gen-AI, Data Science, Python, RAG, and Cloud (AWS/Azure)
Secondary Skills (any) - Machine Learning, Deep Learning, ChatGPT, LangChain, prompting, vector stores, RAG (a minimal retrieval sketch follows this listing), LLaMA, computer vision, OCR, Transformers, regression, forecasting, classification, hyperparameter tuning, MLOps, inference, model training, model deployment

Generic JD:
- More than 6 years of experience in the Data Engineering, Data Science, and AI/ML domain
- Excellent understanding of machine learning techniques and algorithms, such as GPTs, CNN, RNN, k-NN, Naive Bayes, SVM, Decision Forests, etc.
- Experience using business intelligence tools (e.g., Tableau, Power BI) and data frameworks (e.g., Hadoop)
- Cloud-native skills and experience
- Knowledge of SQL and Python; familiarity with Scala, Java, or C++ is an asset
- An analytical mind, business acumen, and strong math skills (e.g., statistics, algebra)
- Experience with common data science toolkits, such as TensorFlow, Keras, PyTorch, pandas, Microsoft CNTK, NumPy, etc. Deep expertise in at least one of these is highly desirable.
- Experience with NLP, NLG, and large language models like BERT, LLaMA, LaMDA, GPT, BLOOM, PaLM, DALL-E, etc.
- Great communication and presentation skills, and experience working in a fast-paced team culture
- Experience with AI/ML and big data technologies like AWS SageMaker, Azure Cognitive Services, Google Colab, Jupyter Notebook, Hadoop, PySpark, Hive, AWS EMR, etc.
- Experience with NoSQL databases, such as MongoDB, Cassandra, HBase, and vector databases
- Good understanding of applied statistics skills, such as distributions, statistical testing, regression, etc.
- A data-oriented person with an analytical mind and business acumen
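A hedged, dependency-light sketch of the retrieval step at the heart of RAG: given pre-computed document embeddings, rank documents by cosine similarity to a query embedding and assemble a prompt context. Real systems would use an embedding model and a vector store; the random vectors here are stand-ins.

```python
import numpy as np

# Toy corpus with stand-in embeddings (in practice these come from an
# embedding model and live in a vector database).
docs = ["refund policy text...", "shipping times text...", "warranty terms text..."]
rng = np.random.default_rng(0)
doc_vecs = rng.normal(size=(len(docs), 384))

def top_k(query_vec: np.ndarray, k: int = 2) -> list[str]:
    """Return the k documents most cosine-similar to the query embedding."""
    sims = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    return [docs[i] for i in np.argsort(sims)[::-1][:k]]

query_vec = rng.normal(size=384)  # stand-in for an embedded user question
context = "\n".join(top_k(query_vec))

# The retrieved context is then prepended to the LLM prompt.
prompt = f"Answer using only this context:\n{context}\n\nQuestion: ..."
print(prompt)
```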
Posted 1 week ago
5.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
As a Software Developer you will work in a constantly evolving environment, shaped by technological advances and the strategic direction of the organization you work for. You will create, maintain, audit, and improve systems to meet particular needs, often as advised by a systems analyst or architect, testing both hardware and software systems to diagnose and resolve system faults. The role also covers writing diagnostic programs and designing and writing code for operating systems and software to ensure efficiency. When required, you will make recommendations for future developments.

Benefits of Joining Us
- Challenging Projects: Work on cutting-edge projects and solve complex technical problems.
- Career Growth: Advance your career quickly and take on leadership roles.
- Mentorship: Learn from experienced mentors and industry experts.
- Global Opportunities: Work with clients from around the world and gain international experience.
- Competitive Compensation: Receive attractive compensation packages and benefits.

If you're passionate about technology and want to work on challenging projects with a talented team, becoming an Infosys Power Programmer could be a great career choice.

Mandatory Skills
- AWS Glue, AWS Redshift/Spectrum, S3, API Gateway, Athena, Step and Lambda functions
- Experience in the Extract, Transform, Load (ETL) and Extract, Load, Transform (ELT) data integration patterns
- Experience in designing and building data pipelines
- Development experience in one or more object-oriented programming languages, preferably Python

Job Specs
- 5+ years of in-depth, hands-on experience developing, testing, deploying, and debugging Spark jobs using Scala on the Hadoop platform
- In-depth knowledge of Spark Core, working with RDDs, and Spark SQL
- In-depth knowledge of Spark optimization techniques and best practices (a small sketch follows this listing)
- Good knowledge of Scala functional programming: Try, Option, Future, collections
- Good knowledge of Scala OOP: classes, traits, and objects (singleton and companion), case classes
- Good understanding of Scala language features: type system, implicits/givens
- Hands-on experience working in a Hadoop environment (HDFS/Hive), AWS S3, EMR
- Python programming skills
- Working experience with workflow orchestration tools like Airflow and Oozie
- Working with API calls in Scala
- Understanding of and exposure to file formats such as Apache Avro, Parquet, and JSON
- Good to have: knowledge of Protocol Buffers and geospatial data analytics
- Writing test cases using frameworks such as ScalaTest
- In-depth knowledge of build tools such as Gradle and SBT
- Experience using Git: resolving conflicts, working with branches
- Good to have: experience with workflow systems such as Airflow
- Strong programming skills using data structures and algorithms
- Excellent analytical skills
- Good communication skills

Qualification
- 7-10 years in the industry
- BE/B.Tech in CS or equivalent
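A hedged illustration of one of the optimization techniques the specs call out: broadcasting the small side of a join to avoid a shuffle, plus caching a reused result. The role emphasizes Scala; this is shown in PySpark to keep the document's examples in one language, and the paths and column names are placeholders.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.appName("join-optimization").getOrCreate()

facts = spark.read.parquet("s3://bucket/facts/")  # large fact table (placeholder)
dims = spark.read.parquet("s3://bucket/dims/")    # small dimension table (placeholder)

# Broadcast the small table to every executor so the join needs no shuffle
# of the large side - a standard Spark optimization.
joined = facts.join(broadcast(dims), on="dim_id")

# Cache only when the result is reused by multiple actions; otherwise
# caching wastes memory.
joined.cache()
print(joined.count())                     # first action materializes the cache
joined.groupBy("dim_id").count().show()   # second action reuses it
```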
Posted 1 week ago
0 years
0 Lacs
Bengaluru, Karnataka, India
On-site
Job Summary:
We are seeking a skilled and innovative Data Scientist to join our team. The ideal candidate will have hands-on experience in AI/ML model development, big data processing, and working with cloud platforms like AWS. You will be responsible for analyzing large-scale datasets, developing machine learning models, and delivering actionable insights to drive business decisions.

Key Responsibilities:
- Design, develop, and deploy machine learning and AI models to solve complex business problems.
- Work with large datasets using big data technologies (Hadoop, Spark, etc.).
- Build scalable data pipelines and workflows using Python, SQL, and cloud-native tools.
- Implement data preprocessing, feature engineering, and model tuning techniques (a minimal tuning sketch follows this listing).
- Use AWS services (e.g., S3, SageMaker, Lambda, EMR) for data processing, model training, and deployment.
- Collaborate with data engineers, analysts, and business stakeholders to define requirements and deliver solutions.
- Communicate findings clearly through dashboards, reports, or presentations.

Required Skills:
- Strong programming skills in Python and experience with libraries like pandas, NumPy, scikit-learn, TensorFlow, or PyTorch.
- Experience with big data frameworks such as Spark, Hadoop, or Hive.
- Hands-on experience with AWS cloud services related to data science and ML.
- Solid understanding of machine learning algorithms, model evaluation, and tuning.
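For the model-tuning point above, a minimal scikit-learn sketch: cross-validated grid search over a couple of random-forest hyperparameters on synthetic data. The parameter grid and scoring metric are illustrative choices, not recommendations.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split

# Synthetic binary-classification data.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# 3-fold cross-validated search over a tiny hyperparameter grid.
search = GridSearchCV(
    RandomForestClassifier(random_state=0),
    param_grid={"n_estimators": [100, 300], "max_depth": [None, 10]},
    cv=3,
    scoring="roc_auc",
)
search.fit(X_train, y_train)

print("best params:", search.best_params_)
print("held-out AUC:", search.score(X_test, y_test))
```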
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.

Your Role and Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.

Responsibilities
- Experienced in building data pipelines to ingest, process, and transform data from files, streams, and databases.
- Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on cloud data platforms (AWS) or HDFS.
- Experienced in developing efficient software code for multiple use cases leveraging the Spark framework, using Python or Scala and big data technologies, for various use cases built on the platform.
- Experience in developing streaming pipelines.
- Experience working with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark and Kafka.

Preferred Education: Master's Degree

Required Technical and Professional Expertise
- Minimum 4+ years of experience in big data technologies, with extensive data engineering experience in Spark with Python or Scala.
- Minimum 3 years of experience on cloud data platforms on AWS.
- Experience with AWS EMR / AWS Glue / Databricks, AWS Redshift, DynamoDB.
- Good to excellent SQL skills.
- Exposure to streaming solutions and message brokers like Kafka.

Preferred Technical and Professional Experience
- Certification in AWS and Databricks, or Cloudera Spark Certified Developer.
Posted 1 week ago
3.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Description
Amazon Retail Financial Intelligence Systems is seeking a seasoned and talented Senior Data Engineer to join the Fortune Platform team. Fortune is a fast-growing team with a mandate to build tools to automate profit-and-loss forecasting and planning for the Physical Consumer business. We are building the next generation of Business Intelligence solutions using big data technologies such as Apache Spark, Hive/Hadoop, and distributed query engines.

As a Data Engineer at Amazon, you will be working in a large, extremely complex, and dynamic data environment. You should be passionate about working with big data and able to learn new technologies rapidly and evaluate them critically. You should have excellent communication skills and be able to work with business owners to translate business requirements into system solutions. You are a self-starter, comfortable with ambiguity, and used to working in a fast-paced and ever-changing environment. Ideally, you are also experienced with at least one programming language such as Java, C++, Spark/Scala, or Python.

Major Responsibilities
- Work with a team of product and program managers, engineering leaders, and business leaders to build data architectures and platforms to support the business
- Design, develop, and operate highly scalable, high-performance, low-cost, and accurate data pipelines on distributed data processing platforms
- Recognize and adopt best practices in data processing, reporting, and analysis: data integrity, test design, analysis, validation, and documentation
- Keep up to date with big data technologies; evaluate and make decisions around the use of new or existing software products to design the data architecture
- Design, build, and own all the components of a high-volume data warehouse end to end
- Provide end-to-end data engineering support for project lifecycle execution (design, execution, and risk assessment)
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Interface with other technology teams to extract, transform, and load (ETL) data from a wide variety of data sources
- Own the functional and nonfunctional scaling of software systems in your ownership area
- Implement big data solutions for distributed computing

Key job responsibilities
As a DE on our team, you will be responsible for leading the data modelling, database design, and launch of some of the core data pipelines. You will have significant influence on our overall strategy by helping define the data model, driving the database design, and spearheading best practices for delivering high-quality products.

About The Team
Profit Intelligence systems measure and predict true profit (or loss) for each item as a result of a specific shipment to an Amazon customer. Profit Intelligence is all about providing intelligent ways for Amazon to understand profitability across the retail business. What are the hidden factors driving growth or profitability across millions of shipments each day? We compute the profitability of each and every shipment that gets shipped out of Amazon. Guess what - we predict the profitability of future possible shipments too. We are a team of agile, can-do engineers who believe that not only are moon shots possible, but that they can be done before lunch. All it takes is finding new ideas that challenge our preconceived notions of how things should be done. Process and procedure matter less than ideas and the practical work of getting stuff done.
This is a place for exploring the new and taking risks. We push the envelope in using cloud services in AWS as well as the latest in distributed systems, forecasting algorithms, and data mining.

Basic Qualifications
- 3+ years of data engineering experience
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with SQL

Preferred Qualifications
- Experience with AWS technologies like Redshift, S3, AWS Glue, EMR, Kinesis, Firehose, Lambda, and IAM roles and permissions
- Experience with non-relational databases / data stores (object storage, document or key-value stores, graph databases, column-family databases)

Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you're applying in isn't listed, please contact your Recruiting Partner.

Company - ADCI MAA 12 SEZ
Job ID: A3006789
Posted 1 week ago
5.0 years
0 Lacs
Chennai, Tamil Nadu, India
On-site
Greetings from TCS!

TCS is hiring for Big Data (PySpark & Scala)
Location: Chennai
Desired Experience Range: 5+ Years

Must-Have
- PySpark
- Hive
(A minimal PySpark-on-Hive sketch follows this listing.)

Good-to-Have
- Spark
- HBase
- DQ tool
- Agile Scrum experience
- Exposure to data ingestion from disparate sources onto a Big Data platform

Thanks
Anshika
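As a hedged illustration of the PySpark + Hive combination listed as must-have: a job that reads a Hive table, deduplicates it, and writes the result back as a managed table. The database, table, and column names are placeholders.

```python
from pyspark.sql import SparkSession

# enableHiveSupport() lets Spark read and write Hive metastore tables.
spark = (SparkSession.builder
         .appName("hive-dedup")
         .enableHiveSupport()
         .getOrCreate())

# Hypothetical Hive table.
orders = spark.sql("SELECT order_id, amount, order_dt FROM sales.orders")

# Basic cleanup: drop rows without a key, then deduplicate on it.
clean = (orders
         .dropna(subset=["order_id"])
         .dropDuplicates(["order_id"]))

# Persist back to Hive as a managed table for downstream consumers.
clean.write.mode("overwrite").saveAsTable("sales.orders_clean")
```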
Posted 1 week ago
1.0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Description
Business Data Technologies (BDT) makes it easier for teams across Amazon to produce, store, catalog, secure, move, and analyze data at massive scale. Our managed solutions combine standard AWS tooling, open-source products, and custom services to free teams from worrying about the complexities of operating at Amazon scale. This lets BDT customers move beyond the engineering and operational burden associated with managing and scaling platforms, and instead focus on scaling the value they can glean from their data, both for their customers and their teams.

We own one of the biggest data lakes at Amazon, where thousands of Amazon teams can search, share, and store exabytes (EB) of data in a secure and seamless way; using our solutions, teams around the world can schedule and process millions of workloads on a daily basis. We provide enterprise solutions that focus on compliance, security, integrity, and cost efficiency of operating and managing EBs of Amazon data.

Key job responsibilities
Core Responsibilities
- Be hands-on with ETL, building data pipelines to support automated reporting
- Interface with other technology teams to extract, transform, and load data from a wide variety of data sources
- Implement data structures using best practices in data modeling, ETL/ELT processes, SQL, and Redshift
- Model data and metadata for ad-hoc and pre-built reporting
- Interface with business customers, gathering requirements and delivering complete reporting solutions
- Build robust and scalable data integration (ETL) pipelines using SQL, Python, and Spark
- Build and deliver high-quality datasets to support business analysts, data scientists, and customer reporting needs
- Continually improve ongoing reporting and analysis processes, automating or simplifying self-service support for customers
- Participate in strategic and tactical planning discussions

A day in the life
As a Data Engineer, you will be working with cross-functional partners from Science, Product, SDEs, Operations, and Leadership to translate raw data into actionable insights for stakeholders, empowering them to make data-driven decisions. Some of the key activities include:
- Crafting the data flow: Design and build data pipelines, the backbone of our data ecosystem. Ensure the integrity of the data journey by implementing robust data quality checks and monitoring processes.
- Architecting for insights: Translate complex business requirements into efficient data models that optimize data analysis and reporting. Automate data processing tasks to streamline workflows and improve efficiency.
- Becoming a data detective: ensure data availability and performance.

Basic Qualifications
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing, and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)

Preferred Qualifications
- Experience with big data technologies such as Hadoop, Hive, Spark, EMR
- Experience with any ETL tool like Informatica, ODI, SSIS, BODI, Datastage, etc.
- Knowledge of cloud services such as AWS or equivalent

Our inclusive culture empowers Amazonians to deliver the best results for our customers.
If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
Company - ADCI HYD 13 SEZ
Job ID: A3006419
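To give candidates a concrete picture of the "SQL, Python and Spark" pipeline work described above, here is a minimal, hypothetical batch ETL sketch — the bucket paths, schema, and aggregation are illustrative assumptions, not Amazon's actual pipeline:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("etl-clickstream").getOrCreate()

# Extract: raw JSON events from object storage (bucket/path are illustrative).
events = spark.read.json("s3a://example-raw-bucket/clickstream/2024/06/01/")

# Transform: mix the DataFrame API with SQL, as the posting suggests.
events.createOrReplaceTempView("events")
daily = spark.sql("""
    SELECT user_id,
           to_date(event_ts) AS event_date,
           count(*)          AS event_count
    FROM events
    GROUP BY user_id, to_date(event_ts)
""")

# Load: partitioned Parquet for ad-hoc and pre-built reporting.
(daily.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet("s3a://example-curated-bucket/clickstream_daily/"))
```

Partitioning the curated output by date is a common design choice here: it lets downstream reports prune to the days they need instead of scanning the whole dataset.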
Posted 1 week ago
0.0 years
0 Lacs
Bengaluru, Karnataka
On-site
- 1+ years of data engineering experience
- Experience with SQL
- Experience with data modeling, warehousing and building ETL pipelines
- Experience with one or more query languages (e.g., SQL, PL/SQL, DDL, MDX, HiveQL, SparkSQL, Scala)
- Experience with one or more scripting languages (e.g., Python, KornShell)
The Prime Data Engineering & Analytics (PDEA) team is seeking to hire passionate Data Engineers to build and manage the central petabyte-scale data infrastructure supporting the worldwide Prime business operations.
At Amazon Prime, understanding customer data is paramount to our success in providing customers with relevant and enticing benefits such as fast free shipping, instant videos, streaming music, and free Kindle books in the US and international markets. At Amazon you will be working in one of the world's largest and most complex data environments. You will be part of a team that works with the marketing, retail, finance, analytics, machine learning, and technology teams to provide real-time data processing solutions that give Amazon leadership, marketers, and PMs timely, flexible, and structured access to customer insights. The team will be responsible for building this platform end to end using the latest AWS technologies and software development principles.
As a Data Engineer, you will be responsible for leading the architecture, design, and development of the data, metrics, and reporting platform for Prime. You will architect and implement new and automated Business Intelligence solutions, including big data and new analytical capabilities that support our Development Engineers, Analysts, and Retail business stakeholders with timely, actionable data, metrics, and reports, while satisfying scalability, reliability, accuracy, performance, and budget goals and driving automation and operational efficiencies. You will partner with business leaders to drive strategy and prioritize projects and feature sets. You will also write and review business cases and drive the development process from design to release. In addition, you will provide technical leadership and mentoring for a team of highly capable Data Engineers.
Responsibilities
1. Own design and execution of end-to-end projects
2. Own managing WW Prime core services data infrastructure
3. Establish key relationships which span Amazon business units and Business Intelligence teams
4. Implement standardized, automated operational and quality control processes to deliver accurate and timely data and reporting to meet or exceed SLAs
Experience with big data technologies such as Hadoop, Hive, Spark, and EMR
Experience with an ETL tool such as Informatica, ODI, SSIS, BODI, or DataStage
Our inclusive culture empowers Amazonians to deliver the best results for our customers. If you have a disability and need a workplace accommodation or adjustment during the application and hiring process, including support for the interview or onboarding process, please visit https://amazon.jobs/content/en/how-we-hire/accommodations for more information. If the country/region you’re applying in isn’t listed, please contact your Recruiting Partner.
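As a hedged illustration of responsibility 4 — automated quality control that meets SLAs — here is a minimal sketch. The table name, columns, and checks are assumptions for the example, not Amazon's actual process:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("qc-prime-metrics").getOrCreate()

# Table name and checks are illustrative assumptions.
df = spark.table("prime.daily_member_metrics")

stats = df.agg(
    F.count(F.lit(1)).alias("row_count"),
    F.sum(F.col("member_id").isNull().cast("int")).alias("null_member_ids"),
).first()

# Fail loudly so the scheduler can alert before a bad load reaches reports.
assert stats.row_count > 0, "QC failed: table is empty"
assert stats.null_member_ids == 0, f"QC failed: {stats.null_member_ids} null member_ids"
print(f"QC passed: {stats.row_count} rows, no null member_ids")
```

Run as the last step of a load, a check like this turns silent data problems into pipeline failures that can be tracked against an SLA.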
Posted 1 week ago
0 years
0 Lacs
Gurugram, Haryana, India
On-site
Roles And Responsibilities
• Proficiency in building highly scalable ETL and streaming-based data pipelines using Google Cloud Platform (GCP) services and products like BigQuery and Cloud Dataflow
• Proficiency in large-scale data platforms and data processing systems such as Google BigQuery, Amazon Redshift, and Azure Data Lake
• Excellent Python, PySpark, and SQL development and debugging skills; exposure to other Big Data frameworks like Hadoop Hive would be an added advantage
• Experience building systems to retrieve and aggregate data from event-driven messaging frameworks (e.g. RabbitMQ and Pub/Sub)
Secondary Skills: Cloud Bigtable, AI/ML solutions, Compute Engine, Cloud Data Fusion
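Given the posting's GCP focus, a small hedged sketch of querying BigQuery from Python follows; the project, dataset, and table are placeholders, and it assumes application-default credentials are already configured in your environment:

```python
from google.cloud import bigquery

# Assumes application-default credentials for the (placeholder) project.
client = bigquery.Client(project="example-project")

query = """
    SELECT event_date, COUNT(*) AS events
    FROM `example-project.analytics.clickstream`
    GROUP BY event_date
    ORDER BY event_date
"""

# result() blocks until the query job finishes, then yields rows.
for row in client.query(query).result():
    print(row.event_date, row.events)
```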
Posted 1 week ago
8.0 years
0 Lacs
Pune, Maharashtra, India
Remote
Role: Data QA Lead
Experience Required: 8+ Years
Location: India/Remote
Company Overview
At Codvo.ai, software and people transformations go hand-in-hand. We are a global empathy-led technology services company. Product innovation and mature software engineering are part of our core DNA. Respect, Fairness, Growth, Agility, and Inclusiveness are the core values that we aspire to live by each day. We continue to expand our digital strategy, design, architecture, and product management capabilities to offer expertise, outside-the-box thinking, and measurable results.
The Data Quality Analyst is responsible for ensuring the quality, accuracy, and consistency of data within the Customer and Loan Master Data API solution. This role will work closely with data owners, data modelers, and developers to identify and resolve data quality issues.
Key Responsibilities
• Lead and manage end-to-end ETL/data validation activities.
• Design test strategy, plans, and scenarios for source-to-target validation.
• Build automated data validation frameworks (SQL/Python/Great Expectations).
• Integrate tests with CI/CD pipelines (Jenkins, Azure DevOps).
• Perform data integrity, transformation logic, and reconciliation checks.
• Collaborate with Data Engineering, Product, and DevOps teams.
• Drive test metrics reporting, defect triage, and root cause analysis.
• Mentor QA team members and ensure process adherence.
Must-Have Skills
• 8+ years in QA with 4+ years in ETL testing.
• Strong SQL and database testing experience.
• Proficiency with ETL tools (Airbyte, DBT, Informatica, etc.).
• Automation using Python or a similar scripting language.
• Solid understanding of data warehousing, SCD, and deduplication.
• Experience with large datasets and structured/unstructured formats.
Preferred Skills
• Knowledge of data orchestration tools (Prefect, Airflow).
• Familiarity with data quality/observability tools.
• Experience with big data systems (Spark, Hive).
• Hands-on with test data generation (Faker, Mockaroo).
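A bare-bones illustration of the source-to-target reconciliation checks this role owns — the table names and checksum column are invented for the example; a production framework (e.g., Great Expectations, as listed above) would wrap this with reporting and thresholds:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s2t-reconciliation").getOrCreate()

# Source and target table names are illustrative.
src = spark.table("staging.loans")
tgt = spark.table("warehouse.loan_master")

# Check 1: row counts must reconcile after the load.
src_count, tgt_count = src.count(), tgt.count()

# Check 2: an aggregate checksum catches silent value corruption.
src_sum = src.agg(F.sum("principal_amount")).first()[0]
tgt_sum = tgt.agg(F.sum("principal_amount")).first()[0]

failures = []
if src_count != tgt_count:
    failures.append(f"count mismatch: {src_count} vs {tgt_count}")
if src_sum != tgt_sum:
    failures.append(f"checksum mismatch: {src_sum} vs {tgt_sum}")

if failures:
    raise AssertionError("; ".join(failures))
print("Source-to-target reconciliation passed")
```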
Posted 1 week ago
4.0 - 6.0 years
0 Lacs
Noida, Uttar Pradesh, India
On-site
Description
We are looking for a skilled Data Engineer to join our existing team. Candidates will be working on Big Data applications using cutting-edge technologies, including Google Cloud Platform. The position offers first-hand exposure to building data pipelines that process petabyte-scale data, solving complex business problems.
Requirements
Mandatory Skills:
• 4-6 years of hands-on experience in Data Engineering.
• Experience in writing and optimizing SQL queries in Hive/Spark.
• Excellent coding and/or scripting skills in Python.
• Good experience in deploying Spark applications in a Kubernetes cluster.
• Good experience in development, deployment, and troubleshooting of Spark applications.
• Exposure to any cloud environment (AWS/GCP preferred).
Job Responsibilities
• Candidate will be part of an agile team.
• Development/migration of new data pipelines.
• Optimizing/fine-tuning existing workflows.
• Deploying Spark tasks on K8s clusters.
• Bringing new ideas for performance enhancement of data pipelines running on K8s.
What we offer
Culture of caring. At GlobalLogic, we prioritize a culture of caring. Across every region and department, at every level, we consistently put people first. From day one, you’ll experience an inclusive culture of acceptance and belonging, where you’ll have the chance to build meaningful connections with collaborative teammates, supportive managers, and compassionate leaders.
Learning and development. We are committed to your continuous learning and development. You’ll learn and grow daily in an environment with many opportunities to try new things, sharpen your skills, and advance your career at GlobalLogic. With our Career Navigator tool as just one example, GlobalLogic offers a rich array of programs, training curricula, and hands-on opportunities to grow personally and professionally.
Interesting & meaningful work. GlobalLogic is known for engineering impact for and with clients around the world. As part of our team, you’ll have the chance to work on projects that matter. Each is a unique opportunity to engage your curiosity and creative problem-solving skills as you help clients reimagine what’s possible and bring new solutions to market. In the process, you’ll have the privilege of working on some of the most cutting-edge and impactful solutions shaping the world today.
Balance and flexibility. We believe in the importance of balance and flexibility. With many functional career areas, roles, and work arrangements, you can explore ways of achieving the perfect balance between your work and life. Your life extends beyond the office, and we always do our best to help you integrate and balance the best of work and life, having fun along the way!
High-trust organization. We are a high-trust organization where integrity is key. By joining GlobalLogic, you’re placing your trust in a safe, reliable, and ethical global company. Integrity and trust are a cornerstone of our value proposition to our employees and clients. You will find truthfulness, candor, and integrity in everything we do.
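On the Kubernetes requirement above, here is a hypothetical sketch of a small PySpark job together with the kind of spark-submit invocation used to deploy it to a K8s cluster — the API server address, container image, namespace, table, and output path are all assumptions:

```python
# Illustrative spark-submit invocation for a Kubernetes deployment:
#
#   spark-submit \
#     --master k8s://https://<k8s-apiserver>:6443 \
#     --deploy-mode cluster \
#     --name pipeline-demo \
#     --conf spark.kubernetes.namespace=data-eng \
#     --conf spark.kubernetes.container.image=example-registry/spark-py:3.5 \
#     local:///opt/app/pipeline_demo.py

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("pipeline-demo").getOrCreate()

# Placeholder workload: an optimized Hive/Spark SQL aggregation, per the posting.
df = spark.sql("SELECT category, count(*) AS n FROM default.events GROUP BY category")
df.write.mode("overwrite").parquet("/data/out/events_by_category/")

spark.stop()
```

The `local://` scheme tells Spark the application file is already baked into the container image, which is the usual packaging choice for cluster-mode K8s deployments.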
About GlobalLogic
GlobalLogic, a Hitachi Group Company, is a trusted digital engineering partner to the world’s largest and most forward-thinking companies. Since 2000, we’ve been at the forefront of the digital revolution – helping create some of the most innovative and widely used digital products and experiences. Today we continue to collaborate with clients in transforming businesses and redefining industries through intelligent products, platforms, and services.
Posted 1 week ago
12.0 - 15.0 years
35 - 50 Lacs
Hyderabad
Work from Office
Skill: Java, Spark, Kafka
Experience: 10 to 16 years
Location: Hyderabad
As a Data Engineer, you will:
• Support the design and rollout of the data architecture and infrastructure required for optimal extraction, transformation, and loading of data from a wide variety of data sources
• Identify data sources, design and implement data schemas/models, and integrate data to meet the requirements of business stakeholders
• Play an active role in the end-to-end delivery of AI solutions, from ideation and feasibility assessment to data preparation and industrialization
• Work with business, IT, and data stakeholders to support data-related technical issues and data infrastructure needs, and to build the most flexible and scalable data platform
• With a strong focus on DataOps, design, develop, and deploy scalable batch and/or real-time data pipelines
• Design, document, test, and deploy ETL/ELT processes
• Find the right trade-offs between the performance, reliability, scalability, and cost of the data pipelines you implement
• Monitor data processing efficiency and propose solutions for improvements
• Have the discipline to create and maintain comprehensive project documentation
• Build and share knowledge with colleagues and coach junior profiles
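To make the Spark-plus-Kafka combination concrete, here is a minimal Structured Streaming sketch (written in PySpark for consistency with the rest of this page, though the posting leads with Java); the broker, topic, and paths are assumptions:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("kafka-ingest").getOrCreate()

# Requires the spark-sql-kafka connector package on the classpath.
raw = (spark.readStream
       .format("kafka")
       .option("kafka.bootstrap.servers", "broker1:9092")  # illustrative broker
       .option("subscribe", "orders")                      # illustrative topic
       .load())

# Kafka delivers key/value as binary; decode before downstream processing.
orders = raw.select(F.col("value").cast("string").alias("payload"))

query = (orders.writeStream
         .format("parquet")
         .option("path", "/data/streams/orders/")
         .option("checkpointLocation", "/chk/orders/")  # required for fault tolerance
         .start())

query.awaitTermination()
```

The checkpoint location is what makes the pipeline "near real-time and reliable": on restart, Spark resumes from the last committed Kafka offsets rather than reprocessing or dropping data.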
Posted 1 week ago
0 years
0 Lacs
Hyderabad, Telangana, India
On-site
Position Description
Founded in 1976, CGI is among the world's largest independent IT and business consulting services firms. With 94,000 consultants and professionals globally, CGI delivers an end-to-end portfolio of capabilities, from strategic IT and business consulting to systems integration, managed IT and business process services, and intellectual property solutions. CGI works with clients through a local relationship model complemented by a global delivery network that helps clients digitally transform their organizations and accelerate results. CGI Fiscal 2024 reported revenue is CA$14.68 billion, and CGI shares are listed on the TSX (GIB.A) and the NYSE (GIB). Learn more at cgi.com.
Position: Senior Software Engineer
Experience: 4-7 Yrs
Category: Software Development/Engineering
Shift: 1 to 10 PM
Location: BNG/HYD/CHN
Position Id: J0125-0901
Work Type: Hybrid
Employment Type: Full time
Education: Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
Your future duties and responsibilities
We are looking for a talented Data Engineer to join our team. In this role, you will develop, optimize, and maintain scalable applications, and be responsible for building efficient, testable, and reusable code. Your work will involve collaborating with cross-functional teams to deliver high-quality software that meets our clients' needs.
• Write reusable, testable, and efficient code.
• Implement security and data protection solutions.
• Develop and maintain robust and scalable backend systems and APIs using Python.
• Integrate user-facing elements developed by front-end developers with server-side logic.
• Work with various databases (SQL, NoSQL) to ensure efficient data storage and retrieval.
Required Qualifications To Be Successful In This Role
• Programming Languages: Python, PySpark
• Big Data Tech: Databricks, Spark, Hadoop, Hive
• Cloud: AWS
• Database: RDBMS & NoSQL
• Shell Scripting
• Bachelor’s or Master’s degree in Computer Science, Engineering, or a related field.
CGI is an equal opportunity employer. In addition, CGI is committed to providing accommodation for people with disabilities in accordance with provincial legislation. Please let us know if you require reasonable accommodation due to a disability during any aspect of the recruitment process and we will work with you to address your needs.
Together, as owners, let’s turn meaningful insights into action. Life at CGI is rooted in ownership, teamwork, respect and belonging. Here, you’ll reach your full potential because…
You are invited to be an owner from day 1 as we work together to bring our Dream to life. That’s why we call ourselves CGI Partners rather than employees. We benefit from our collective success and actively shape our company’s strategy and direction.
Your work creates value. You’ll develop innovative solutions and build relationships with teammates and clients while accessing global capabilities to scale your ideas, embrace new opportunities, and benefit from expansive industry and technology expertise.
You’ll shape your career by joining a company built to grow and last. You’ll be supported by leaders who care about your health and well-being and provide you with opportunities to deepen your skills and broaden your horizons.
Come join our team—one of the largest IT and business consulting services firms in the world.
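Since the role spans RDBMS, NoSQL, and Spark, here is a hedged sketch of one routine task — pulling a relational table into Spark over JDBC. The connection details and table are placeholders, and the JDBC driver jar must be on the classpath:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("rdbms-extract").getOrCreate()

# Connection details are placeholders; supply real secrets via a vault, not code.
df = (spark.read
      .format("jdbc")
      .option("url", "jdbc:postgresql://db.example.com:5432/sales")
      .option("dbtable", "public.customers")
      .option("user", "etl_user")
      .option("password", "***")
      .option("numPartitions", 4)               # parallelize the extract
      .option("partitionColumn", "customer_id") # numeric column to split on
      .option("lowerBound", 1)
      .option("upperBound", 1000000)
      .load())

df.write.mode("overwrite").parquet("/data/raw/customers/")
```

The partitioning options matter at scale: without them, Spark reads the whole table through a single connection, which defeats the point of a distributed extract.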
Posted 1 week ago
4.0 years
0 Lacs
Kochi, Kerala, India
On-site
Introduction
In this role, you'll work in one of our IBM Consulting Client Innovation Centers (Delivery Centers), where we deliver deep technical and industry expertise to a wide range of public and private sector clients around the world. Our delivery centers offer our clients locally based skills and technical expertise to drive innovation and adoption of new technology.
Your Role And Responsibilities
As a Data Engineer, you will develop, maintain, evaluate, and test big data solutions. You will be involved in the development of data solutions using the Spark framework with Python or Scala on Hadoop and the AWS Cloud Data Platform.
Responsibilities
• Build data pipelines to ingest, process, and transform data from files, streams, and databases
• Process data with Spark, Python, PySpark, Scala, and Hive, HBase, or other NoSQL databases on Cloud Data Platforms (AWS) or HDFS
• Develop efficient software code for multiple use cases leveraging the Spark framework with Python or Scala and Big Data technologies, for various use cases built on the platform
• Develop streaming pipelines
• Work with Hadoop/AWS ecosystem components to implement scalable solutions that meet ever-increasing data volumes, using big data and cloud technologies such as Apache Spark, Kafka, and cloud computing services
Preferred Education
Master's Degree
Required Technical And Professional Expertise
• Minimum 4+ years of experience in Big Data technologies with extensive data engineering experience in Spark with Python or Scala
• Minimum 3 years of experience on Cloud Data Platforms on AWS
• Experience in AWS EMR / AWS Glue / Databricks, Amazon Redshift, DynamoDB
• Good to excellent SQL skills
• Exposure to streaming solutions and message brokers like Kafka
Preferred Technical And Professional Experience
• Certification in AWS and Databricks, or Cloudera-certified Spark developers
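One concrete slice of the AWS stack listed above — triggering a pre-defined Glue job from Python with boto3 and polling it to completion. The job name and region are illustrative assumptions:

```python
import time
import boto3

glue = boto3.client("glue", region_name="ap-south-1")  # region is illustrative

# Kick off a pre-defined Glue job (the name is a placeholder).
run = glue.start_job_run(JobName="curate-orders")
run_id = run["JobRunId"]

# Poll until the run reaches a terminal state.
while True:
    state = glue.get_job_run(JobName="curate-orders", RunId=run_id)["JobRun"]["JobRunState"]
    if state in ("SUCCEEDED", "FAILED", "STOPPED", "TIMEOUT"):
        break
    time.sleep(30)

print(f"Glue run {run_id} finished with state {state}")
```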
Posted 1 week ago
Hive is a popular data warehousing tool used for querying and managing large datasets in distributed storage. In India, the demand for professionals with expertise in Hive is on the rise, with many organizations looking to hire skilled individuals for various roles related to data processing and analysis.
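For readers new to the tool, here is a small taste of what querying a Hive table looks like in practice, shown through PySpark's Hive integration — the table and columns are invented for illustration:

```python
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("hive-query-demo")
         .enableHiveSupport()   # connect to the Hive metastore
         .getOrCreate())

# A typical warehouse-style aggregation over a partitioned Hive table.
result = spark.sql("""
    SELECT region,
           sum(amount) AS total_sales
    FROM sales.transactions          -- hypothetical database.table
    WHERE txn_date >= '2024-01-01'   -- partition pruning on txn_date
    GROUP BY region
    ORDER BY total_sales DESC
""")
result.show()
```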
India's major tech hubs are known for their thriving tech industries and offer numerous opportunities for professionals looking to work with Hive.
The average salary range for Hive professionals in India varies based on experience level. Entry-level positions can expect to earn around INR 4-6 lakhs per annum, while experienced professionals can earn upwards of INR 12-15 lakhs per annum.
Typically, a career in Hive progresses from roles such as Junior Developer or Data Analyst to Senior Developer, Tech Lead, and eventually Architect or Data Engineer. Continuous learning and hands-on experience with Hive are crucial for advancing in this field.
Apart from expertise in Hive, professionals in this field are often expected to have knowledge of SQL, Hadoop, data modeling, ETL processes, and data visualization tools like Tableau or Power BI.
As you explore job opportunities in the field of Hive in India, remember to showcase your expertise and passion for data processing and analysis. Prepare well for interviews by honing your skills and staying updated with the latest trends in the industry. Best of luck in your job search!