4.0 - 8.0 years
8 - 13 Lacs
Hyderabad, Pune, Bengaluru
Hybrid
Role: Technology Lead
No. of years' experience: 5+

Role Summary: As part of the offshore development team, the AWS Developer will be responsible for implementing ingestion and transformation pipelines using PySpark, orchestrating jobs via MWAA, and converting legacy Cloudera jobs to AWS-native services.

Key Responsibilities:
- Write ingestion scripts (batch & stream) to migrate data from on-prem to S3.
- Translate existing HiveQL into SparkSQL/PySpark jobs.
- Configure MWAA DAGs to orchestrate job dependencies.
- Build Iceberg tables with appropriate partitioning and metadata handling.
- Validate job outputs and write unit tests.

Required Skills:
- 3-5 years in data engineering, with strong exposure to AWS.
- Experience in EMR (Spark), S3, PySpark, SQL.
- Working knowledge of Cloudera/HDFS and legacy Hadoop pipelines.
- Prior experience with data lake/lakehouse implementations is a plus.

Mandatory Skills: AWS Developer
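To illustrate the kind of work this posting describes, here is a minimal PySpark sketch of a legacy HiveQL aggregation re-expressed as Spark SQL and written out as a partitioned Iceberg table. It assumes Iceberg and a Glue catalog are already configured on the EMR cluster; the bucket, view, and table names are placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col

# Hypothetical migration job; all paths and names below are illustrative.
spark = (
    SparkSession.builder
    .appName("cloudera-to-s3-migration")
    .getOrCreate()
)

# Read data already landed in S3 by the batch ingestion step.
events = spark.read.parquet("s3://example-raw-bucket/events/")
events.createOrReplaceTempView("raw_events")

# A legacy HiveQL aggregation, translated one-to-one into Spark SQL.
daily = spark.sql("""
    SELECT event_date, event_type, COUNT(*) AS event_count
    FROM raw_events
    GROUP BY event_date, event_type
""")

# Persist the result as an Iceberg table partitioned by event_date.
(daily.writeTo("glue_catalog.analytics.daily_events")
      .partitionedBy(col("event_date"))
      .createOrReplace())
```

In an MWAA setup, a job like this would typically run as one task in a DAG, with upstream ingestion tasks as its dependencies.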
Posted 1 week ago
8.0 - 12.0 years
12 - 18 Lacs
Bengaluru
Work from Office
As a Data Architect, you are required to:
- Design and develop technical solutions that combine disparate information to create meaningful insights for the business, using big-data architectures
- Build and analyze large structured and unstructured databases based on scalable cloud infrastructures
- Develop prototypes and proofs of concept using multiple data sources and big-data technologies
- Process, manage, extract, and cleanse data to apply data analytics in a meaningful way
- Design and develop scalable end-to-end data pipelines for batch and stream processing
- Regularly scan the data-analytics landscape to stay up to date with the latest technologies, techniques, tools, and methods in the field
- Stay curious and enthusiastic about using related technologies to solve problems, and enthuse others to see the benefit in the business domain

Qualification: Bachelor's or Master's in Computer Science & Engineering, or equivalent. A professional degree in Data Engineering/Analytics is desirable.

Experience level: Minimum 8 years in software development, with at least 2-3 years of hands-on experience in big data / data engineering.

Desired Knowledge & Experience:

Data Engineer - Big Data Developer
- Spark: Spark 3.x, RDD/DataFrames/SQL, Batch/Structured Streaming
- Spark internals: Catalyst/Tungsten/Photon
- Databricks: Workflows, SQL Warehouses/Endpoints, DLT, Pipelines, Unity, Autoloader
- IDE: IntelliJ/PyCharm, Git, Azure DevOps, GitHub Copilot
- Test: pytest, Great Expectations
- CI/CD: YAML Azure Pipelines, continuous delivery, acceptance testing
- Big data design: lakehouse/medallion architecture, Parquet/Delta, partitioning, distribution, data skew, compaction
- Languages: Python / functional programming (FP)
- SQL: T-SQL/Spark SQL/HiveQL
- Storage: data lake and big-data storage design

Additionally, it is helpful to know the basics of:
- Data pipelines: ADF/Synapse Pipelines/Oozie/Airflow
- Languages: Scala, Java
- NoSQL: Cosmos, Mongo, Cassandra
- Cubes: SSAS (ROLAP, HOLAP, MOLAP), AAS, Tabular Model
- SQL Server: T-SQL, stored procedures
- Hadoop: HDInsight/MapReduce/HDFS/YARN/Oozie/Hive/HBase/Ambari/Ranger/Atlas/Kafka
- Data catalog: Azure Purview, Apache Atlas, Informatica

Big Data Architect
- Expert in the technologies, languages, and methodologies mentioned under Data Engineer - Big Data Developer
- Mentor: mentors and educates developers in those technologies, languages, and methodologies
- Architecture styles: Lakehouse, Lambda, Kappa, Delta, Data Lake, Data Mesh, Data Fabric, data warehouses (e.g. Data Vault)
- Application architecture: microservices, NoSQL, Kubernetes, cloud-native
- Experience: many years of experience with all kinds of technology across the evolution of data platforms (Data Warehouse -> Hadoop -> Big Data -> Cloud -> Data Mesh)
- Certification: architect certification (e.g. Siemens Certified Software Architect or iSAQB CPSA)

Required Soft Skills & Other Capabilities:
- Excellent communication skills, in order to explain your work to people who don't understand the mechanics behind data analysis
- Great attention to detail and the ability to solve complex business problems
- Drive and the resilience to try new ideas if the first ones don't work
- Good planning and organizational skills
- Collaborative approach to sharing ideas and finding solutions
- Ability to work independently as well as in a global team environment
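As a concrete illustration of the batch/structured-streaming and lakehouse/medallion items listed above, here is a minimal bronze-to-silver streaming hop in PySpark over Delta tables. The paths, schema, and column names are invented for illustration, and the delta-spark package is assumed to be on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import col, to_date

# Illustrative medallion-style hop (bronze -> silver); all paths are placeholders.
spark = SparkSession.builder.appName("bronze-to-silver").getOrCreate()

# Stream new records out of the bronze Delta table as they arrive.
bronze = (
    spark.readStream
    .format("delta")
    .load("/mnt/lake/bronze/orders")
)

# Basic cleansing and a derived partition column for the silver layer.
silver = (
    bronze
    .filter(col("order_id").isNotNull())            # simple data-quality gate
    .withColumn("order_date", to_date("order_ts"))  # derive partition column
)

# Append continuously into the partitioned silver table with checkpointing.
(silver.writeStream
    .format("delta")
    .option("checkpointLocation", "/mnt/lake/_chk/orders_silver")
    .partitionBy("order_date")
    .outputMode("append")
    .start("/mnt/lake/silver/orders"))
```

On Databricks, the same pattern is typically expressed with Autoloader or DLT pipelines; the open-source form above keeps the sketch portable.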
Posted 3 weeks ago
6 - 9 years
8 - 11 Lacs
Chennai, Bengaluru, Hyderabad
Work from Office
As part of DMBM-BI, you will help create and deliver a comprehensive measurement and reporting approach for the Verizon Consumer Group. In this role, you will interact with cross-functional teams throughout Verizon, bringing new experiences to life for our customers. You will support measurement and reporting for cross-functional teams as they plan, build, and launch world-class experiences, and help translate raw data into actionable insights and better experiences for our customers. Your deep knowledge of measurement solutions will help determine the implementation approaches that best meet business needs.

Responsibilities:
- Working closely with the NBx/Pega business teams to deliver reporting stories each release and, where required, build new dashboards in Tableau or Qlik Sense
- Contributing to requirement sessions with key stakeholders and actively participating in grooming sessions with business teams
- Defining new metrics and business KPIs
- Creating wireframes and mockups of reporting dashboards
- Documenting all validated standards and processes to ensure accuracy across the enterprise
- Collaborating with cross-functional teams to resolve NBx proposition anomalies and actively contributing to production defect resolution

What we're looking for:
You are a strong collaborator who can effectively own and prioritize multiple work streams and adapt in sometimes pressured situations. You display initiative and resourcefulness in achieving goals but are comfortable brainstorming and sharing ideas in a team environment. You have excellent communication skills and the ability to speak effectively to internal and external stakeholders, and you can partner across multiple business and technology teams. You should have strong business intelligence and analytics experience in the CX (customer experience) / root-cause analytics area, with attention to detail, adaptability to change and tight deadlines, and a focus on quality. You can mine, extract, transform, and load large data sets, and create concise readouts and analyses based on the actionable insights found in the data.

You'll need to have:
- Bachelor's degree and six or more years of work experience
- Six or more years of relevant work experience
- Experience with SQL and SQL performance tuning
- Experience with Tableau and Qlik Sense
- Experience with data modeling for different data sources in Tableau or Qlik Sense
- Knowledge of Google Suite and database management systems
- Experience with dashboard creation with insightful visualization
- Knowledge of OneJira or any ticketing tool

Even better if you have one or more of the following:
- Experience with third-party reporting tools (e.g., ThoughtSpot, IBM Cognos, Looker)
- Exposure to HiveQL, GCP BigQuery, Teradata, and Oracle databases
- Basic knowledge of programming languages (e.g., VBA/Python)
- Ability to derive insights from data and recommend action
- Knowledge of the end-to-end ETL process

Where you'll be working: In this hybrid role, you'll have a defined work location that includes work from home and assigned office days set by your manager.
Scheduled Weekly Hours: 40
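To make the "mine, extract, transform, and load large data sets, and create concise readouts" requirement concrete, here is a small pandas sketch that derives a KPI extract a Tableau or Qlik Sense dashboard could consume. The file name, column names, and the acceptance-rate KPI itself are hypothetical, chosen only to mirror the NBx proposition reporting described above.

```python
import pandas as pd

# Hypothetical raw export of proposition interactions; schema is invented.
interactions = pd.read_csv("nbx_interactions.csv", parse_dates=["event_ts"])

# A business KPI of the kind this role defines: daily proposition acceptance rate.
daily_kpi = (
    interactions
    .assign(event_date=interactions["event_ts"].dt.date)
    .groupby(["event_date", "proposition"])
    .agg(offers=("offer_id", "count"),
         accepts=("accepted", "sum"))
    .assign(acceptance_rate=lambda d: d["accepts"] / d["offers"])
    .reset_index()
)

# Export a tidy extract that a Tableau or Qlik Sense dashboard can read directly.
daily_kpi.to_csv("daily_acceptance_rate.csv", index=False)
```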
Posted 3 months ago
5 - 8 years
7 - 10 Lacs
Chennai, Pune
Work from Office
Requirements:
- 5+ years of hands-on experience designing, building, and supporting data applications using Spark, Sqoop, and Hive
- Bachelor's or master's degree in Computer Science or a related field
- Strong knowledge of working with large data sets and high-capacity big-data processing platforms
- Strong experience in Unix and shell scripting
- Advanced knowledge of the Hadoop ecosystem and its components
- In-depth knowledge of Hive, shell scripting, Python, and Spark
- Ability to write MapReduce jobs
- Experience using job schedulers like Autosys
- Hands-on experience in HiveQL
- Good knowledge of Hadoop architecture and HDFS
- Experience with Jenkins for continuous integration
- Experience using source code and version control systems like Bitbucket and Git
- Experience with Agile development is good to have

Responsibilities:
- Develop components, application interfaces, and solution enablers while ensuring principal architecture integrity is maintained
- Ensure solutions are well designed, with maintainability, ease of integration, and testing built in from the outset
- Participate in and guide the team in estimating the work necessary to realize a story/requirement through the software delivery lifecycle
- Develop and deliver complex software requirements to accomplish business goals
- Ensure that software is developed to meet functional, non-functional, and compliance requirements
- Code solutions, unit test, and ensure the solution can be integrated successfully into the overall application/system with clear, robust, and well-tested interfaces

Required Skills: Hadoop, Hive, HDFS, Spark, Python, Unix
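A short sketch of the Spark-on-Hive batch style this role describes, assuming a Hive metastore is reachable from the cluster; the database and table names are placeholders, not part of the posting.

```python
from pyspark.sql import SparkSession

# Illustrative batch aggregation job; finance_db and its tables are invented names.
spark = (
    SparkSession.builder
    .appName("hive-batch-aggregation")
    .enableHiveSupport()   # lets Spark SQL resolve tables from the Hive metastore
    .getOrCreate()
)

# HiveQL of this shape runs unchanged through Spark SQL once Hive support is enabled.
summary = spark.sql("""
    SELECT txn_date, channel, SUM(amount) AS total_amount
    FROM finance_db.transactions
    WHERE txn_date >= date_sub(current_date(), 7)
    GROUP BY txn_date, channel
""")

# Persist results back to the warehouse for downstream jobs scheduled via Autosys.
summary.write.mode("overwrite").saveAsTable("finance_db.weekly_channel_summary")
```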
Posted 3 months ago