8.0 - 10.0 years
0 Lacs
Hyderabad, Telangana, India
Remote
Req ID: 336184

NTT DATA strives to hire exceptional, innovative and passionate individuals who want to grow with us. If you want to be part of an inclusive, adaptable, and forward-thinking organization, apply now. We are currently seeking a Software Development Senior Analyst to join our team in Hyderabad, Telangana (IN-TG), India (IN).

Job Description:
This position is focused on building efficient and robust data ingestion solutions within a cloud-based environment, specifically leveraging Azure Databricks. The role entails:
- Designing and implementing data ingestion pipelines: You will be responsible for architecting and developing data pipelines that efficiently bring in data from various sources (such as databases, flat files, APIs, or streaming data) into the organization's Azure Databricks platform. These pipelines are expected to be highly performant, scalable, and reliable, meeting the demands of large-scale data processing.
- Developing scalable and reusable frameworks: You will create frameworks and tools that enable easy ingestion of different types of data (structured, semi-structured, unstructured) and formats (CSV, JSON, Parquet, etc.). These frameworks should promote code reuse, maintainability, and adaptability to new data sources, reducing repetitive development effort and ensuring consistency across data ingestion processes. (A minimal, illustrative PySpark sketch follows this posting.)

Skills Required:
Necessary:
- Azure Databricks (8+ years): In-depth, hands-on experience with Azure Databricks, including building and orchestrating Spark-based data pipelines, managing clusters, and working with notebooks and workflows.
- Apache PySpark: Proficiency in PySpark is essential for developing distributed data processing solutions within Databricks, enabling large-scale data transformation and ingestion tasks.

Nice to Have:
- Azure Data Factory: Familiarity with Azure Data Factory is beneficial, as it is often used for orchestrating and scheduling data workflows across Azure services.
- ADLS (Azure Data Lake Storage) / Key Vault: Experience with Azure Data Lake Storage is valuable for handling large volumes of raw and processed data. Knowledge of Azure Key Vault is advantageous for securely managing the secrets and credentials required by data pipelines.

About NTT DATA
NTT DATA is a $30 billion trusted global innovator of business and technology services. We serve 75% of the Fortune Global 100 and are committed to helping clients innovate, optimize and transform for long-term success. As a Global Top Employer, we have diverse experts in more than 50 countries and a robust partner ecosystem of established and start-up companies. Our services include business and technology consulting, data and artificial intelligence, industry solutions, as well as the development, implementation and management of applications, infrastructure and connectivity. We are one of the leading providers of digital and AI infrastructure in the world. NTT DATA is a part of NTT Group, which invests over $3.6 billion each year in R&D to help organizations and society move confidently and sustainably into the digital future. Visit us at us.nttdata.com

Whenever possible, we hire locally to NTT DATA offices or client sites. This ensures we can provide timely and effective support tailored to each client's needs. While many positions offer remote or hybrid work options, these arrangements are subject to change based on client requirements.
For employees near an NTT DATA office or client site, in-office attendance may be required for meetings or events, depending on business needs. At NTT DATA, we are committed to staying flexible and meeting the evolving needs of both our clients and employees.

NTT DATA recruiters will never ask for payment or banking information and will only use @nttdata.com and @talent.nttdataservices.com email addresses. If you are requested to provide payment or disclose banking information, please submit a contact us form: https://us.nttdata.com/en/contact-us

NTT DATA endeavors to make https://us.nttdata.com accessible to any and all users. If you would like to contact us regarding the accessibility of our website or need assistance completing the application process, please contact us at https://us.nttdata.com/en/contact-us . This contact information is for accommodation requests only and cannot be used to inquire about the status of applications.

NTT DATA is an equal opportunity employer. Qualified applicants will receive consideration for employment without regard to race, color, religion, sex, sexual orientation, gender identity, national origin, disability or protected veteran status. For our EEO Policy Statement, please click here. If you'd like more information on your EEO rights under the law, please click here. For Pay Transparency information, please click here.
Posted 1 week ago
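The Databricks posting above calls for reusable, format-agnostic ingestion frameworks. The following is only a minimal sketch of what such a helper could look like in PySpark, not NTT DATA's actual framework; the paths, table names, and default reader options are hypothetical.

```python
from typing import Optional

from pyspark.sql import SparkSession, DataFrame

# On Azure Databricks a SparkSession named `spark` already exists in
# notebooks; getOrCreate() simply reuses it there.
spark = SparkSession.builder.appName("ingestion-sketch").getOrCreate()

# Illustrative per-format reader defaults; callers can override them.
_DEFAULT_OPTIONS = {
    "csv": {"header": "true", "inferSchema": "true"},
    "json": {"multiLine": "true"},
    "parquet": {},
}

def ingest(source_path: str, fmt: str, target_table: str,
           options: Optional[dict] = None) -> DataFrame:
    """Read a source of the given format and append it to a managed table.

    source_path, fmt, target_table and the option defaults are placeholder
    values for illustration only.
    """
    if fmt not in _DEFAULT_OPTIONS:
        raise ValueError(f"Unsupported format: {fmt}")

    reader_options = {**_DEFAULT_OPTIONS[fmt], **(options or {})}
    df = spark.read.format(fmt).options(**reader_options).load(source_path)

    # On Databricks the default managed-table format is Delta; on plain
    # Spark this falls back to the session's default data source.
    df.write.mode("append").saveAsTable(target_table)
    return df

# Hypothetical usage: land a CSV drop into a bronze table.
# ingest("/mnt/raw/orders/2024-06-01.csv", "csv", "bronze.orders")
```

Keeping per-format defaults in one dictionary is one way to let new source types be added without duplicating reader code; a production framework would also need to handle schemas, bad records, and incremental loads.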
3.0 - 7.0 years
0 Lacs
Chennai, Tamil Nadu
On-site
As a Software Developer at FIS, you will have the opportunity to work on challenging and relevant issues in financial services and technology. Our team values curiosity, motivation, and forward thinking, fostering an environment that is open, collaborative, entrepreneurial, passionate, and fun.

In today's competitive private equity market, firms are under pressure to deliver superior returns while meeting stringent reporting requirements and increasing demands for information. As part of our team, you will play a crucial role in developing core versions of software applications for external clients. This includes identifying client purchasing requirements and technical specifications, collaborating with engineering groups on design changes, and providing training and support to clients on system applications.

Key Responsibilities:
- Develop core versions of software applications for sale to external clients
- Identify client purchasing requirements and technical specifications
- Collaborate with engineering groups on design changes
- Train and communicate with clients on system applications

Skills Required:
- Proficiency in ETL tools such as Informatica PowerCenter
- Awareness of Informatica Intelligent Cloud Services (IICS)
- Extensive SQL skills
- Familiarity with platform tools such as SnapLogic, Python, and Snowflake
- Experience with Autosys scheduling, job dependencies, and alerting
- Java skills for batch processing
- AWS Glue and Apache PySpark are an additional advantage (an illustrative Spark SQL sketch follows this posting)

Qualifications:
- BCA/BSc/BE/ME/B.Tech/M.Tech in Information Technology or Computer Science

What We Offer:
- A multifaceted job with high responsibility and various opportunities for professional and personal development
- Competitive salary and benefits
- Career development tools, resources, and opportunities

Privacy Statement: FIS is dedicated to safeguarding the privacy and security of all personal information processed to provide services to clients. For more details on how FIS protects personal information online, refer to the Online Privacy Notice.

Sourcing Model: Recruitment at FIS primarily follows a direct sourcing model. We do not accept resumes from recruitment agencies that are not on our preferred supplier list. FIS is not responsible for any fees related to resumes submitted through non-preferred recruitment agencies.
Posted 2 weeks ago
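The FIS posting above pairs extensive SQL skills with PySpark as an added advantage. As a rough, generic illustration (the fund and column names are invented, not FIS data structures), the same aggregation logic an ETL tool might run can be expressed as plain SQL over a Spark temporary view:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("sql-batch-sketch").getOrCreate()

# Hypothetical trade records standing in for a client feed.
trades = spark.createDataFrame(
    [("FUND_A", "2024-03-31", 1200.50),
     ("FUND_A", "2024-03-31", -300.25),
     ("FUND_B", "2024-03-31", 980.00)],
    ["fund_id", "as_of_date", "amount"],
)

# A temp view lets batch logic be written in SQL and still run on Spark,
# which is one way SQL-heavy ETL jobs get ported to a Spark engine.
trades.createOrReplaceTempView("trades")

summary = spark.sql("""
    SELECT fund_id, as_of_date, SUM(amount) AS net_amount
    FROM trades
    GROUP BY fund_id, as_of_date
""")
summary.show()
```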
5.0 - 10.0 years
0 - 0 Lacs
Chennai
Remote
Job Title: Data Engineer - PySpark & AWS
Location: Chennai
Employment Type: Full-Time with Artech
Experience Level: 4-10 years

About the Role:
We are seeking a highly skilled Data Engineer with strong expertise in PySpark and AWS to join our growing data team. In this role, you will be responsible for building, optimizing, and maintaining data pipelines and ETL workflows on the cloud, enabling large-scale data processing and analytics. You will work closely with data scientists, analysts, and business stakeholders to ensure data is accessible, accurate, and reliable for advanced analytics and reporting.

Key Responsibilities:
- Design, build, and maintain scalable and efficient data pipelines using PySpark and Apache Spark.
- Develop and manage ETL/ELT workflows to ingest data from multiple structured and unstructured sources.
- Implement data transformation, cleansing, validation, and aggregation logic (an illustrative sketch follows this posting).
- Work with AWS cloud services such as S3, Glue, EMR, Lambda, Redshift, Athena, and CloudWatch.
- Monitor data pipelines for performance, reliability, and data quality.
- Collaborate with cross-functional teams to understand business data needs and translate them into technical solutions.
- Automate data engineering tasks and infrastructure using tools like Terraform or CloudFormation (optional).
- Maintain and document data architecture, job logic, and operational processes.

Required Skills:
- 4+ years of experience as a Data Engineer or in a similar role.
- Strong hands-on experience with PySpark and Apache Spark for distributed data processing.
- Proficiency in Python programming for data manipulation and automation.
- Solid understanding of AWS services for data engineering: S3, Glue, EMR, Redshift, Lambda, Athena, CloudWatch.
- Experience with SQL and relational databases (e.g., PostgreSQL, MySQL).
- Knowledge of data modeling, warehousing, and partitioning strategies.
- Experience with version control (Git) and CI/CD practices.

Nice to Have:
- Experience with workflow orchestration tools (e.g., Airflow, Step Functions).
- Familiarity with Docker/Kubernetes for containerized deployments.
- Exposure to NoSQL databases (DynamoDB, MongoDB).
- Experience with Terraform or CloudFormation for infrastructure automation.
- Knowledge of Delta Lake and data lake architecture best practices.

Educational Qualifications:
- Bachelor's or Master's degree in Computer Science, Information Technology, Engineering, or a related field.
Posted 1 month ago
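The Artech role above centers on PySpark pipelines that ingest from AWS sources, apply cleansing and validation, and write partitioned output. Below is a generic sketch under assumed bucket paths and column names, not the team's actual pipeline; reading s3a:// paths also assumes the hadoop-aws connector and AWS credentials are configured on the cluster.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.appName("s3-etl-sketch").getOrCreate()

# Hypothetical locations; these buckets and the event schema are
# placeholders for illustration only.
SOURCE = "s3a://example-raw-bucket/events/"
TARGET = "s3a://example-curated-bucket/events_clean/"

raw = spark.read.json(SOURCE)

# Basic cleansing and validation: drop exact duplicates, require the key
# column, and derive a partition column from the event timestamp.
clean = (
    raw.dropDuplicates()
       .filter(F.col("event_id").isNotNull())
       .withColumn("event_date", F.to_date("event_time"))
)

# Partitioning by date keeps downstream Athena or Redshift Spectrum scans
# selective, in line with the partitioning strategies the posting mentions.
(clean.write
      .mode("overwrite")
      .partitionBy("event_date")
      .parquet(TARGET))
```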