IT Architect, Sr II
Bangalore, INDIA

Job Description and Requirements

At Synopsys, we’re at the heart of the innovations that change the way we work and play. Self-driving cars. Artificial Intelligence. The cloud. 5G. The Internet of Things. These breakthroughs are ushering in the Era of Smart Everything. And we’re powering it all with the world’s most advanced technologies for chip design and software security.

If you share our passion for innovation, we want to meet you.

Our Silicon Design & Verification business is all about building high-performance silicon chips faster. We’re the world’s leading provider of solutions for designing and verifying advanced silicon chips.

And we design the next-generation processes and models needed to manufacture those chips. We enable our customers to optimize chips for power, cost, and performance, eliminating months off their project schedules.

We are looking for a talented, energetic individual to join the Synopsys IT organization as a Sr. IT Architect — a global role based in India providing BU-focused engineering support, managing and automating the environment through innovative, modern, scalable solutions.

Acting as a liaison between the IT functional group, business, and R&D teams, you will play the lead role in IT architectural decisions, improve user experience, drive farm optimization to support the growing needs for compute and disk, drive IT cost reduction by architecting and automating the EDA job-scheduling environment, and deliver an analytics framework.

You will translate R&D-specific requirements into requirement specifications and collaborate cross-functionally on joint implementations.

You will leverage the existing monitoring and metrics infrastructure, using APIs to build custom solutions for compute-farm and disk-related services, supporting growth in job volume and disk capacity while keeping job turnaround time (TAT) optimal and wait times low.

To provide a more efficient production environment, hands-on experience with Linux and queuing-system software such as LSF, UGE, SLURM, or equivalent open-source tools is essential. The role requires monitoring, tracking, and managing EDA workloads across multiple projects, analysing job metrics for deep insights, and building analytical capabilities that scale up compute utilization and improve user experience through faster job TAT.
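As a rough illustration of this kind of job-metrics tooling, the sketch below tallies job states from scheduler output. It assumes LSF-style `bjobs -u all -noheader -o 'jobid stat queue'` output; the field layout is an assumption, and UGE or SLURM output would need a different parser:

```python
from collections import Counter

def summarize_jobs(bjobs_output: str) -> Counter:
    """Tally job states (RUN, PEND, ...) from scheduler listing output.

    Assumes one job per line with the state in the second whitespace-separated
    field, as produced by `bjobs -noheader -o 'jobid stat queue'` on LSF.
    """
    counts = Counter()
    for line in bjobs_output.strip().splitlines():
        fields = line.split()
        if len(fields) >= 2:
            counts[fields[1]] += 1  # second field is the job state
    return counts

# Hypothetical sample output used for illustration only.
sample = """\
101 RUN normal
102 PEND normal
103 PEND priority
104 RUN normal
"""
print(dict(summarize_jobs(sample)))
```

A summary like this can feed dashboards or alerting when pending-job counts climb past a queue-specific threshold.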

Periodically review and analyse farm policy configurations for faster job turnaround, automate the job-scheduling process, and develop APIs from the data available in Elasticsearch. Recommend best practices for use-case models based on dynamic project needs, prescribe policies and track violations, and prioritize and dynamically manage compute resources based on different application and project requirements.
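For the Elasticsearch-backed analytics mentioned above, one possible building block is a query constructor like this sketch. The index field names (`queue`, `project`, `wait_time_sec`, `submit_time`) are assumptions standing in for whatever the local job-metrics index actually stores:

```python
def wait_time_agg_query(queue: str, hours: int = 24) -> dict:
    """Build an Elasticsearch aggregation query for average job wait time
    per project over the last `hours` hours.

    Field names are hypothetical; map them to the local job-metrics index.
    """
    return {
        "size": 0,  # aggregations only, no raw hits
        "query": {
            "bool": {
                "filter": [
                    {"term": {"queue": queue}},
                    {"range": {"submit_time": {"gte": f"now-{hours}h"}}},
                ]
            }
        },
        "aggs": {
            "by_project": {
                "terms": {"field": "project", "size": 50},
                "aggs": {"avg_wait": {"avg": {"field": "wait_time_sec"}}},
            }
        },
    }
```

The resulting dict can be passed to an Elasticsearch client's search call and the per-project averages exposed through a small internal API.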

Establish a continuous monitoring process and data visualization for effective server utilization across multiple secured clusters, and propose dynamic resource allocations and reallocations based on project priority and resource utilization levels.

Use and enhance existing tools, identify needs for automation in disk-usage monitoring and alerting, and manage disk space effectively across users and project/product levels.

Review and recommend disk-usage policy, track aged filesystems, and plan reclamation of disk space with automatic clean-up.
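A minimal dry-run sketch of aged-file detection — the first step of such a clean-up — might look like the following. The age threshold and the archive-versus-delete decision would come from the disk-usage policy and are not specified here:

```python
import os
import time

def find_aged_files(root: str, max_age_days: int) -> list:
    """Return paths under `root` not modified within `max_age_days` days.

    Dry-run helper only: a real clean-up would archive or delete these
    after the owning project confirms (policy details are assumptions).
    """
    cutoff = time.time() - max_age_days * 86400
    aged = []
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                if os.path.getmtime(path) < cutoff:
                    aged.append(path)
            except OSError:
                pass  # file vanished mid-scan; skip it
    return aged
```

Run on a scratch area, the returned list can be reported per user or project before any reclamation is scheduled.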

Understand data lifecycle management in order to plan and implement periodic archival procedures.

A background in security helps in managing end-to-end security of the environment, including identity and access management, data security, and knowledge of the privacy laws and regulations that apply to different business situations and circumstances.

Align with organizational processes for capacity planning and budgeting exercises. Knowledge of ML-based tools and innovative methods for developing models from utilization metrics and other data captured from the server farm is valuable.

Understand the global use of the server farms by different R&D users; maintain, enhance, monitor, report on, and improve farm efficiency; and manage SLAs with the IT team to enhance R&D productivity.

Understand organizational standards and information security policies, develop and improve processes around governance, and establish periodic audit protocols for the secured cluster environment.

Knowledge of cloud concepts, DevOps tools, and programming skills is an added advantage — including the popular modern languages of cloud and hybrid-cloud environments (such as Java with XML, Python, or Elixir), plus web services and APIs for implementing automation solutions and developing in-house tools to meet business requirements.

Content presentation and communication skills are essential for frequent engagement with R&D and business stakeholders.

Typically required is a minimum of 8 to 10 years of related experience. Possesses a full understanding of specialization area plus working knowledge of multiple related areas.

Independently resolves a wide range of issues in creative ways on a regular basis. Customarily exercises independent judgment in selecting methods and techniques to obtain solutions.

Being able to work in cross functional teams and organizations, handle ideas diplomatically, and solve problems as a group is essential.

Candidates should continually cultivate their leadership skills.

Position Requirements :

Summary :

  Ultimately, as an architect, the overarching goals are :
  • Streamlining of day-to-day activities, simplify through architecting, automation and analytic framework and improve User Experience
  • Automate process around job scheduler and develop API
  • Providing a more efficient production environment
  • Lowering costs and gaining cost-effectiveness through resource optimization
  • Providing a secure, stable and supportable environment.
  Skills / Knowledge :

  • In-depth understanding of job schedulers such as LSF, UGE, SLURM, or equivalent open-source tools, and of the Linux operating system, including the kernel, memory, processes, threads, cgroups, etc.
  • Automation skills including scripting skills in Shell / Perl / Python
  • Programming language skills : JavaScript, Bash, Ruby, etc.
  • Datastores : MySQL / PostgreSQL / MongoDB, ElasticSearch / Redis / Memcached.
  • Automation using tools such as Packer / Vagrant, Ansible / Puppet, CloudFormation, Docker / Docker Compose / Kubernetes.
  • Cloud : AWS / Azure / OpenStack / GCP.
  • Experience with monitoring and logging tools such as Nagios and the ELK stack.
  Education :

    BTech / MTech / Master’s Degree in Computer Science.

    Inclusion and Diversity are important to us. Synopsys considers all applicants for employment without regard to race, color, religion, national origin, gender, sexual orientation, gender identity, age, military veteran status, or disability.

